Updates from: 06/03/2022 01:11:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Mfa Nps Extension Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-advanced.md
Previously updated : 07/11/2018 Last updated : 06/01/2022
The Network Policy Server (NPS) extension extends your cloud-based Azure AD Mult
Since the NPS extension connects to both your on-premises and cloud directories, you might encounter an issue where your on-premises user principal names (UPNs) don't match the names in the cloud. To solve this problem, use alternate login IDs.
-Within the NPS extension, you can designate an Active Directory attribute to be used in place of the UPN for Azure AD Multi-Factor Authentication. This enables you to protect your on-premises resources with two-step verification without modifying your on-premises UPNs.
+Within the NPS extension, you can designate an Active Directory attribute to be used as the UPN for Azure AD Multi-Factor Authentication. This enables you to protect your on-premises resources with two-step verification without modifying your on-premises UPNs.
To configure alternate login IDs, go to `HKLM\SOFTWARE\Microsoft\AzureMfa` and edit the following registry values: | Name | Type | Default value | Description | | - | - | - | -- |
-| LDAP_ALTERNATE_LOGINID_ATTRIBUTE | string | Empty | Designate the name of Active Directory attribute that you want to use instead of the UPN. This attribute is used as the AlternateLoginId attribute. If this registry value is set to a [valid Active Directory attribute](/windows/win32/adschema/attributes-all) (for example, mail or displayName), then the attribute's value is used in place of the user's UPN for authentication. If this registry value is empty or not configured, then AlternateLoginId is disabled and the user's UPN is used for authentication. |
+| LDAP_ALTERNATE_LOGINID_ATTRIBUTE | string | Empty | Designate the name of Active Directory attribute that you want to use as the UPN. This attribute is used as the AlternateLoginId attribute. If this registry value is set to a [valid Active Directory attribute](/windows/win32/adschema/attributes-all) (for example, mail or displayName), then the attribute's value is used as the user's UPN for authentication. If this registry value is empty or not configured, then AlternateLoginId is disabled and the user's UPN is used for authentication. |
| LDAP_FORCE_GLOBAL_CATALOG | boolean | False | Use this flag to force the use of Global Catalog for LDAP searches when looking up AlternateLoginId. Configure a domain controller as a Global Catalog, add the AlternateLoginId attribute to the Global Catalog, and then enable this flag. <br><br> If LDAP_LOOKUP_FORESTS is configured (not empty), **this flag is enforced as true**, regardless of the value of the registry setting. In this case, the NPS extension requires the Global Catalog to be configured with the AlternateLoginId attribute for each forest. | | LDAP_LOOKUP_FORESTS | string | Empty | Provide a semi-colon separated list of forests to search. For example, *contoso.com;foobar.com*. If this registry value is configured, the NPS extension iteratively searches all the forests in the order in which they were listed, and returns the first successful AlternateLoginId value. If this registry value is not configured, the AlternateLoginId lookup is confined to the current domain.|
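As a rough, editor-added illustration (not from the article), the two string values described above could also be scripted. The sketch below shells out to the built-in Windows `reg.exe` tool from Node; run it elevated on the NPS server, and treat the attribute name `mail` and the forest list as example values only.

```JavaScript
// Minimal sketch: set the NPS extension's alternate login ID registry values by
// shelling out to reg.exe. Values shown are examples, not recommendations.
const { execFileSync } = require('child_process');

const KEY = 'HKLM\\SOFTWARE\\Microsoft\\AzureMfa';

function setRegString(name, value) {
  // /f overwrites an existing value without prompting
  execFileSync('reg.exe', ['add', KEY, '/v', name, '/t', 'REG_SZ', '/d', value, '/f']);
}

setRegString('LDAP_ALTERNATE_LOGINID_ATTRIBUTE', 'mail');      // use mail in place of the UPN
setRegString('LDAP_LOOKUP_FORESTS', 'contoso.com;foobar.com'); // forests searched in listed order
```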
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
Do not use auth context where the app itself is going to be a target of Conditio
- [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context-preview) - [authenticationContextClassReference resource type - MS Graph](/graph/api/conditionalaccessroot-list-authenticationcontextclassreferences) - [Claims challenge, claims request, and client capabilities in the Microsoft identity platform](claims-challenge.md)-- [Using authentication context with Microsoft Information Protection and SharePoint](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
+- [Using authentication context with Microsoft Purview Information Protection and SharePoint](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)
active-directory Scenario Daemon Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-call-api.md
Title: Call a web API from a daemon app- description: Learn how to build a daemon app that calls a web API.
active-directory Scenario Daemon Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-daemon-production.md
Title: Move a daemon app that calls web APIs to production- description: Learn how to move a daemon app that calls web APIs to production
active-directory Scenario Desktop App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-configuration.md
Title: Configure desktop apps that call web APIs- description: Learn how to configure the code of a desktop app that calls web APIs
if let application = try? MSALPublicClientApplication(configuration: config) { /
# [Node.js](#tab/nodejs)
-Configuration parameters can be loaded from many sources, like a JSON file or from environment variables. Below, an *.env* file is used.
+Configuration parameters can be loaded from many sources, like a JavaScript file or from environment variables. Below, an *authConfig.js* file is used.
-```Text
-# Credentials
-CLIENT_ID=Enter_the_Application_Id_Here
-TENANT_ID=Enter_the_Tenant_Info_Here
-# Configuration
-REDIRECT_URI=msal://redirect
-
-# Endpoints
-AAD_ENDPOINT_HOST=Enter_the_Cloud_Instance_Id_Here
-GRAPH_ENDPOINT_HOST=Enter_the_Graph_Endpoint_Here
-
-# RESOURCES
-GRAPH_ME_ENDPOINT=v1.0/me
-GRAPH_MAIL_ENDPOINT=v1.0/me/messages
-
-# SCOPES
-GRAPH_SCOPES=User.Read Mail.Read
-```
-
-Load the *.env* file to environment variables. MSAL Node can be initialized minimally as below. See the available [configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md).
+Import the configuration object from the *authConfig.js* file. MSAL Node can be initialized minimally as below. See the available [configuration options](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md).
```JavaScript const { PublicClientApplication } = require('@azure/msal-node');
+const { msalConfig } = require('./authConfig')
-const MSAL_CONFIG = {
- auth: {
- clientId: process.env.CLIENT_ID,
- authority: `${process.env.AAD_ENDPOINT_HOST}${process.env.TENANT_ID}`,
- redirectUri: process.env.REDIRECT_URI,
- },
- system: {
- loggerOptions: {
- loggerCallback(loglevel, message, containsPii) {
- console.log(message);
- },
- piiLoggingEnabled: false,
- logLevel: LogLevel.Verbose,
- }
- }
-};
-
-clientApplication = new PublicClientApplication(MSAL_CONFIG);
+/**
+* Initialize a public client application. For more information, visit:
+* https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-public-client-application.md
+*/
+clientApplication = new PublicClientApplication(msalConfig);
``` # [Python](#tab/python)
active-directory Scenario Desktop App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-app-registration.md
Title: Register desktop apps that call web APIs- description: Learn how to build a desktop app that calls web APIs (app registration)
Specify the redirect URI for your app by [configuring the platform settings](qui
> As a security best practice, we recommend explicitly setting `https://login.microsoftonline.com/common/oauth2/nativeclient` or `http://localhost` as the redirect URI. Some authentication libraries like MSAL.NET use a default value of `urn:ietf:wg:oauth:2.0:oob` when no other redirect URI is specified, which is not recommended. This default will be updated as a breaking change in the next major release. - If you build a native Objective-C or Swift app for macOS, register the redirect URI based on your application's bundle identifier in the following format: `msauth.<your.app.bundle.id>://auth`. Replace `<your.app.bundle.id>` with your application's bundle identifier.-- If you build a Node.js Electron app, use a custom file protocol instead of a regular web (https://) redirect URI in order to handle the redirection step of the authorization flow, for instance `msal://redirect`. The custom file protocol name shouldn't be obvious to guess and should follow the suggestions in the [OAuth2.0 specification for Native Apps](https://tools.ietf.org/html/rfc8252#section-7.1).
+- If you build a Node.js Electron app, use a custom string protocol instead of a regular web (https://) redirect URI in order to handle the redirection step of the authorization flow, for instance `msal{Your_Application/Client_Id}://auth` (e.g. *msalfa29b4c9-7675-4b61-8a0a-bf7b2b4fda91://auth*). The custom string protocol name shouldn't be obvious to guess and should follow the suggestions in the [OAuth2.0 specification for Native Apps](https://tools.ietf.org/html/rfc8252#section-7.1).
- If your app uses only integrated Windows authentication or a username and a password, you don't need to register a redirect URI for your application. These flows do a round trip to the Microsoft identity platform v2.0 endpoint. Your application won't be called back on any specific URI. - To distinguish [device code flow](scenario-desktop-acquire-token-device-code-flow.md), [integrated Windows authentication](scenario-desktop-acquire-token-integrated-windows-authentication.md), and a [username and a password](scenario-desktop-acquire-token-username-password.md) from a confidential client application using a client credential flow used in [daemon applications](scenario-daemon-overview.md), none of which requires a redirect URI, configure it as a public client application. To achieve this configuration:
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-call-api.md
Title: Call web APIs from a desktop app- description: Learn how to build a desktop app that calls web APIs
active-directory Scenario Desktop Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-production.md
Title: Move desktop app calling web APIs to production- description: Learn how to move a desktop app that calls web APIs to production
active-directory Tutorial V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-nodejs-desktop.md
In this tutorial, you build an Electron desktop application that signs in users
Follow the steps in this tutorial to: > [!div class="checklist"]
+>
> - Register the application in the Azure portal > - Create an Electron desktop app project > - Add authentication logic to your app
Use the following settings for your app registration:
- Name: `ElectronDesktopApp` (suggested) - Supported account types: **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)** - Platform type: **Mobile and desktop applications**-- Redirect URI: `msal://redirect`
+- Redirect URI: `msal{Your_Application/Client_Id}://auth`
## Create the project
Create a folder to host your application, for example *ElectronDesktopApp*.
```console npm init -y npm install --save @azure/msal-node axios bootstrap dotenv jquery popper.js
- npm install --save-dev babel electron@10.1.6 webpack
+ npm install --save-dev babel electron@18.2.3 webpack
``` 2. Then, create a folder named *App*. Inside this folder, create a file named *index.html* that will serve as the UI. Add the following code there:
- ```html
- <!DOCTYPE html>
- <html lang="en">
-
- <head>
- <meta charset="UTF-8">
- <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no">
- <meta http-equiv="Content-Security-Policy" content="script-src 'self' 'unsafe-inline';" />
- <title>MSAL Node Electron Sample App</title>
-
- <!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="../node_modules/bootstrap/dist/css/bootstrap.min.css">
-
- <link rel="SHORTCUT ICON" href="https://c.s-microsoft.com/favicon.ico?v2" type="image/x-icon">
- </head>
-
- <body>
- <nav class="navbar navbar-expand-lg navbar-dark bg-primary">
- <a class="navbar-brand">Microsoft identity platform</a>
- <div class="btn-group ml-auto dropleft">
- <button type="button" id="signIn" class="btn btn-secondary" aria-expanded="false">
- Sign in
- </button>
- <button type="button" id="signOut" class="btn btn-success" hidden aria-expanded="false">
- Sign out
- </button>
- </div>
- </nav>
- <br>
- <h5 class="card-header text-center">Electron sample app calling MS Graph API using MSAL Node</h5>
- <br>
- <div class="row" style="margin:auto">
- <div id="cardDiv" class="col-md-3" style="display:none">
- <div class="card text-center">
- <div class="card-body">
- <h5 class="card-title" id="WelcomeMessage">Please sign-in to see your profile and read your mails
- </h5>
- <div id="profileDiv"></div>
- <br>
- <br>
- <button class="btn btn-primary" id="seeProfile">See Profile</button>
- <br>
- <br>
- <button class="btn btn-primary" id="readMail">Read Mails</button>
- </div>
- </div>
- </div>
- <br>
- <br>
- <div class="col-md-4">
- <div class="list-group" id="list-tab" role="tablist">
- </div>
- </div>
- <div class="col-md-5">
- <div class="tab-content" id="nav-tabContent">
- </div>
- </div>
- </div>
- <br>
- <br>
-
- <script>
- window.jQuery = window.$ = require('jquery');
- require("./renderer.js");
- </script>
-
- <!-- importing bootstrap.js and supporting js libraries -->
- <script src="../node_modules/jquery/dist/jquery.js"></script>
- <script src="../node_modules/popper.js/dist/umd/popper.js"></script>
- <script src="../node_modules/bootstrap/dist/js/bootstrap.js"></script>
- </body>
-
- </html>
- ```
+ :::code language="html" source="~/ms-identity-JavaScript-nodejs-desktop/App/index.html":::
3. Next, create file named *main.js* and add the following code:
- ```JavaScript
- require('dotenv').config()
-
- const path = require('path');
- const { app, ipcMain, BrowserWindow } = require('electron');
- const { IPC_MESSAGES } = require('./constants');
-
- const { callEndpointWithToken } = require('./fetch');
- const AuthProvider = require('./AuthProvider');
-
- const authProvider = new AuthProvider();
- let mainWindow;
-
- function createWindow () {
- mainWindow = new BrowserWindow({
- width: 800,
- height: 600,
- webPreferences: {
- nodeIntegration: true,
- contextIsolation: false
- }
- });
-
- mainWindow.loadFile(path.join(__dirname, './index.html'));
- };
-
- app.on('ready', () => {
- createWindow();
- });
-
- app.on('window-all-closed', () => {
- app.quit();
- });
-
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/main.js":::
- // Event handlers
- ipcMain.on(IPC_MESSAGES.LOGIN, async() => {
- const account = await authProvider.login(mainWindow);
-
- await mainWindow.loadFile(path.join(__dirname, './index.html'));
-
- mainWindow.webContents.send(IPC_MESSAGES.SHOW_WELCOME_MESSAGE, account);
- });
-
- ipcMain.on(IPC_MESSAGES.LOGOUT, async() => {
- await authProvider.logout();
- await mainWindow.loadFile(path.join(__dirname, './index.html'));
- });
-
- ipcMain.on(IPC_MESSAGES.GET_PROFILE, async() => {
-
- const tokenRequest = {
- scopes: ['User.Read'],
- };
-
- const token = await authProvider.getToken(mainWindow, tokenRequest);
- const account = authProvider.account
+In the code snippet above, we initialize an Electron main window object and create some event handlers for interactions with the Electron window. We also import configuration parameters, instantiate the *AuthProvider* class for handling sign-in, sign-out, and token acquisition, and call the Microsoft Graph API.
- await mainWindow.loadFile(path.join(__dirname, './index.html'));
+4. In the same folder (*App*), create another file named *renderer.js* and add the following code:
- const graphResponse = await callEndpointWithToken(`${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_ME_ENDPOINT}`, token);
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/renderer.js":::
- mainWindow.webContents.send(IPC_MESSAGES.SHOW_WELCOME_MESSAGE, account);
- mainWindow.webContents.send(IPC_MESSAGES.SET_PROFILE, graphResponse);
- });
+The renderer methods are exposed by the preload script found in the *preload.js* file to give the renderer access to the `Node API` in a secure and controlled way.
- ipcMain.on(IPC_MESSAGES.GET_MAIL, async() => {
+5. Then, create a new file *preload.js* and add the following code:
- const tokenRequest = {
- scopes: ['Mail.Read'],
- };
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/preload.js":::
- const token = await authProvider.getToken(mainWindow, tokenRequest);
- const account = authProvider.account;
+This preload script exposes renderer methods that give the renderer process controlled access to some `Node APIs` through IPC channels that have been configured for communication between the main and renderer processes.
- await mainWindow.loadFile(path.join(__dirname, './index.html'));
+6. Next, create *UIManager.js* class inside the *App* folder and add the following code:
- const graphResponse = await callEndpointWithToken(`${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_MAIL_ENDPOINT}`, token);
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/UIManager.js":::
- mainWindow.webContents.send(IPC_MESSAGES.SHOW_WELCOME_MESSAGE, account);
- mainWindow.webContents.send(IPC_MESSAGES.SET_MAIL, graphResponse);
- });
- ```
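Because the new *renderer.js* and *preload.js* are brought in via code includes above, here is a minimal, hedged sketch of the preload pattern described in steps 4 and 5. The exposed object name (`renderer`) and its method names are illustrative assumptions, not the sample's exact API; the IPC channel constants mirror the removed *constants.js* shown later in this diff.

```JavaScript
// Hedged sketch of a preload script: expose a narrow API to the renderer with
// contextBridge instead of enabling nodeIntegration. Names below are illustrative.
const { contextBridge, ipcRenderer } = require('electron');
const { IPC_MESSAGES } = require('./constants');

contextBridge.exposeInMainWorld('renderer', {
  sendLoginMessage: () => ipcRenderer.send(IPC_MESSAGES.LOGIN),
  sendSignoutMessage: () => ipcRenderer.send(IPC_MESSAGES.LOGOUT),
  sendGetProfileMessage: () => ipcRenderer.send(IPC_MESSAGES.GET_PROFILE),
  // let the renderer subscribe to data pushed back from the main process
  handleProfileData: (callback) =>
    ipcRenderer.on(IPC_MESSAGES.SET_PROFILE, (event, data) => callback(data)),
});
```

With wiring like this, a button click in the renderer would call something like `window.renderer.sendLoginMessage()` instead of touching `ipcRenderer` directly.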
+7. After that, create *CustomProtocolListener.js* class and add the following code there:
-In the code snippet above, we initialize an Electron main window object and create some event handlers for interactions with the Electron window. We also import configuration parameters, instantiate *authProvider* class for handling sign-in, sign-out and token acquisition, and call the Microsoft Graph API.
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/CustomProtocolListener.js":::
-4. In the same folder (*App*), create another file named *renderer.js* and add the following code:
+The *CustomProtocolListener* class can be instantiated to register and unregister a custom string protocol on which MSAL Node can listen for authorization code responses.
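A hedged sketch of such a listener follows, based on the `protocol.registerFileProtocol` call in the removed *AuthProvider.js* later in this diff; the class shape and method names are assumptions, not the sample's exact code.

```JavaScript
// Sketch of a custom-protocol listener: register a scheme such as msal<your-client-id>,
// resolve the authorization code from the redirect, then clean up.
const path = require('path');
const { protocol } = require('electron');

class CustomProtocolListener {
  constructor(hostName) {
    this.hostName = hostName; // e.g. "msalfa29b4c9-7675-4b61-8a0a-bf7b2b4fda91"
  }

  // Resolves with the authorization code carried on the redirect URL
  start() {
    return new Promise((resolve, reject) => {
      protocol.registerFileProtocol(this.hostName, (req, callback) => {
        const parsedUrl = new URL(req.url);
        const authCode = parsedUrl.searchParams.get('code');
        authCode ? resolve(authCode) : reject(new Error('No authorization code in response'));
        callback(path.normalize(`${__dirname}/${parsedUrl.pathname}`)); // satisfy the file protocol handler
      });
    });
  }

  close() {
    protocol.unregisterProtocol(this.hostName);
  }
}

module.exports = CustomProtocolListener;
```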
- ```JavaScript
- const { ipcRenderer } = require('electron');
- const { IPC_MESSAGES } = require('./constants');
-
- // UI event handlers
- document.querySelector('#signIn').addEventListener('click', () => {
- ipcRenderer.send(IPC_MESSAGES.LOGIN);
- });
-
- document.querySelector('#signOut').addEventListener('click', () => {
- ipcRenderer.send(IPC_MESSAGES.LOGOUT);
- });
-
- document.querySelector('#seeProfile').addEventListener('click', () => {
- ipcRenderer.send(IPC_MESSAGES.GET_PROFILE);
- });
-
- document.querySelector('#readMail').addEventListener('click', () => {
- ipcRenderer.send(IPC_MESSAGES.GET_MAIL);
- });
-
- // Main process message subscribers
- ipcRenderer.on(IPC_MESSAGES.SHOW_WELCOME_MESSAGE, (event, account) => {
- showWelcomeMessage(account);
- });
-
- ipcRenderer.on(IPC_MESSAGES.SET_PROFILE, (event, graphResponse) => {
- updateUI(graphResponse, `${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_ME_ENDPOINT}`);
- });
-
- ipcRenderer.on(IPC_MESSAGES.SET_MAIL, (event, graphResponse) => {
- updateUI(graphResponse, `${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_MAIL_ENDPOINT}`);
- });
-
- // DOM elements to work with
- const welcomeDiv = document.getElementById("WelcomeMessage");
- const signInButton = document.getElementById("signIn");
- const signOutButton = document.getElementById("signOut");
- const cardDiv = document.getElementById("cardDiv");
- const profileDiv = document.getElementById("profileDiv");
- const tabList = document.getElementById("list-tab");
- const tabContent = document.getElementById("nav-tabContent");
-
- function showWelcomeMessage(account) {
- cardDiv.style.display = "initial";
- welcomeDiv.innerHTML = `Welcome ${account.name}`;
- signInButton.hidden = true;
- signOutButton.hidden = false;
- }
-
- function clearTabs() {
- tabList.innerHTML = "";
- tabContent.innerHTML = "";
- }
-
- function updateUI(data, endpoint) {
-
- console.log(`Graph API responded at: ${new Date().toString()}`);
-
- if (endpoint === `${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_ME_ENDPOINT}`) {
- setProfile(data);
- } else if (endpoint === `${process.env.GRAPH_ENDPOINT_HOST}${process.env.GRAPH_MAIL_ENDPOINT}`) {
- setMail(data);
- }
- }
-
- function setProfile(data) {
- profileDiv.innerHTML = ''
-
- const title = document.createElement('p');
- const email = document.createElement('p');
- const phone = document.createElement('p');
- const address = document.createElement('p');
-
- title.innerHTML = "<strong>Title: </strong>" + data.jobTitle;
- email.innerHTML = "<strong>Mail: </strong>" + data.mail;
- phone.innerHTML = "<strong>Phone: </strong>" + data.businessPhones[0];
- address.innerHTML = "<strong>Location: </strong>" + data.officeLocation;
-
- profileDiv.appendChild(title);
- profileDiv.appendChild(email);
- profileDiv.appendChild(phone);
- profileDiv.appendChild(address);
- }
-
- function setMail(data) {
- const mailInfo = data;
- if (mailInfo.value.length < 1) {
- alert("Your mailbox is empty!")
- } else {
- clearTabs();
- mailInfo.value.slice(0, 10).forEach((d, i) => {
- createAndAppendListItem(d, i);
- createAndAppendContentItem(d, i);
- });
- }
- }
-
- function createAndAppendListItem(d, i) {
- const listItem = document.createElement("a");
- listItem.setAttribute("class", "list-group-item list-group-item-action")
- listItem.setAttribute("id", "list" + i + "list")
- listItem.setAttribute("data-toggle", "list")
- listItem.setAttribute("href", "#list" + i)
- listItem.setAttribute("role", "tab")
- listItem.setAttribute("aria-controls", i)
- listItem.innerHTML = d.subject;
- tabList.appendChild(listItem);
- }
-
- function createAndAppendContentItem(d, i) {
- const contentItem = document.createElement("div");
- contentItem.setAttribute("class", "tab-pane fade")
- contentItem.setAttribute("id", "list" + i)
- contentItem.setAttribute("role", "tabpanel")
- contentItem.setAttribute("aria-labelledby", "list" + i + "list")
-
- if (d.from) {
- contentItem.innerHTML = "<strong> from: " + d.from.emailAddress.address + "</strong><br><br>" + d.bodyPreview + "...";
- tabContent.appendChild(contentItem);
- }
- }
- ```
+8. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
-5. Finally, create a file named *constants.js* that will store the strings constants for describing the application **events**:
-
- ```JavaScript
- const IPC_MESSAGES = {
- SHOW_WELCOME_MESSAGE: 'SHOW_WELCOME_MESSAGE',
- LOGIN: 'LOGIN',
- LOGOUT: 'LOGOUT',
- GET_PROFILE: 'GET_PROFILE',
- SET_PROFILE: 'SET_PROFILE',
- GET_MAIL: 'GET_MAIL',
- SET_MAIL: 'SET_MAIL'
- }
-
- module.exports = {
- IPC_MESSAGES: IPC_MESSAGES,
- }
- ```
+ :::code language="js" source="~/ms-identity-JavaScript-nodejs-desktop/App/constants.js":::
You now have a simple GUI and interactions for your Electron app. After completing the rest of the tutorial, the file and folder structure of your project should look similar to the following: ``` ElectronDesktopApp/ ├── App
-│   ├── authProvider.js
+│   ├── AuthProvider.js
│   ├── constants.js
+│   ├── CustomProtocolListener.js
│   ├── fetch.js
-│   ├── main.js
-│   ├── renderer.js
│   ├── index.html
+│   ├── main.js
+│   ├── preload.js
+│   ├── renderer.js
+│   ├── UIManager.js
+│   ├── authConfig.js
├── package.json
-└── .env
``` ## Add authentication logic to your app
-In *App* folder, create a file named *AuthProvider.js*. This will contain an authentication provider class that will handle login, logout, token acquisition, account selection and related authentication tasks using MSAL Node. Add the following code there:
-
-```JavaScript
-const { PublicClientApplication, LogLevel, CryptoProvider } = require('@azure/msal-node');
-const { protocol } = require('electron');
-const path = require('path');
-const url = require('url');
-
-/**
- * To demonstrate best security practices, this Electron sample application makes use of
- * a custom file protocol instead of a regular web (https://) redirect URI in order to
- * handle the redirection step of the authorization flow, as suggested in the OAuth2.0 specification for Native Apps.
- */
-const CUSTOM_FILE_PROTOCOL_NAME = process.env.REDIRECT_URI.split(':')[0]; // e.g. msal://redirect
-
-/**
- * Configuration object to be passed to MSAL instance on creation.
- * For a full list of MSAL Node configuration parameters, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/configuration.md
- */
-const MSAL_CONFIG = {
- auth: {
- clientId: process.env.CLIENT_ID,
- authority: `${process.env.AAD_ENDPOINT_HOST}${process.env.TENANT_ID}`,
- redirectUri: process.env.REDIRECT_URI,
- },
- system: {
- loggerOptions: {
- loggerCallback(loglevel, message, containsPii) {
-     console.log(message);
- },
-         piiLoggingEnabled: false,
- logLevel: LogLevel.Verbose,
- }
- }
-};
-
-class AuthProvider {
-
- clientApplication;
- cryptoProvider;
- authCodeUrlParams;
- authCodeRequest;
- pkceCodes;
- account;
-
- constructor() {
- /**
- * Initialize a public client application. For more information, visit:
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-public-client-application.md
- */
- this.clientApplication = new PublicClientApplication(MSAL_CONFIG);
- this.account = null;
-
- // Initialize CryptoProvider instance
- this.cryptoProvider = new CryptoProvider();
-
- this.setRequestObjects();
- }
-
- /**
- * Initialize request objects used by this AuthModule.
- */
- setRequestObjects() {
- const requestScopes = ['openid', 'profile', 'User.Read'];
- const redirectUri = process.env.REDIRECT_URI;
-
- this.authCodeUrlParams = {
- scopes: requestScopes,
- redirectUri: redirectUri
- };
-
- this.authCodeRequest = {
- scopes: requestScopes,
- redirectUri: redirectUri,
- code: null
- }
-
- this.pkceCodes = {
- challengeMethod: "S256", // Use SHA256 Algorithm
- verifier: "", // Generate a code verifier for the Auth Code Request first
- challenge: "" // Generate a code challenge from the previously generated code verifier
- };
- }
-
- async login(authWindow) {
- const authResult = await this.getTokenInteractive(authWindow, this.authCodeUrlParams);
- return this.handleResponse(authResult);
- }
-
- async logout() {
- if (this.account) {
- await this.clientApplication.getTokenCache().removeAccount(this.account);
- this.account = null;
- }
- }
-
- async getToken(authWindow, tokenRequest) {
- let authResponse;
-
- authResponse = await this.getTokenInteractive(authWindow, tokenRequest);
-
- return authResponse.accessToken || null;
- }
-
- // This method contains an implementation of access token acquisition in authorization code flow
- async getTokenInteractive(authWindow, tokenRequest) {
-
- /**
- * Proof Key for Code Exchange (PKCE) Setup
- *
- * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod parameters
- * in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
- * second leg (acquireTokenByCode() API).
- *
- * MSAL Node provides PKCE Generation tools through the CryptoProvider class, which exposes
- * the generatePkceCodes() asynchronous API. As illustrated in the example below, the verifier
- * and challenge values should be generated previous to the authorization flow initiation.
- *
- * For details on PKCE code generation logic, consult the
- * PKCE specification https://tools.ietf.org/html/rfc7636#section-4
- */
-
- const {verifier, challenge} = await this.cryptoProvider.generatePkceCodes();
-
- this.pkceCodes.verifier = verifier;
- this.pkceCodes.challenge = challenge;
-
- const authCodeUrlParams = {
- ...this.authCodeUrlParams,
- scopes: tokenRequest.scopes,
- codeChallenge: this.pkceCodes.challenge, // PKCE Code Challenge
- codeChallengeMethod: this.pkceCodes.challengeMethod // PKCE Code Challenge Method
- };
-
- const authCodeUrl = await this.clientApplication.getAuthCodeUrl(authCodeUrlParams);
-
- protocol.registerFileProtocol(CUSTOM_FILE_PROTOCOL_NAME, (req, callback) => {
- const requestUrl = url.parse(req.url, true);
- callback(path.normalize(`${__dirname}/${requestUrl.path}`));
- });
-
- const authCode = await this.listenForAuthCode(authCodeUrl, authWindow);
-
- const authResponse = await this.clientApplication.acquireTokenByCode({
- ...this.authCodeRequest,
- scopes: tokenRequest.scopes,
- code: authCode,
- codeVerifier: this.pkceCodes.verifier // PKCE Code Verifier
- });
-
- return authResponse;
- }
-
- // Listen for authorization code response from Azure AD
- async listenForAuthCode(navigateUrl, authWindow) {
-
- authWindow.loadURL(navigateUrl);
-
- return new Promise((resolve, reject) => {
- authWindow.webContents.on('will-redirect', (event, responseUrl) => {
- try {
- const parsedUrl = new URL(responseUrl);
- const authCode = parsedUrl.searchParams.get('code');
- resolve(authCode);
- } catch (err) {
- reject(err);
- }
- });
- });
- }
-
- /**
- * Handles the response from a popup or redirect. If response is null, will check if we have any accounts and attempt to sign in.
- * @param response
- */
- async handleResponse(response) {
- if (response !== null) {
- this.account = response.account;
- } else {
- this.account = await this.getAccount();
- }
-
- return this.account;
- }
-
- /**
- * Calls getAllAccounts and determines the correct account to sign into, currently defaults to first account found in cache.
- * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
- */
- async getAccount() {
- const cache = this.clientApplication.getTokenCache();
- const currentAccounts = await cache.getAllAccounts();
-
- if (currentAccounts === null) {
- console.log('No accounts detected');
- return null;
- }
-
- if (currentAccounts.length > 1) {
- // Add choose account code here
- console.log('Multiple accounts detected, need to add choose account code.');
- return currentAccounts[0];
- } else if (currentAccounts.length === 1) {
- return currentAccounts[0];
- } else {
- return null;
- }
- }
-}
-
-module.exports = AuthProvider;
-```
+In the *App* folder, create a file named *AuthProvider.js*. The *AuthProvider.js* file will contain an authentication provider class that will handle login, logout, token acquisition, account selection, and related authentication tasks using MSAL Node. Add the following code there:
+ In the code snippet above, we first initialized MSAL Node `PublicClientApplication` by passing a configuration object (`msalConfig`). We then exposed `login`, `logout` and `getToken` methods to be called by the main module (*main.js*). In `login` and `getToken`, we acquire ID and access tokens, respectively, by first requesting an authorization code and then exchanging it for a token using the MSAL Node `acquireTokenByCode` public API.
In the code snippet above, we first initialized MSAL Node `PublicClientApplicati
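For readers who can't open the code include, the following is a condensed, hedged sketch of those steps, based on the removed *AuthProvider.js* shown earlier in this diff; caching, error handling, and account selection are omitted, and the shape of `msalConfig` (in particular `auth.redirectUri`) is an assumption.

```JavaScript
// Condensed sketch of interactive token acquisition with PKCE, based on the removed
// AuthProvider snippet above.
const { PublicClientApplication, CryptoProvider } = require('@azure/msal-node');
const { msalConfig } = require('./authConfig');

const pca = new PublicClientApplication(msalConfig);
const cryptoProvider = new CryptoProvider();

async function getTokenInteractive(authWindow, tokenRequest) {
  // 1. Generate a PKCE verifier/challenge pair
  const { verifier, challenge } = await cryptoProvider.generatePkceCodes();

  // 2. Build the authorization URL and navigate the Electron window to it
  const authCodeUrl = await pca.getAuthCodeUrl({
    scopes: tokenRequest.scopes,
    redirectUri: msalConfig.auth.redirectUri, // assumes the redirect URI lives in authConfig.js
    codeChallenge: challenge,
    codeChallengeMethod: 'S256',
  });
  authWindow.loadURL(authCodeUrl);

  // 3. Wait for the redirect that carries the authorization code, then redeem it
  //    together with the PKCE verifier
  const authCode = await new Promise((resolve, reject) => {
    authWindow.webContents.on('will-redirect', (event, responseUrl) => {
      try {
        resolve(new URL(responseUrl).searchParams.get('code'));
      } catch (err) {
        reject(err);
      }
    });
  });

  return pca.acquireTokenByCode({
    scopes: tokenRequest.scopes,
    redirectUri: msalConfig.auth.redirectUri,
    code: authCode,
    codeVerifier: verifier,
  });
}
```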
Create another file named *fetch.js*. This file will contain an Axios HTTP client for making REST calls to the Microsoft Graph API.
-```JavaScript
-const axios = require('axios');
-
-/**
- * Makes an Authorization 'Bearer' request with the given accessToken to the given endpoint.
- * @param endpoint
- * @param accessToken
- */
-async function callEndpointWithToken(endpoint, accessToken) {
- const options = {
- headers: {
- Authorization: `Bearer ${accessToken}`
- }
- };
-
- console.log('Request made at: ' + new Date().toString());
-
- const response = await axios.default.get(endpoint, options);
-
- return response.data;
-}
-
-module.exports = {
- callEndpointWithToken: callEndpointWithToken,
-};
-```
## Add app registration details
-Finally, create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *.env* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
-
-```
-# Credentials
-CLIENT_ID=Enter_the_Application_Id_Here
-TENANT_ID=Enter_the_Tenant_Id_Here
+Finally, create an environment file to store the app registration details that will be used when acquiring tokens. To do so, create a file named *authConfig.js* inside the root folder of the sample (*ElectronDesktopApp*), and add the following code:
-# Configuration
-REDIRECT_URI=msal://redirect
-
-# Endpoints
-AAD_ENDPOINT_HOST=Enter_the_Cloud_Instance_Id_Here
-GRAPH_ENDPOINT_HOST=Enter_the_Graph_Endpoint_Here
-
-# RESOURCES
-GRAPH_ME_ENDPOINT=v1.0/me
-GRAPH_MAIL_ENDPOINT=v1.0/me/messages
-
-# SCOPES
-GRAPH_SCOPES=User.Read Mail.Read
-```
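Because the new *authConfig.js* contents are pulled in via a code include that isn't visible in this diff, the following is a hedged sketch of what such a file might export, assembled from the placeholders the removed *.env* file used; the exact shape of the sample's file (for example, the `protectedResources` object) is an assumption.

```JavaScript
// Hedged sketch of authConfig.js, built from the removed .env placeholders above.
const AAD_ENDPOINT_HOST = 'Enter_the_Cloud_Instance_Id_Here'; // e.g. https://login.microsoftonline.com/
const GRAPH_ENDPOINT_HOST = 'Enter_the_Graph_Endpoint_Here';  // e.g. https://graph.microsoft.com/

const msalConfig = {
  auth: {
    clientId: 'Enter_the_Application_Id_Here',
    authority: `${AAD_ENDPOINT_HOST}Enter_the_Tenant_Id_Here`,
    redirectUri: 'Enter_the_Redirect_Uri_Here', // msal{Your_Application/Client_Id}://auth
  },
};

const protectedResources = {
  graphMe: { endpoint: `${GRAPH_ENDPOINT_HOST}v1.0/me`, scopes: ['User.Read'] },
  graphMessages: { endpoint: `${GRAPH_ENDPOINT_HOST}v1.0/me/messages`, scopes: ['Mail.Read'] },
};

module.exports = { msalConfig, protectedResources };
```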
Fill in these details with the values you obtain from Azure app registration portal:
Fill in these details with the values you obtain from Azure app registration por
- `Enter_the_Cloud_Instance_Id_Here`: The Azure cloud instance in which your application is registered. - For the main (or *global*) Azure cloud, enter `https://login.microsoftonline.com/`. - For **national** clouds (for example, China), you can find appropriate values in [National clouds](authentication-national-cloud.md).
+- `Enter_the_Redirect_Uri_Here`: The redirect URI of the application you registered, for example `msal{Your_Application/Client_Id}://auth`.
- `Enter_the_Graph_Endpoint_Here` is the instance of the Microsoft Graph API the application should communicate with. - For the **global** Microsoft Graph API endpoint, replace both instances of this string with `https://graph.microsoft.com/`. - For endpoints in **national** cloud deployments, see [National cloud deployments](/graph/deployments) in the Microsoft Graph documentation.
Select **Read Mails** to view the messages in user's account. You'll be presente
:::image type="content" source="media/tutorial-v2-nodejs-desktop/desktop-05-consent-mail.png" alt-text="consent screen for read.mail permission":::
-After consent, you will view the messages returned in the response from the call to the Microsoft Graph API:
+After consent, you'll view the messages returned in the response from the call to the Microsoft Graph API:
:::image type="content" source="media/tutorial-v2-nodejs-desktop/desktop-06-mails.png" alt-text="mail information from Microsoft Graph"::: ## How the application works
-When a user selects the **Sign In** button for the first time, get `getTokenInteractive` method of *AuthProvider.js* is called. This method redirects the user to sign-in with the *Microsoft identity platform endpoint* and validate the user's credentials, and then obtains an **authorization code**. This code is then exchanged for an access token using `acquireTokenByCode` public API of MSAL Node.
+When a user selects the **Sign In** button for the first time, the `getTokenInteractive` method of *AuthProvider.js* is called. This method redirects the user to sign in with the Microsoft identity platform endpoint, validates the user's credentials, and then obtains an **authorization code**. This code is then exchanged for an access token using the `acquireTokenByCode` public API of MSAL Node.
At this point, a PKCE-protected authorization code is sent to the CORS-protected token endpoint and is exchanged for tokens. An ID token, access token, and refresh token are received by your application and processed by MSAL Node, and the information contained in the tokens is cached.
The ID token contains basic information about the user, like their display name.
The desktop app you've created in this tutorial makes a REST call to the Microsoft Graph API using an access token as bearer token in request header ([RFC 6750](https://tools.ietf.org/html/rfc6750)).
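Condensed from the *fetch.js* helper shown earlier in this diff, that call looks roughly like this:

```JavaScript
// Bearer-token request to Microsoft Graph (RFC 6750), condensed from fetch.js above.
const axios = require('axios');

async function callEndpointWithToken(endpoint, accessToken) {
  const response = await axios.get(endpoint, {
    headers: { Authorization: `Bearer ${accessToken}` }, // access token sent as a bearer token
  });
  return response.data;
}
```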
-The Microsoft Graph API requires the *user.read* scope to read a user's profile. By default, this scope is automatically added in every application that's registered in the Azure portal. Other APIs for Microsoft Graph, as well as custom APIs for your back-end server, might require additional scopes. For example, the Microsoft Graph API requires the *Mail.Read* scope in order to list the user's email.
+The Microsoft Graph API requires the *user.read* scope to read a user's profile. By default, this scope is automatically added in every application that's registered in the Azure portal. Other APIs for Microsoft Graph, and custom APIs for your back-end server, might require extra scopes. For example, the Microsoft Graph API requires the *Mail.Read* scope in order to list the user's email.
-As you add scopes, your users might be prompted to provide additional consent for the added scopes.
+As you add scopes, your users might be prompted to provide consent again for the added scopes.
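As an illustration taken from the removed *main.js* earlier in this diff, requesting the additional scope at token time looks like this (the snippet runs inside *main.js*, where `ipcMain`, `IPC_MESSAGES`, `authProvider`, and `mainWindow` are already defined):

```JavaScript
// Request the Mail.Read scope when acquiring the token; the user is prompted for
// incremental consent the first time the added scope is requested.
ipcMain.on(IPC_MESSAGES.GET_MAIL, async () => {
  const tokenRequest = { scopes: ['Mail.Read'] };
  const token = await authProvider.getToken(mainWindow, tokenRequest);
  // ...call the Graph /me/messages endpoint with the bearer token
});
```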
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
active-directory V2 Supported Account Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-supported-account-types.md
Title: Supported account types- description: Conceptual documentation about audiences and supported account types in applications
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 04/04/2022 Last updated : 06/02/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## May 2022
+
+### Updated articles
+
+- [Developer guide to Conditional Access authentication context](developer-guide-conditional-access-authentication-context.md)
+- [Migrate confidential client applications from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md)
+- [Protected web API: App registration](scenario-protected-web-api-app-registration.md)
+- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](mobile-app-quickstart-portal-android.md)
+- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](mobile-app-quickstart-portal-ios.md)
+- [Set up your application's Azure AD test environment](test-setup-environment.md)
+- [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md)
+- [Single sign-on with MSAL.js](msal-js-sso.md)
+- [Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app](tutorial-v2-nodejs-webapp-msal.md)
+- [What's new for authentication?](reference-breaking-changes.md)
+ ## March 2022 ### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
### Updated articles - [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)-
-## January 2022
-
-### New articles
--- [Access Azure AD protected resources from an app in Google Cloud (preview)](workload-identity-federation-create-trust-gcp.md)-
-### Updated articles
--- [Confidential client assertions](msal-net-client-assertions.md)-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)-- [Configure an app to trust a GitHub repo (preview)](workload-identity-federation-create-trust-github.md)-- [Configure an app to trust an external identity provider (preview)](workload-identity-federation-create-trust.md)-- [Exchange a SAML token issued by AD FS for a Microsoft Graph access token](v2-saml-bearer-assertion.md)-- [Logging in MSAL.js](msal-logging-js.md)-- [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md)-- [Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity](console-app-quickstart.md)-- [Quickstart: Acquire a token and call Microsoft Graph API from a desktop application](desktop-app-quickstart.md)-- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)-- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)-- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Previously updated : 09/24/2021 Last updated : 06/02/2022
# Dynamic membership rules for groups in Azure Active Directory
-In Azure Active Directory (Azure AD), you can create complex attribute-based rules to enable dynamic memberships for groups. Dynamic group membership reduces the administrative overhead of adding and removing users. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
+In Azure Active Directory (Azure AD), you can create attribute-based rules to enable dynamic membership for a group. Dynamic group membership adds and removes group members automatically using membership rules based on member attributes. This article details the properties and syntax to create dynamic membership rules for users or devices. You can set up a rule for dynamic membership on security groups or Microsoft 365 groups.
When any attributes of a user or device change, the system evaluates all dynamic group rules in a directory to see if the change would trigger any group adds or removes. If a user or device satisfies a rule on a group, they are added as a member of that group. If they no longer satisfy the rule, they are removed. You can't manually add or remove a member of a dynamic group. - You can create a dynamic group for devices or for users, but you can't create a rule that contains both users and devices.-- You can't create a device group based on the device owners' attributes. Device membership rules can only reference device attributes.
+- You can't create a device group based on the user attributes of the device owner. Device membership rules can reference only device attributes.
> [!NOTE] > This feature requires an Azure AD Premium P1 license or Intune for Education for each unique user that is a member of one or more dynamic groups. You don't have to assign licenses to users for them to be members of dynamic groups, but you must have the minimum number of licenses in the Azure AD organization to cover all such users. For example, if you had a total of 1,000 unique users in all dynamic groups in your organization, you would need at least 1,000 licenses for Azure AD Premium P1 to meet the license requirement.
assignedPlans is a multi-value property that lists all service plans assigned to
user.assignedPlans -any (assignedPlan.servicePlanId -eq "efb87545-963c-4e0d-99df-69c6916d9eb0" -and assignedPlan.capabilityStatus -eq "Enabled") ```
-A rule such as this one can be used to group all users for whom a Microsoft 365 (or other Microsoft Online Service) capability is enabled. You could then apply with a set of policies to the group.
+A rule such as this one can be used to group all users for whom a Microsoft 365 or other Microsoft Online Service capability is enabled. You could then apply a set of policies to the group.
#### Example 2
device.objectId -ne null
## Extension properties and custom extension properties
-Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) are synced from on-premises Window Server AD and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
+Extension attributes and custom extension properties are supported as string properties in dynamic membership rules. [Extension attributes](/graph/api/resources/onpremisesextensionattributes) are synced from on-premises Windows Server Active Directory and take the format of "ExtensionAttributeX", where X equals 1 - 15. Here's an example of a rule that uses an extension attribute as a property:
``` (user.extensionAttribute15 -eq "Marketing") ```
-[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) are synced from on-premises Windows Server AD or from a connected SaaS application and are of the format of `user.extension_[GUID]_[Attribute]`, where:
+[Custom extension properties](../hybrid/how-to-connect-sync-feature-directory-extensions.md) are synced from on-premises Windows Server Active Directory or from a connected SaaS application and are of the format of `user.extension_[GUID]_[Attribute]`, where:
- [GUID] is the unique identifier in Azure AD for the application that created the property in Azure AD - [Attribute] is the name of the property as it was created
active-directory Groups Dynamic Rule Member Of https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-member-of.md
+
+ Title: Group membership for Azure AD dynamic groups with memberOf - Azure AD | Microsoft Docs
+description: How to create a dynamic membership group that can contain members of other groups in Azure Active Directory.
+
+documentationcenter: ''
+Last updated : 06/02/2022
+# Group membership in a dynamic group (preview) in Azure Active Directory
+
+This feature preview enables admins to create dynamic groups in Azure Active Directory (Azure AD) that populate by adding members of other groups using the memberOf attribute. Apps that couldn't read group-based membership previously in Azure AD can now read the entire membership of these new memberOf groups. Not only can these groups be used for apps, they can also be used for licensing assignment and role-based access control. The following diagram illustrates how you could create Dynamic-Group-A with members of Security-Group-X and Security-Group-Y. Members of the groups inside of Security-Group-X and Security-Group-Y don't become members of Dynamic-Group-A.
+
+
+With this preview, admins can configure dynamic groups with the memberOf attribute in the Azure portal, Microsoft Graph, and PowerShell. Security groups, Microsoft 365 groups, and groups that are synced from on-premises Active Directory can all be added as members of these dynamic groups, and can all be added to a single group. For example, the dynamic group could be a security group, but you can use Microsoft 365 groups, security groups, and groups that are synced from on-premises to define its membership.
+
+## Prerequisites
+
+Only administrators in the Global Administrator, Intune Administrator, or User Administrator role can use the memberOf attribute to create an Azure AD dynamic group. You must have an Azure AD Premium license for the Azure AD tenant.
+
+## Preview limitations
+
+- Each Azure AD tenant is limited to 500 dynamic groups using the memberOf attribute. memberOf groups do count towards the total dynamic group member quota of 5,000.
+- Each dynamic group can have up to 50 member groups.
+- When adding members of security groups to memberOf dynamic groups, only direct members of the security group become members of the dynamic group.
+- You can't use one memberOf dynamic group to define the membership of another memberOf dynamic group. For example, Dynamic Group A, with members of group B and C in it, can't be a member of Dynamic Group D.
+- MemberOf can't be used with other rules. For example, a rule that states dynamic group A should contain members of group B and also should contain only users located in Redmond will fail.
+- The dynamic group rule builder and validate feature can't be used for memberOf at this time.
+- MemberOf can't be used with other operators. For example, you can't create a rule that states "Members of group A can't be in Dynamic group B."
+
+## Getting started
+
+This feature can be used in the Azure AD portal, Microsoft Graph, and in PowerShell. Because memberOf isn't yet supported in the rule builder, you must enter your rule in the rule editor.
+
+### Steps to create a memberOf dynamic group
+
+1. Sign in to the Azure portal with an account that has Global Administrator, Intune Administrator, or User Administrator role permissions.
+1. Select **Azure Active Directory** > **Groups**, and then select **New group**.
+1. Fill in group details. The group type can be Security or Microsoft 365, and the membership type can be set to **Dynamic User** or **Dynamic Device**.
+1. Select **Add dynamic query**.
+1. MemberOf isn't yet supported in the rule builder. Select **Edit** to write the rule in the **Rule syntax** box.
+ 1. Example user rule: `user.memberof -any (group.objectId -in ['groupId', 'groupId'])`
+ 1. Example device rule: `device.memberof -any (group.objectId -in ['groupId', 'groupId'])`
+1. Select **OK**.
+1. Select **Create group**.
+
+## Next steps
+
+To report an issue, contact us in the [Teams channel](https://teams.microsoft.com/l/channel/19%3a39Q7HFuexXXE3Vh90woJRNQQBbZl1YyesJHIEquuQCw1%40thread.tacv2/General?groupId=bfd3bfb8-e0db-4e9e-9008-5d7ba8c996b0&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47).
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Here are the settings defined in the Group.Unified SettingsTemplate. Unless othe
| <ul><li>GuestUsageGuidelinesUrl<li>Type: String<li>Default: "" | The URL of a link to the guest usage guidelines. | | <ul><li>AllowToAddGuests<li>Type: Boolean<li>Default: True | A boolean indicating whether or not is allowed to add guests to this directory. <br>This setting may be overridden and become read-only if *EnableMIPLabels* is set to *True* and a guest policy is associated with the sensitivity label assigned to the group.<br>If the AllowToAddGuests setting is set to False at the organization level, any AllowToAddGuests setting at the group level is ignored. If you want to enable guest access for only a few groups, you must set AllowToAddGuests to be true at the organization level, and then selectively disable it for specific groups. | | <ul><li>ClassificationList<li>Type: String<li>Default: "" | A comma-delimited list of valid classification values that can be applied to Microsoft 365 groups. <br>This setting does not apply when EnableMIPLabels == True.|
-| <ul><li>EnableMIPLabels<li>Type: Boolean<li>Default: "False" |The flag indicating whether sensitivity labels published in Microsoft 365 Compliance Center can be applied to Microsoft 365 groups. For more information, see [Assign Sensitivity Labels for Microsoft 365 groups](groups-assign-sensitivity-labels.md). |
+| <ul><li>EnableMIPLabels<li>Type: Boolean<li>Default: "False" |The flag indicating whether sensitivity labels published in Microsoft Purview compliance portal can be applied to Microsoft 365 groups. For more information, see [Assign Sensitivity Labels for Microsoft 365 groups](groups-assign-sensitivity-labels.md). |
## Example: Configure Guest policy for groups at the directory level 1. Get all the setting templates:
active-directory 8 Secure Access Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/8-secure-access-sensitivity-labels.md
Sensitivity labels on email and other content travel with the content. Sensitivi
## Permissions necessary to create and manage sensitivity levels
-Members of your compliance team who will create sensitivity labels need permissions to the Microsoft 365 Defender portal, Microsoft 365 Compliance Center, or Office 365 Security & Compliance Center.
+Members of your compliance team who will create sensitivity labels need permissions to the Microsoft 365 Defender portal, Microsoft Purview compliance portal, or Office 365 Security & Compliance Center.
By default, global administrators for your tenant have access to these admin centers and can give compliance officers and other people access, without giving them all the permissions of a tenant admin. For this delegated limited admin access, add users to the Compliance Data Administrator, Compliance Administrator, or Security Administrator role group.
active-directory Resilience App Development Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-app-development-overview.md
Title: Increase resilience of authentication and authorization applications you develop- description: Overview of our resilience guidance for application development using Azure Active Directory and the Microsoft identity platform
active-directory Resilience Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-client-app.md
Title: Increase the resilience of authentication and authorization in client applications you develop- description: Guidance for increasing resiliency of authentication and authorization in client application using the Microsoft identity platform
active-directory Resilience Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/resilience-daemon-app.md
Title: Increase the resilience of authentication and authorization in daemon applications you develop- description: Guidance for increasing resiliency of authentication and authorization in daemon application using the Microsoft identity platform
active-directory Choose Ad Authn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/choose-ad-authn.md
Title: Authentication for Azure AD hybrid identity solutions- description: This guide helps CEOs, CIOs, CISOs, Chief Identity Architects, Enterprise Architects, and Security and IT decision makers responsible for choosing an authentication method for their Azure AD hybrid identity solution in medium to large organizations. keywords:
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/access-panel-collections.md
Title: Create collections for My Apps portals- description: Use My Apps collections to Customize My Apps pages for a simpler My Apps experience for your users. Organize applications into groups with separate tabs.
active-directory Add Application Portal Assign Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-assign-users.md
Title: 'Quickstart: Create and assign a user account'- description: Create a user account in your Azure Active Directory tenant and assign it to an application.
active-directory Add Application Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-configure.md
Title: 'Configure enterprise application properties'- description: Configure the properties of an enterprise application in Azure Active Directory.
active-directory Add Application Portal Setup Oidc Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-oidc-sso.md
Title: 'Add an OpenID Connect-based single sign-on application' description: Learn how to add OpenID Connect-based single sign-on application in Azure Active Directory.-
active-directory Add Application Portal Setup Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal-setup-sso.md
Title: 'Quickstart: Enable single sign-on for an enterprise application'- description: Enable single sign-on for an enterprise application in Azure Active Directory.
active-directory Add Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/add-application-portal.md
Title: 'Quickstart: Add an enterprise application' description: Add an enterprise application in Azure Active Directory.-
active-directory Admin Consent Workflow Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-faq.md
Title: Frequently asked questions about the admin consent workflow- description: Find answers to frequently asked questions (FAQs) about the admin consent workflow.
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
Title: Overview of admin consent workflow- description: Learn about the admin consent workflow in Azure Active Directory
active-directory App Management Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md
Title: PowerShell samples in Application Management- description: These PowerShell samples are used for apps you manage in your Azure Active Directory tenant. You can use these sample scripts to find expiration information about secrets and certificates.
active-directory Application List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-list.md
Title: Viewing apps using your tenant for identity management- description: Understand how to view all applications using your Azure Active Directory tenant for identity management.
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-management-certs-faq.md
Title: Application Management certificates frequently asked questions- description: Learn answers to frequently asked questions (FAQ) about managing certificates for apps using Azure Active Directory as an Identity Provider (IdP).
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
Title: 'Properties of an enterprise application'- description: Learn about the properties of an enterprise application in Azure Active Directory.
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
Title: Troubleshoot problems signing in to an application from My Apps portal- description: Troubleshoot problems signing in to an application from Azure AD My Apps
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Title: Error message appears on app page after you sign in- description: How to resolve issues with Azure AD sign in when the app returns an error message.
active-directory Application Sign In Problem First Party Microsoft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-first-party-microsoft.md
Title: Problems signing in to a Microsoft application- description: Troubleshoot common problems faced when signing in to first-party Microsoft Applications using Azure AD (like Microsoft 365).
active-directory Application Sign In Unexpected User Consent Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
Title: Unexpected error when performing consent to an application- description: Discusses errors that can occur during the process of consenting to an application and what you can do about them
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Title: Unexpected consent prompt when signing in to an application- description: How to troubleshoot when a user sees a consent prompt for an application you have integrated with Azure AD that you did not expect
active-directory Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-app-owners.md
Title: Assign enterprise application owners- description: Learn how to assign owners to applications in Azure Active Directory documentationcenter: ''
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
Title: Assign users and groups- description: Learn how to assign and unassign users, and groups, for an app using Azure Active Directory for identity management.
active-directory Certificate Signing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md
Title: Advanced certificate signing options in a SAML token- description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory
active-directory Cloud App Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloud-app-security.md
Title: App visibility and control with Microsoft Defender for Cloud Apps- description: Learn ways to identify app risk levels, stop breaches and leaks in real time, and use app connectors to take advantage of provider APIs for visibility and governance.
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Title: Configure the admin consent workflow- description: Learn how to configure a way for end users to request access to applications that require admin consent.
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
Title: Configure sign-in auto-acceleration using Home Realm Discovery- description: Learn how to force federated IdP acceleration for an application using Home Realm Discovery policy.
active-directory Configure Linked Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-linked-sign-on.md
Title: Add linked single sign-on to an application description: Add linked single sign-on to an application in Azure Active Directory.-
active-directory Configure Password Single Sign On Non Gallery Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-password-single-sign-on-non-gallery-applications.md
Title: Add password-based single sign-on to an application description: Add password-based single sign-on to an application in Azure Active Directory.-
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Title: Configure permission classifications- description: Learn how to manage delegated permission classifications.
active-directory Configure Risk Based Step Up Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md
Title: Configure risk-based step-up consent- description: Learn how to disable and enable risk-based step-up consent to reduce user exposure to malicious apps that make illicit consent requests.
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Title: Configure group owner consent to apps accessing group data- description: Learn how to manage whether group and team owners can consent to applications that will have access to the group or team's data.
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Title: Configure how users consent to applications- description: Learn how to manage how and when users can consent to applications that will have access to your organization's data.
active-directory Consent And Permissions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/consent-and-permissions-overview.md
Title: Overview of consent and permissions- description: Learn about the fundamental concepts of consents and permissions in Azure AD
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
Title: Secure hybrid access with Datawiza- description: Learn how to integrate Datawiza with Azure AD. See how to use Datawiza and Azure AD to authenticate users and give them access to on-premises and cloud apps.
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/debug-saml-sso-issues.md
Title: Debug SAML-based single sign-on- description: Debug SAML-based single sign-on to applications in Azure Active Directory.
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
Title: 'Quickstart: Delete an enterprise application' description: Delete an enterprise application in Azure Active Directory.-
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Title: Disable how a user signs in- description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
Title: End-user experiences for applications- description: Azure Active Directory (Azure AD) provides several customizable ways to deploy applications to end users in your organization.
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-integration.md
Title: Secure hybrid access with F5- description: F5 BIG-IP Access Policy Manager and Azure Active Directory integration for Secure Hybrid Access
active-directory F5 Aad Password Less Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-aad-password-less-vpn.md
Title: Configure F5 BIG-IP SSL-VPN solution in Azure AD- description: Tutorial to configure F5's BIG-IP based Secure Sockets Layer virtual private network (SSL-VPN) solution with Azure Active Directory (AD) for Secure Hybrid Access (SHA)
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
Title: Secure hybrid access with F5 deployment guide- description: Tutorial to deploy F5 BIG-IP Virtual Edition (VE) VM in Azure IaaS for Secure hybrid access
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
Title: Grant tenant-wide admin consent to an application - description: Learn how to grant tenant-wide consent to an application so that end-users are not prompted for consent when signing in to an application.
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
Title: Grant consent on behalf of a single user description: Learn how to grant consent on behalf of a single user when user consent is disabled or restricted.-
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Title: Hide an Enterprise application- description: How to hide an enterprise application from a user's experience in Azure Active Directory access portals or Microsoft 365 launchers.
active-directory Home Realm Discovery Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md
Title: Home Realm Discovery policy- description: Learn how to manage Home Realm Discovery policy for Azure Active Directory authentication for federated users, including auto-acceleration and domain hints.
active-directory Howto Saml Token Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md
Title: SAML token encryption description: Learn how to configure Azure Active Directory SAML token encryption.-
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Title: Manage app consent policies description: Learn how to manage built-in and custom app consent policies to control when consent can be granted.-
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Title: Review permissions granted to applications- description: Learn how to review and manage permissions for an application in Azure Active Directory.
active-directory Manage Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-consent-requests.md
Title: Manage consent to applications and evaluate consent requests description: Learn how to manage consent requests when user consent is disabled or restricted, and how to evaluate a request for tenant-wide admin consent to an application in Azure Active Directory.-
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
Title: How to enable self-service application assignment- description: Enable self-service application access to allow users to find their own applications from their My Apps portal
active-directory Migrate Adfs Application Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md
Title: Use the activity report to move AD FS apps to Azure Active Directory description: The Active Directory Federation Services (AD FS) application activity report lets you quickly migrate applications from AD FS to Azure Active Directory (Azure AD). This migration tool for AD FS identifies compatibility with Azure AD and gives migration guidance.-
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Title: Moving application authentication from AD FS to Azure Active Directory description: Learn how to use Azure Active Directory to replace Active Directory Federation Services (AD FS), giving users single sign-on to all their applications.-
active-directory Migrate Applications From Okta To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-applications-from-okta-to-azure-active-directory.md
Title: Tutorial to migrate your applications from Okta to Azure Active Directory- description: Learn how to migrate your applications from Okta to Azure Active Directory.
active-directory Migrate Okta Federation To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation-to-azure-active-directory.md
Title: Migrate Okta federation to Azure Active Directory- description: Learn how to migrate your Okta-federated applications to managed authentication under Azure AD. See how to migrate federation in a staged manner.
active-directory Migrate Okta Sign On Policies To Azure Active Directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sign-on-policies-to-azure-active-directory-conditional-access.md
Title: Tutorial to migrate Okta sign-on policies to Azure Active Directory Conditional Access- description: In this tutorial, you learn how to migrate Okta sign-on policies to Azure Active Directory Conditional Access.
active-directory Migrate Okta Sync Provisioning To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning-to-azure-active-directory.md
Title: Migrate Okta sync provisioning to Azure AD Connect- description: Learn how to migrate user provisioning from Okta to Azure Active Directory (Azure AD). See how to use Azure AD Connect server or Azure AD cloud provisioning.
active-directory Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migration-resources.md
Title: Resources for migrating apps to Azure Active Directory description: Resources to help you migrate application access and authentication to Azure Active Directory (Azure AD).-
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
Title: My Apps portal overview description: Learn about how to manage applications in the My Apps portal.-
active-directory One Click Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/one-click-sso-tutorial.md
Title: One-click single sign-on (SSO) configuration of your Azure Marketplace application description: Steps for one-click configuration of SSO for your application from the Azure Marketplace.-
active-directory Overview Application Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-application-gallery.md
Title: Overview of the Azure Active Directory application gallery description: An overview of using the Azure Active Directory application gallery.-
active-directory Overview Assign App Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/overview-assign-app-owners.md
Title: Overview of enterprise application ownership- description: Learn about enterprise application ownership in Azure Active Directory
active-directory Plan An Application Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-an-application-integration.md
Title: Get started integrating Azure Active Directory with apps description: This article is a getting started guide for integrating Azure Active Directory (AD) with on-premises applications, and cloud applications.-
active-directory Plan Sso Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/plan-sso-deployment.md
Title: Plan a single sign-on deployment description: Plan the deployment of single sign-on in Azure Active Directory.-
The following SSO protocols are available to use:
- **OpenID Connect and OAuth** - Choose OpenID Connect and OAuth 2.0 if the application you're connecting to supports it. For more information, see [OAuth 2.0 and OpenID Connect protocols on the Microsoft identity platform](../develop/active-directory-v2-protocols.md). For steps to implement OpenID Connect SSO, see [Set up OIDC-based single sign-on for an application in Azure Active Directory](add-application-portal-setup-oidc-sso.md). -- **SAML** - Choose SAML whenever possible for existing applications that do not use OpenID Connect or OAuth. For more information, see [Single Sign-On SAML protocol](../develop/single-sign-on-saml-protocol.md). For a quick introduction to implementing SAML SSO, see [Quickstart: Set up SAML-based single sign-on for an application in Azure Active Directory](add-application-portal-setup-sso.md).
+- **SAML** - Choose SAML whenever possible for existing applications that do not use OpenID Connect or OAuth. For more information, see [Single Sign-On SAML protocol](../develop/single-sign-on-saml-protocol.md).
- **Password-based** - Choose password-based when the application has an HTML sign-in page. Password-based SSO is also known as password vaulting. Password-based SSO enables you to manage user access and passwords to web applications that don't support identity federation. It's also useful where several users need to share a single account, such as to your organization's social media app accounts.
The following SSO protocols are available to use:
- **Header-based** - Choose header-based single sign-on when the application uses headers for authentication. For more information, see [Header-based SSO](../app-proxy/application-proxy-configure-single-sign-on-with-headers.md). ## Next steps-- [Manage access to apps](what-is-access-management.md)+
+- Consider completing the single sign-on training in [Enable single sign-on for applications by using Azure Active Directory](/learn/modules/enable-single-sign-on).
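To make the OpenID Connect option above more concrete, the following sketch builds the kind of sign-in (authorization) request an OIDC-based application sends to the Microsoft identity platform. The tenant ID, client ID, and redirect URI are placeholders, not values taken from any article listed here.

```python
from urllib.parse import urlencode

# Placeholders: substitute your tenant ID, the app's client (application) ID,
# and a redirect URI registered on the app registration.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
REDIRECT_URI = "https://localhost:5000/auth/callback"

# v2.0 authorize endpoint for the Microsoft identity platform.
authorize_endpoint = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/authorize"

params = {
    "client_id": CLIENT_ID,
    "response_type": "code",          # authorization code flow
    "redirect_uri": REDIRECT_URI,
    "response_mode": "query",
    "scope": "openid profile email",  # "openid" marks this as an OIDC sign-in
    "state": "12345",                 # anti-CSRF value; generate randomly in a real app
}

# Direct the user's browser to this URL to start the sign-in.
print(f"{authorize_endpoint}?{urlencode(params)}")
```

The app then redeems the authorization code returned to the redirect URI for ID and access tokens, which is what makes the OIDC option suitable when the application supports it.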
active-directory Prevent Domain Hints With Home Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md
Title: Prevent sign-in auto-acceleration using Home Realm Discovery policy- description: Learn how to prevent domain_hint auto-acceleration to federated IDPs.
active-directory Protect Against Consent Phishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/protect-against-consent-phishing.md
Title: Protecting against consent phishing- description: Learn ways of mitigating against app-based consent phishing attacks using Azure AD.
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Title: Review and take action on admin consent requests- description: Learn how to review and take action on admin consent requests that were created after you were designated as a reviewer.
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Title: Secure hybrid access with Azure AD partner integration description: Help customers discover and migrate SaaS applications into Azure AD and connect apps that use legacy authentication methods with Azure AD.-
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD. -
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
Title: Secure hybrid access with Azure AD and Silverfort description: In this tutorial, learn how to integrate Silverfort with Azure AD for secure hybrid access -
active-directory Tenant Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tenant-restrictions.md
Title: Use tenant restrictions to manage access to SaaS apps description: How to use tenant restrictions to manage which users can access apps based on their Azure AD tenant.-
active-directory Troubleshoot App Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-app-publishing.md
Title: Your sign-in was blocked description: Troubleshoot a blocked sign-in to the Microsoft Application Network portal. -
active-directory Troubleshoot Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-password-based-sso.md
Title: Troubleshoot password-based single sign-on description: Troubleshoot issues with an Azure AD app that's configured for password-based single sign-on.-
active-directory Troubleshoot Saml Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/troubleshoot-saml-based-sso.md
Title: Troubleshoot SAML-based single sign-on description: Troubleshoot issues with an Azure AD app that's configured for SAML-based single sign-on.-
active-directory Tutorial Govern Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-govern-monitor.md
Title: "Tutorial: Govern and monitor applications"- description: In this tutorial, you learn how to govern and monitor an application in Azure Active Directory.
active-directory Tutorial Manage Access Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-access-security.md
Title: "Tutorial: Manage application access and security"- description: In this tutorial, you learn how to manage access to an application in Azure Active Directory and make sure it's secure.
active-directory Tutorial Manage Certificates For Federated Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/tutorial-manage-certificates-for-federated-single-sign-on.md
Title: "Tutorial: Manage federation certificates" description: In this tutorial, you'll learn how to customize the expiration date for your federation certificates, and how to renew certificates that will soon expire.-
Using the information in this tutorial, an administrator of the application lear
## Auto-generated certificate for gallery and non-gallery applications
-When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application.
+When you add a new application from the gallery and configure a SAML-based sign-on (by selecting **Single sign-on** > **SAML** from the application overview page), Azure AD generates a self-signed certificate for the application that is valid for three years. To download the active certificate as a security certificate (**.cer**) file, return to that page (**SAML-based sign-on**) and select a download link in the **SAML Signing Certificate** heading. You can choose between the raw (binary) certificate or the Base64 (base 64-encoded text) certificate. For gallery applications, this section might also show a link to download the certificate as federation metadata XML (an **.xml** file), depending on the requirement of the application.
You can also download an active or inactive certificate by selecting the **SAML Signing Certificate** heading's **Edit** icon (a pencil), which displays the **SAML Signing Certificate** page. Select the ellipsis (**...**) next to the certificate you want to download, and then choose which certificate format you want. You have the additional option to download the certificate in privacy-enhanced mail (PEM) format. This format is identical to Base64 but with a **.pem** file name extension, which isn't recognized in Windows as a certificate format.
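As a small illustration of working with the downloaded signing certificate, the sketch below loads a Base64 (**.cer**) or PEM file and prints its expiration date so you can track renewal. It assumes the third-party `cryptography` package and a hypothetical file name; it isn't part of the tutorial itself.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography import x509

# Hypothetical path to the Base64 certificate downloaded from the
# SAML Signing Certificate section (.cer Base64 and .pem use the same encoding).
with open("signing-certificate.cer", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:  ", cert.subject.rfc4514_string())
print("Not after:", cert.not_valid_after)  # expiry date to watch before renewing
```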
active-directory V2 Howto App Gallery Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/v2-howto-app-gallery-listing.md
Title: Publish your application description: Learn how to publish your application in the Azure Active Directory application gallery. -
active-directory View Applications Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/view-applications-portal.md
Title: 'Quickstart: View enterprise applications' description: View the enterprise applications that are registered to use your Azure Active Directory tenant.-
active-directory Ways Users Get Assigned To Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/ways-users-get-assigned-to-applications.md
Title: Understand how users are assigned to apps description: Understand how users get assigned to an app that is using Azure Active Directory for identity management.-
active-directory What Is Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md
Title: Manage access to apps- description: Describes how Azure Active Directory enables organizations to specify the apps to which each user has access.
active-directory What Is Application Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-application-management.md
Title: What is application management? description: An overview of managing the lifecycle of an application in Azure Active Directory.-
If you develop your own business application, you can register it with Azure AD
If you want to make your application available through the gallery, you can [submit a request to have it added](../manage-apps/v2-howto-app-gallery-listing.md). - ### On-premises applications If you want to continue using an on-premises application, but take advantage of what Azure AD offers, connect it with Azure AD using [Azure AD Application Proxy](../app-proxy/application-proxy.md). Application Proxy can be implemented when you want to publish on-premises applications externally. Remote users who need access to internal applications can then access them in a secure manner.
As an administrator, you can [grant tenant-wide admin consent](grant-admin-conse
### Single sign-on
-Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For a simple example of how to configure SAML-based SSO for an enterprise application in your Azure AD tenant, see [Quickstart: Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md).
+Consider implementing SSO in your application. You can manually configure most applications for SSO. The most popular options in Azure AD are [SAML-based SSO and OpenID Connect-based SSO](../develop/active-directory-v2-protocols.md). Before you start, make sure that you understand the requirements for SSO and how to [plan for deployment](plan-sso-deployment.md). For training related to configuring SAML-based SSO for an enterprise application in your Azure AD tenant, see [Enable single sign-on for an application by using Azure Active Directory](/learn/modules/enable-single-sign-on).
### User, group, and owner assignment By default, all users can access your enterprise applications without being assigned to them. However, if you want to assign the application to a set of users, your application requires user assignment. For a simple example of how to create and assign a user account to an application, see [Quickstart: Create and assign a user account](add-application-portal-assign-users.md).
-If included in your subscription, [assign groups to an application](assign-user-or-group-access-portal.md) so that you can delegate ongoing access management to the group owner.
+If included in your subscription, [assign groups to an application](assign-user-or-group-access-portal.md) so that you can delegate ongoing access management to the group owner.
[Assigning owners](assign-app-owners.md) is a simple way to grant the ability to manage all aspects of Azure AD configuration for an application. As an owner, a user can manage the organization-specific configuration of the application.
active-directory What Is Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-single-sign-on.md
Title: What is single sign-on? description: Learn about single sign-on for enterprise applications in Azure Active Directory.-
If you're a user of an application, you likely don't care much about SSO details
## Next steps -- [Quickstart: Enable single sign on](add-application-portal-setup-sso.md)
+- [Plan for single sign-on deployment](plan-sso-deployment.md)
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
Moving a user-assigned managed identity to a different resource group isn't supp
Managed identity tokens are cached by the underlying Azure infrastructure for performance and resiliency purposes: the back-end services for managed identities maintain a cache per resource URI for around 24 hours. It can take several hours for changes to a managed identity's permissions to take effect, for example. Today, it is not possible to force a managed identity's token to be refreshed before its expiry. For more information, see [Limitation of using managed identities for authorization](managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization).
+### What happens to tokens after a managed identity is deleted?
+When a managed identity is deleted, an Azure resource that was previously associated with that identity can no longer request new tokens for it. Tokens that were issued before the identity was deleted remain valid until their original expiry. Some target endpoints' authorization systems perform additional checks in the directory for the identity, in which case the request fails because the object can't be found. However, some systems, such as Azure RBAC, continue to accept requests from that token until it expires.
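As a rough sketch of the behavior described above, the following code requests a token for a VM's managed identity from the documented Azure Instance Metadata Service (IMDS) endpoint and prints its expiry. The target resource is only an example; after the identity is deleted, this request starts failing, but a token obtained earlier remains usable until the printed time.

```python
import datetime
import requests  # third-party: pip install requests

# Documented IMDS token endpoint, reachable only from inside an Azure VM.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"
params = {
    "api-version": "2018-02-01",
    "resource": "https://management.azure.com/",  # example target resource
}

resp = requests.get(IMDS_TOKEN_URL, params=params, headers={"Metadata": "true"})
resp.raise_for_status()
token = resp.json()

# "expires_on" is an epoch-seconds string; the token stays valid until then
# even if the managed identity is deleted in the meantime.
expires_on = datetime.datetime.fromtimestamp(int(token["expires_on"]))
print("Token expires at:", expires_on)
```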
+ ## Next steps - Learn [how managed identities work with virtual machines](how-managed-identities-work-vm.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can enable, disable, and delete devices in Azure AD and read
## Compliance Administrator
-Users with this role have permissions to manage compliance-related features in the Microsoft 365 compliance center, Microsoft 365 admin center, Azure, and Office 365 Security & Compliance Center. Assignees can also manage all features within the Exchange admin center and Teams & Skype for Business admin centers and create support tickets for Azure and Microsoft 365. More information is available at [About Microsoft 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d).
+Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security & Compliance Center. Assignees can also manage all features within the Exchange admin center and Teams & Skype for Business admin centers and create support tickets for Azure and Microsoft 365. More information is available at [About Microsoft 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d).
In | Can do
-- | -
-[Microsoft 365 compliance center](https://protection.office.com) | Protect and manage your organization's data across Microsoft 365 services<br>Manage compliance alerts
+[Microsoft Purview compliance portal](https://protection.office.com) | Protect and manage your organization's data across Microsoft 365 services<br>Manage compliance alerts
[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control.
[Intune](/intune/role-based-access-control) | View all Intune audit data
In | Can do
## Compliance Data Administrator
-Users with this role have permissions to track data in the Microsoft 365 compliance center, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams & Skype for Business admin center and create support tickets for Azure and Microsoft 365. [This documentation](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) has details on differences between Compliance Administrator and Compliance Data Administrator.
+Users with this role have permissions to track data in the Microsoft Purview compliance portal, Microsoft 365 admin center, and Azure. Users can also track compliance data within the Exchange admin center, Compliance Manager, and Teams & Skype for Business admin center and create support tickets for Azure and Microsoft 365. [This documentation](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) has details on differences between Compliance Administrator and Compliance Data Administrator.
In | Can do
-- | -
-[Microsoft 365 compliance center](https://protection.office.com) | Monitor compliance-related policies across Microsoft 365 services<br>Manage compliance alerts
+[Microsoft Purview compliance portal](https://protection.office.com) | Monitor compliance-related policies across Microsoft 365 services<br>Manage compliance alerts
[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Data Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control.
[Intune](/intune/role-based-access-control) | View all Intune audit data
This administrator manages federation between Azure AD organizations and externa
## Global Administrator
-Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators.
+Users with this role have access to all administrative features in Azure Active Directory, as well as services that use Azure Active Directory identities like the Microsoft 365 Defender portal, the Microsoft Purview compliance portal, Exchange Online, SharePoint Online, and Skype for Business Online. Furthermore, Global Administrators can [elevate their access](../../role-based-access-control/elevate-access-global-admin.md) to manage all Azure subscriptions and management groups. This allows Global Administrators to get full access to all Azure resources using the respective Azure AD Tenant. The person who signs up for the Azure AD organization becomes a Global Administrator. There can be more than one Global Administrator at your company. Global Administrators can reset the password for any user and all other administrators.
> [!NOTE] > As a best practice, Microsoft recommends that you assign the Global Administrator role to fewer than five people in your organization. For more information, see [Best practices for Azure AD roles](best-practices.md).
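As a hedged illustration of checking that recommendation, the sketch below uses Microsoft Graph to count the members of the activated Global Administrator directory role. It assumes you already have an access token with permission to read directory roles; the token value is a placeholder.

```python
import requests  # third-party: pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
GRAPH_TOKEN = "<access token acquired elsewhere>"  # placeholder
headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

# List activated directory roles and pick out Global Administrator by name.
roles = requests.get(f"{GRAPH}/directoryRoles", headers=headers).json()["value"]
global_admin = next(r for r in roles if r["displayName"] == "Global Administrator")

# Fetch the role's members and report how many there are.
members = requests.get(
    f"{GRAPH}/directoryRoles/{global_admin['id']}/members", headers=headers
).json()["value"]

print(f"Global Administrators: {len(members)}")
for m in members:
    print(" -", m.get("userPrincipalName") or m.get("displayName"))
```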
active-directory Capriza Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/capriza-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Capriza Platform | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Capriza Platform'
description: Learn how to configure single sign-on between Azure Active Directory and Capriza Platform.
Previously updated : 02/12/2019 Last updated : 06/01/2022
-# Tutorial: Azure Active Directory integration with Capriza Platform
+# Tutorial: Azure AD SSO integration with Capriza Platform
-In this tutorial, you learn how to integrate Capriza Platform with Azure Active Directory (Azure AD).
-Integrating Capriza Platform with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Capriza Platform with Azure Active Directory (Azure AD). When you integrate Capriza Platform with Azure AD, you can:
-* You can control in Azure AD who has access to Capriza Platform.
-* You can enable your users to be automatically signed-in to Capriza Platform (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Capriza Platform.
+* Enable your users to be automatically signed-in to Capriza Platform with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Capriza Platform, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Capriza Platform single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Capriza Platform single sign-on (SSO) enabled subscription.
+* In addition to Cloud Application Administrator, the Application Administrator role can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Capriza Platform supports **SP** initiated SSO
-* Capriza Platform supports **Just In Time** user provisioning
-
-## Adding Capriza Platform from the gallery
-
-To configure the integration of Capriza Platform into Azure AD, you need to add Capriza Platform from the gallery to your list of managed SaaS apps.
-
-**To add Capriza Platform from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Capriza Platform supports **SP** initiated SSO.
+* Capriza Platform supports **Just In Time** user provisioning.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-4. In the search box, type **Capriza Platform**, select **Capriza Platform** from result panel then click **Add** button to add the application.
+## Add Capriza Platform from the gallery
- ![Capriza Platform in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Capriza Platform based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Capriza Platform needs to be established.
-
-To configure and test Azure AD single sign-on with Capriza Platform, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Capriza Platform Single Sign-On](#configure-capriza-platform-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Capriza Platform test user](#create-capriza-platform-test-user)** - to have a counterpart of Britta Simon in Capriza Platform that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Capriza Platform into Azure AD, you need to add Capriza Platform from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Capriza Platform** in the search box.
+1. Select **Capriza Platform** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
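The steps above use the Azure portal. As an alternative sketch (not part of the tutorial), a gallery application can also be added with the Microsoft Graph applicationTemplates API. The access token is a placeholder, and the display name filter simply mirrors the gallery entry used here.

```python
import requests  # third-party: pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
GRAPH_TOKEN = "<access token with Application.ReadWrite.All>"  # placeholder
headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

# Look up the gallery template by display name.
templates = requests.get(
    f"{GRAPH}/applicationTemplates",
    params={"$filter": "displayName eq 'Capriza Platform'"},
    headers=headers,
).json()["value"]
template_id = templates[0]["id"]

# Instantiate the template, which creates the application and its service principal.
created = requests.post(
    f"{GRAPH}/applicationTemplates/{template_id}/instantiate",
    json={"displayName": "Capriza Platform"},
    headers=headers,
).json()
print("Service principal id:", created["servicePrincipal"]["id"])
```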
-To configure Azure AD single sign-on with Capriza Platform, perform the following steps:
+## Configure and test Azure AD SSO for Capriza Platform
-1. In the [Azure portal](https://portal.azure.com/), on the **Capriza Platform** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Capriza Platform using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Capriza Platform.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Capriza Platform, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Capriza Platform SSO](#configure-capriza-platform-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Capriza Platform test user](#create-capriza-platform-test-user)** - to have a counterpart of B.Simon in Capriza Platform that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Capriza Platform** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
- ![Capriza Platform Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://<companyname>.capriza.com/<tenantid>`
To configure Azure AD single sign-on with Capriza Platform, perform the followin
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
6. On the **Set up Capriza Platform** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Capriza Platform Single Sign-On
-
-To configure single sign-on on **Capriza Platform** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Capriza Platform support team](mailto:support@capriza.com). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
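If you prefer to script this step, the following hedged sketch creates the same B.Simon test user through Microsoft Graph instead of the portal. The access token, domain, and password are placeholders; the token needs a user-creation permission such as User.ReadWrite.All.

```python
import requests  # third-party: pip install requests

GRAPH_TOKEN = "<access token acquired elsewhere>"  # placeholder
headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

new_user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "bsimon",
    "userPrincipalName": "B.Simon@contoso.com",      # use your own domain
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<generate-a-strong-password>",  # placeholder
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/users", json=new_user, headers=headers
)
resp.raise_for_status()
print("Created user id:", resp.json()["id"])
```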
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Capriza Platform.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Capriza Platform.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Capriza Platform**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Capriza Platform**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the **Default Access** role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
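The equivalent assignment can also be made through Microsoft Graph, as in the hedged sketch below. All IDs are placeholders: the user's object ID, the Capriza Platform service principal's object ID, and the app role ID (the all-zero GUID is the default access role when an app defines no roles).

```python
import requests  # third-party: pip install requests

GRAPH = "https://graph.microsoft.com/v1.0"
GRAPH_TOKEN = "<access token with AppRoleAssignment.ReadWrite.All>"  # placeholder
headers = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

user_id = "<B.Simon object id>"                          # placeholder
sp_id = "<Capriza Platform service principal object id>"  # placeholder
app_role_id = "00000000-0000-0000-0000-000000000000"      # default access role

assignment = {
    "principalId": user_id,    # who gets access
    "resourceId": sp_id,       # the enterprise application (service principal)
    "appRoleId": app_role_id,  # which role is granted
}

resp = requests.post(
    f"{GRAPH}/users/{user_id}/appRoleAssignments", json=assignment, headers=headers
)
resp.raise_for_status()
print("Assignment id:", resp.json()["id"])
```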
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Capriza Platform SSO
-2. In the applications list, select **Capriza Platform**.
-
- ![The Capriza Platform link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Capriza Platform** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Capriza Platform support team](mailto:support@capriza.com). They configure the SAML SSO connection so that it's set properly on both sides.
### Create Capriza Platform test user
The objective of this section is to create a user called Britta Simon in Capriza
There is no action item for you in this section. A new user is created during an attempt to access Capriza if one doesn't already exist.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Capriza Platform tile in the Access Panel, you should be automatically signed in to the Capriza Platform for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. You're redirected to the Capriza Platform Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to Capriza Platform Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Capriza Platform tile in My Apps, you're redirected to the Capriza Platform Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
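The scripted smoke test below only checks that the SP-initiated Sign-on URL hands the request off to Azure AD. It is a rough sketch: the tenant URL is a placeholder, and the exact redirect chain depends on how Capriza implements its SP-initiated flow, so treat it as a quick sanity check rather than a full SSO test.

```python
# Rough smoke test: confirm the SP-initiated Sign-on URL redirects toward Azure AD.
# The tenant URL below is a placeholder; the real redirect chain depends on Capriza's
# implementation, so this is only a quick sanity check, not a full SSO test.
import requests

SIGN_ON_URL = "https://<your-tenant>.capriza.com/"  # placeholder Sign-on URL

resp = requests.get(SIGN_ON_URL, allow_redirects=True, timeout=30)
visited = [r.url for r in resp.history] + [resp.url]

if any("login.microsoftonline.com" in url for url in visited):
    print("Sign-on URL hands off to Azure AD as expected.")
else:
    print("No Azure AD redirect observed; re-check the Basic SAML Configuration values.")
```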
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Capriza Platform, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Competencyiq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/competencyiq-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with CompetencyIQ | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with CompetencyIQ'
description: Learn how to configure single sign-on between Azure Active Directory and CompetencyIQ.
Previously updated : 01/23/2019 Last updated : 06/01/2022
-# Tutorial: Azure Active Directory integration with CompetencyIQ
+# Tutorial: Azure AD SSO integration with CompetencyIQ
-In this tutorial, you learn how to integrate CompetencyIQ with Azure Active Directory (Azure AD).
-Integrating CompetencyIQ with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate CompetencyIQ with Azure Active Directory (Azure AD). When you integrate CompetencyIQ with Azure AD, you can:
-* You can control in Azure AD who has access to CompetencyIQ.
-* You can enable your users to be automatically signed-in to CompetencyIQ (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to CompetencyIQ.
+* Enable your users to be automatically signed-in to CompetencyIQ with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with CompetencyIQ, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* CompetencyIQ single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* CompetencyIQ single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* CompetencyIQ supports **SP** initiated SSO
-
-## Adding CompetencyIQ from the gallery
-
-To configure the integration of CompetencyIQ into Azure AD, you need to add CompetencyIQ from the gallery to your list of managed SaaS apps.
-
-**To add CompetencyIQ from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* CompetencyIQ supports **SP** initiated SSO.
-4. In the search box, type **CompetencyIQ**, select **CompetencyIQ** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![CompetencyIQ in the results list](common/search-new-app.png)
+## Add CompetencyIQ from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with CompetencyIQ based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in CompetencyIQ needs to be established.
-
-To configure and test Azure AD single sign-on with CompetencyIQ, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure CompetencyIQ Single Sign-On](#configure-competencyiq-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create CompetencyIQ test user](#create-competencyiq-test-user)** - to have a counterpart of Britta Simon in CompetencyIQ that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of CompetencyIQ into Azure AD, you need to add CompetencyIQ from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **CompetencyIQ** in the search box.
+1. Select **CompetencyIQ** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for CompetencyIQ
-To configure Azure AD single sign-on with CompetencyIQ, perform the following steps:
+Configure and test Azure AD SSO with CompetencyIQ using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in CompetencyIQ.
-1. In the [Azure portal](https://portal.azure.com/), on the **CompetencyIQ** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with CompetencyIQ, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure CompetencyIQ SSO](#configure-competencyiq-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create CompetencyIQ test user](#create-competencyiq-test-user)** - to have a counterpart of B.Simon in CompetencyIQ that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **CompetencyIQ** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![CompetencyIQ Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://www.competencyiq.com/`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://<customer>.competencyiq.com/`
- b. In the **Identifier (Entity ID)** text box, type a URL:
- `https://www.competencyiq.com/`
- > [!NOTE] > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact [CompetencyIQ Client support team](https://www.competencyiq.com/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. 5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
6. On the **Set up CompetencyIQ** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure CompetencyIQ Single Sign-On
-
-To configure single sign-on on **CompetencyIQ** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CompetencyIQ support team](https://www.competencyiq.com/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
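Before you hand the **Federation Metadata XML** to the CompetencyIQ support team, you can sanity-check its contents locally. The sketch below uses only the Python standard library and assumes the file was saved as `federationmetadata.xml` (a hypothetical name) and follows the usual Azure AD metadata layout.

```python
# Minimal sketch: read the downloaded Federation Metadata XML and print the values a
# service provider typically needs (entity ID, SSO endpoints, signing certificate).
# Assumes the file was saved locally as "federationmetadata.xml" (hypothetical name).
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.parse("federationmetadata.xml").getroot()
print("Entity ID:", root.get("entityID"))

idp = root.find("md:IDPSSODescriptor", NS)
for sso in idp.findall("md:SingleSignOnService", NS):
    print(sso.get("Binding"), "->", sso.get("Location"))

cert = idp.find(".//ds:X509Certificate", NS)
print("Signing certificate (first 60 chars):", cert.text.strip()[:60], "...")
```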
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon. A scripted alternative is sketched after these steps.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
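If you script your test setup, the same B.Simon user can be created with one Microsoft Graph call instead of the portal steps above. This is a minimal sketch, not part of the tutorial's required steps; it assumes you already acquired an access token with the `User.ReadWrite.All` permission (for example via MSAL or the Azure CLI) and that `contoso.com` stands in for one of your verified domains.

```python
# Minimal sketch: create the B.Simon test user through Microsoft Graph instead of the
# portal. ACCESS_TOKEN is assumed to hold a token with User.ReadWrite.All; replace
# contoso.com with one of your tenant's verified domains.
import requests

ACCESS_TOKEN = "<access-token>"  # acquire via MSAL or the Azure CLI; out of scope here
GRAPH_USERS = "https://graph.microsoft.com/v1.0/users"

user = {
    "accountEnabled": True,
    "displayName": "B.Simon",
    "mailNickname": "B.Simon",
    "userPrincipalName": "B.Simon@contoso.com",
    "passwordProfile": {
        "forceChangePasswordNextSignIn": True,
        "password": "<initial-password>",
    },
}

resp = requests.post(GRAPH_USERS, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, json=user)
resp.raise_for_status()
print("Created user, object id:", resp.json()["id"])
```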
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to CompetencyIQ.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **CompetencyIQ**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **CompetencyIQ**.
-
- ![The CompetencyIQ link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to CompetencyIQ.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **CompetencyIQ**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure CompetencyIQ SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **CompetencyIQ** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [CompetencyIQ support team](https://www.competencyiq.com/). They use this information to have the SAML SSO connection set properly on both sides.
### Create CompetencyIQ test user

In this section, you create a user called Britta Simon in CompetencyIQ. Work with [CompetencyIQ support team](https://www.competencyiq.com/) to add the users in the CompetencyIQ platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the CompetencyIQ tile in the Access Panel, you should be automatically signed in to the CompetencyIQ for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This redirects to the CompetencyIQ Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to CompetencyIQ Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the CompetencyIQ tile in My Apps, you're redirected to the CompetencyIQ Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure CompetencyIQ, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Edcor Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/edcor-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Edcor | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Edcor'
description: Learn how to configure single sign-on between Azure Active Directory and Edcor.
Previously updated : 02/04/2019 Last updated : 06/01/2022
-# Tutorial: Azure Active Directory integration with Edcor
+# Tutorial: Azure AD SSO integration with Edcor
-In this tutorial, you learn how to integrate Edcor with Azure Active Directory (Azure AD).
-Integrating Edcor with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Edcor with Azure Active Directory (Azure AD). When you integrate Edcor with Azure AD, you can:
-* You can control in Azure AD who has access to Edcor.
-* You can enable your users to be automatically signed-in to Edcor (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Edcor.
+* Enable your users to be automatically signed-in to Edcor with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Edcor, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Edcor single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Edcor single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
+* Edcor supports **IDP** initiated SSO.
-* Edcor supports **IDP** initiated SSO
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-## Adding Edcor from the gallery
+## Add Edcor from the gallery
To configure the integration of Edcor into Azure AD, you need to add Edcor from the gallery to your list of managed SaaS apps.
-**To add Edcor from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Edcor**, select **Edcor** from result panel then click **Add** button to add the application.
-
- ![Edcor in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Edcor** in the search box.
+1. Select **Edcor** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Edcor based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Edcor needs to be established.
+## Configure and test Azure AD SSO for Edcor
-To configure and test Azure AD single sign-on with Edcor, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Edcor using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Edcor.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Edcor Single Sign-On](#configure-edcor-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Edcor test user](#create-edcor-test-user)** - to have a counterpart of Britta Simon in Edcor that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Edcor, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Edcor SSO](#configure-edcor-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Edcor test user](#create-edcor-test-user)** - to have a counterpart of B.Simon in Edcor that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Edcor, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Edcor** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Edcor** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+4. On the **Basic SAML Configuration** section, perform the following step:
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
-
- ![Edcor Domain and URLs single sign-on information](common/idp-identifier.png)
-
- In the **Identifier** text box, type a URL:
+ In the **Identifier** text box, type the URL:
   `https://sso.edcor.com/sp/ACS.saml2`

5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
- ![The Certificate download link](common/metadataxml.png)
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
6. On the **Set up Edcor** section, copy the appropriate URL(s) as per your requirement.
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
- a. Login URL
-
- b. Azure Ad Identifier
-
- c. Logout URL
-
-### Configure Edcor Single Sign-On
-
-To configure single sign-on on **Edcor** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Edcor support team](https://www.edcor.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows to copy appropriate configuration U R L.](common/copy-configuration-urls.png "Configuration")
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
+In this section, you'll create a test user in the Azure portal called B.Simon.
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Edcor.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Edcor**.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Edcor. A Microsoft Graph equivalent is sketched after these steps.
- ![Enterprise applications blade](common/enterprise-applications.png)
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Edcor**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
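The Microsoft Graph sketch below performs roughly the same assignment as the portal steps above. It assumes you have already looked up the object IDs of B.Simon and of the Edcor service principal, and that the token carries the `AppRoleAssignment.ReadWrite.All` permission; the all-zeros `appRoleId` is the conventional value when the application defines no app roles (the "Default Access" case).

```python
# Minimal sketch: assign B.Simon to the Edcor enterprise application through Microsoft
# Graph. Assumes USER_ID (B.Simon's object id), SP_ID (the Edcor service principal's
# object id), and an ACCESS_TOKEN with AppRoleAssignment.ReadWrite.All are already known.
import requests

ACCESS_TOKEN = "<access-token>"
USER_ID = "<b.simon-object-id>"
SP_ID = "<edcor-service-principal-object-id>"

assignment = {
    "principalId": USER_ID,
    "resourceId": SP_ID,
    # All-zeros GUID = default access when the app does not define app roles.
    "appRoleId": "00000000-0000-0000-0000-000000000000",
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{SP_ID}/appRoleAssignedTo",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=assignment,
)
resp.raise_for_status()
print("Assignment id:", resp.json()["id"])
```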
-2. In the applications list, select **Edcor**.
+## Configure Edcor SSO
- ![The Edcor link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Edcor** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Edcor support team](https://www.edcor.com/contact-us/). They use this information to have the SAML SSO connection set properly on both sides.
### Create Edcor test user

In this section, you create a user called Britta Simon in Edcor. Work with [Edcor support team](https://www.edcor.com/contact-us/) to add the users in the Edcor platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+## Test SSO
-When you click the Edcor tile in the Access Panel, you should be automatically signed in to the Edcor for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+In this section, you test your Azure AD single sign-on configuration with the following options.
-## Additional resources
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Edcor for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Edcor tile in My Apps, you should be automatically signed in to the Edcor for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Edcor, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Eluminate Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/eluminate-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with eLuminate | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with eLuminate'
description: Learn how to configure single sign-on between Azure Active Directory and eLuminate.
Previously updated : 02/05/2019 Last updated : 06/01/2022
-# Tutorial: Azure Active Directory integration with eLuminate
+# Tutorial: Azure AD SSO integration with eLuminate
-In this tutorial, you learn how to integrate eLuminate with Azure Active Directory (Azure AD).
-Integrating eLuminate with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate eLuminate with Azure Active Directory (Azure AD). When you integrate eLuminate with Azure AD, you can:
-* You can control in Azure AD who has access to eLuminate.
-* You can enable your users to be automatically signed-in to eLuminate (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to eLuminate.
+* Enable your users to be automatically signed-in to eLuminate with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with eLuminate, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* eLuminate single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* eLuminate single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* eLuminate supports **SP** initiated SSO
-
-## Adding eLuminate from the gallery
-
-To configure the integration of eLuminate into Azure AD, you need to add eLuminate from the gallery to your list of managed SaaS apps.
-
-**To add eLuminate from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* eLuminate supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-4. In the search box, type **eLuminate**, select **eLuminate** from result panel then click **Add** button to add the application.
+## Add eLuminate from the gallery
- ![eLuminate in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with eLuminate based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in eLuminate needs to be established.
-
-To configure and test Azure AD single sign-on with eLuminate, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure eLuminate Single Sign-On](#configure-eluminate-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create eLuminate test user](#create-eluminate-test-user)** - to have a counterpart of Britta Simon in eLuminate that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of eLuminate into Azure AD, you need to add eLuminate from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **eLuminate** in the search box.
+1. Select **eLuminate** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for eLuminate
-To configure Azure AD single sign-on with eLuminate, perform the following steps:
+Configure and test Azure AD SSO with eLuminate using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in eLuminate.
-1. In the [Azure portal](https://portal.azure.com/), on the **eLuminate** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with eLuminate, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure eLuminate SSO](#configure-eluminate-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create eLuminate test user](#create-eluminate-test-user)** - to have a counterpart of B.Simon in eLuminate that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **eLuminate** application integration page, find the **Manage** section and select **single sign-on**.
+2. On the **Select a single sign-on method** page, select **SAML**.
+3. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![eLuminate Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type a value using the following pattern:
+ `Eluminate/ClientShortName`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://ClientShortName.eluminate.ca/azuresso/account/SignIn`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `Eluminate/ClientShortName`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [eLuminate Client support team](mailto:support@intellimedia.ca) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-4. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [eLuminate Client support team](mailto:support@intellimedia.ca) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- ![The Certificate download link](common/copy-metadataurl.png)
+5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
-### Configure eLuminate Single Sign-On
-
-To configure single sign-on on **eLuminate** side, you need to send the **App Federation Metadata Url** to [eLuminate support team](mailto:support@intellimedia.ca). They set this setting to have the SAML SSO connection set properly on both sides.
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
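Before you send the **App Federation Metadata Url** to the eLuminate support team, you can fetch it and confirm it resolves to your tenant's metadata. A minimal sketch, assuming the copied value is pasted into `METADATA_URL` and the `requests` package is available:

```python
# Minimal sketch: fetch the copied App Federation Metadata Url and print the entity ID
# and SSO endpoints it advertises. Paste the value copied from the SAML Signing
# Certificate section into METADATA_URL before running.
import xml.etree.ElementTree as ET
import requests

METADATA_URL = "<app-federation-metadata-url>"
MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

root = ET.fromstring(requests.get(METADATA_URL, timeout=30).content)
print("Entity ID:", root.get("entityID"))

for sso in root.iter(f"{{{MD_NS}}}SingleSignOnService"):
    print(sso.get("Binding"), "->", sso.get("Location"))
```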
### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to eLuminate.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to eLuminate.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **eLuminate**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **eLuminate**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure eLuminate SSO
-2. In the applications list, select **eLuminate**.
-
- ![The eLuminate link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **eLuminate** side, you need to send the **App Federation Metadata Url** to the [eLuminate support team](mailto:support@intellimedia.ca). They use this information to have the SAML SSO connection set properly on both sides.
### Create eLuminate test user

In this section, you create a user called Britta Simon in eLuminate. Work with [eLuminate support team](mailto:support@intellimedia.ca) to add the users in the eLuminate platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the eLuminate tile in the Access Panel, you should be automatically signed in to the eLuminate for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This redirects to the eLuminate Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to eLuminate Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the eLuminate tile in My Apps, you're redirected to the eLuminate Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure eLuminate, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Ethicspoint Incident Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ethicspoint-incident-management-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| > [!NOTE]
- > These values are not real. Update these values with the actual Identifier,Reply URL and Sign-On URL. Contact [EthicsPoint Incident Management (EPIM) Client support team](https://www.navexglobal.com/company/contact-us) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier,Reply URL and Sign-On URL. Contact [EthicsPoint Incident Management (EPIM) Client support team](https://www.navex.com/en-us/products/navex-ethics-compliance/ethicspoint-hotline-incident-management/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to EthicsPoint Incident Management (EPIM).
## Configure EthicsPoint Incident Management (EPIM) SSO
-To configure single sign-on on **EthicsPoint Incident Management (EPIM)** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [EthicsPoint Incident Management (EPIM) support team](https://www.navexglobal.com/company/contact-us). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **EthicsPoint Incident Management (EPIM)** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [EthicsPoint Incident Management (EPIM) support team](https://www.navex.com/en-us/products/navex-ethics-compliance/ethicspoint-hotline-incident-management/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create EthicsPoint Incident Management (EPIM) test user
-In this section, you create a user called Britta Simon in EthicsPoint Incident Management (EPIM). Work with [EthicsPoint Incident Management (EPIM) support team](https://www.navexglobal.com/company/contact-us) to add the users in the EthicsPoint Incident Management (EPIM) platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in EthicsPoint Incident Management (EPIM). Work with [EthicsPoint Incident Management (EPIM) support team](https://www.navex.com/en-us/products/navex-ethics-compliance/ethicspoint-hotline-incident-management/) to add the users in the EthicsPoint Incident Management (EPIM) platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory Openlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openlearning-tutorial.md
Previously updated : 05/24/2022 Last updated : 05/31/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!Note] > If the **Identifier** value does not get auto populated, then please fill in the value manually according to your requirement. The Sign-on URL value is not real. Update this value with the actual Sign-on URL. Contact [OpenLearning Client support team](mailto:dev@openlearning.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+1. The OpenLearning Identity Authentication application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the OpenLearning Identity Authentication application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements. A small verification sketch follows the table.
+
+ | Name | Source Attribute|
+ | | |
+ | urn:oid:0.9.2342.19200300.100.1.3 | user.mail |
+ | urn:oid:2.16.840.1.113730.3.1.241 | user.displayname |
+ | urn:oid:1.3.6.1.4.1.5923.1.1.1.9 | user.extensionattribute1 |
+ | urn:oid:1.3.6.1.4.1.5923.1.1.1.6 | user.objectid |
+ | urn:oid:2.5.4.10 | user.companyname |
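To confirm the claims above are actually emitted, you can decode a `SAMLResponse` captured during a test sign-in (for example, from your browser's developer tools) and list the attribute names it contains. This is a minimal sketch using only the standard library; the base64 payload is a placeholder you capture yourself, and it assumes the HTTP-POST binding (plain base64, not deflated).

```python
# Minimal sketch: decode a captured SAMLResponse value (for example, copied from the
# browser's developer tools during a test sign-in) and list the attributes it carries,
# so you can check the urn:oid names above are present. The payload is a placeholder.
import base64
import xml.etree.ElementTree as ET

saml_response_b64 = "<base64-SAMLResponse-captured-from-a-test-sign-in>"
ASSERTION_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

root = ET.fromstring(base64.b64decode(saml_response_b64))
for attribute in root.iter(f"{{{ASSERTION_NS}}}Attribute"):
    values = [v.text for v in attribute.findall(f"{{{ASSERTION_NS}}}AttributeValue")]
    print(attribute.get("Name"), "=", values)
```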
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.

   ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenLearning.
### Create OpenLearning test user
-1. In a different web browser window, log in to your OpenLearning website as an administrator.
-
-1. Navigate to **People** and select **Invite People**.
-
-1. Enter the valid **Email Addresses** in the textbox and click **INVITE ALL USERS**.
-
- ![Screenshot shows inviting all users](./media/openlearning-tutorial/users.png "SAML USERS")
+In this section, a user called Britta Simon is created in OpenLearning. OpenLearning supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in OpenLearning, a new one is created after authentication.
## Test SSO
active-directory Seculio Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seculio-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Seculio'
+description: Learn how to configure single sign-on between Azure Active Directory and Seculio.
+ Last updated : 05/30/2022
+# Tutorial: Azure AD SSO integration with Seculio
+
+In this tutorial, you'll learn how to integrate Seculio with Azure Active Directory (Azure AD). When you integrate Seculio with Azure AD, you can:
+
+* Control in Azure AD who has access to Seculio.
+* Enable your users to be automatically signed-in to Seculio with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Seculio single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Seculio supports **SP** and **IDP** initiated SSO.
+
+## Add Seculio from the gallery
+
+To configure the integration of Seculio into Azure AD, you need to add Seculio from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Seculio** in the search box.
+1. Select **Seculio** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Seculio
+
+Configure and test Azure AD SSO with Seculio using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Seculio.
+
+To configure and test Azure AD SSO with Seculio, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Seculio SSO](#configure-seculio-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Seculio test user](#create-seculio-test-user)** - to have a counterpart of B.Simon in Seculio that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Seculio** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://seculio.com/saml/<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://seculio.com/saml/acs/<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://seculio.com/`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Seculio support team](mailto:seculio@lrm.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Seculio** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Attributes")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Seculio.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Seculio**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Seculio SSO
+
+To configure single sign-on on the **Seculio** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Seculio support team](mailto:seculio@lrm.jp). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Seculio test user
+
+In this section, you create a user called Britta Simon in Seculio. Work with [Seculio support team](mailto:seculio@lrm.jp) to add the users in the Seculio platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Seculio Sign-on URL, where you can initiate the login flow.
+
+* Go to the Seculio Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Seculio instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the Seculio tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the Seculio instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Seculio, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Tvu Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tvu-service-tutorial.md
Previously updated : 05/21/2022 Last updated : 06/01/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, the application is pre-configured and the necessary URLs are already pre-populated with Azure. The user needs to save the configuration by clicking the **Save** button.
-1. TVU Service application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. Your TVU Service application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example. The default value of **Unique User Identifier** is **user.userprincipalname**, but TVU Service expects this to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
![Screenshot shows the image of TVU Service application.](common/default-attributes.png "Attributes")
active-directory Xcarrier Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/xcarrier-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with xCarrier®'
+description: Learn how to configure single sign-on between Azure Active Directory and xCarrier®.
++++++++ Last updated : 05/30/2022++++
+# Tutorial: Azure AD SSO integration with xCarrier®
+
+In this tutorial, you'll learn how to integrate xCarrier® with Azure Active Directory (Azure AD). When you integrate xCarrier® with Azure AD, you can:
+
+* Control in Azure AD who has access to xCarrier®.
+* Enable your users to be automatically signed-in to xCarrier® with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* xCarrier® single sign-on (SSO) enabled subscription.
+* Along with Cloud Application Administrator, Application Administrator can also add or manage applications in Azure AD.
+For more information, see [Azure built-in roles](../roles/permissions-reference.md).
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* xCarrier® supports **SP** and **IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add xCarrier® from the gallery
+
+To configure the integration of xCarrier® into Azure AD, you need to add xCarrier® from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **xCarrier®** in the search box.
+1. Select **xCarrier®** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for xCarrier®
+
+Configure and test Azure AD SSO with xCarrier® using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in xCarrier®.
+
+To configure and test Azure AD SSO with xCarrier®, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure xCarrier® SSO](#configure-xcarrier-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create xCarrier® test user](#create-xcarrier-test-user)** - to have a counterpart of B.Simon in xCarrier® that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **xCarrier®** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows to edit Basic S A M L Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user doesn't have to perform any steps, as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://msdev.myxcarrier.com/Home/Index`
+
+1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up xCarrier®** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to xCarrier®.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **xCarrier®**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure xCarrier® SSO
+
+To configure single sign-on on the **xCarrier®** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [xCarrier® support team](mailto:pw_support@elemica.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create xCarrier® test user
+
+In this section, a user called B.Simon is created in xCarrier®. xCarrier® supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in xCarrier®, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the xCarrier® Sign-on URL, where you can initiate the login flow.
+
+* Go to the xCarrier® Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the xCarrier® instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in either mode. When you click the xCarrier® tile in My Apps, if the application is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the xCarrier® instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure xCarrier®, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
The following is a list of FedRAMP resources:
* [FedRAMP High Azure Policy built-in initiative definition](../../governance/policy/samples/fedramp-high.md)
-* [Microsoft 365 compliance center](/microsoft-365/compliance/microsoft-365-compliance-center)
+* [Microsoft Purview compliance portal](/microsoft-365/compliance/microsoft-365-compliance-center)
* [Microsoft Compliance Manager](/microsoft-365/compliance/compliance-manager)
active-directory Get Started Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/get-started-request-api.md
Title: How to call the Request Service REST API (preview)- description: Learn how to issue and verify by using the Request Service REST API documentationCenter: ''
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
Title: Specify the Request Service REST API issuance request (preview)- description: Learn how to issue a verifiable credential that you've issued. documentationCenter: ''
active-directory Presentation Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/presentation-request-api.md
Title: Specify the Request Service REST API verify request (preview)- description: Learn how to start a presentation request in Verifiable Credentials documentationCenter: ''
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Title: Configure kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet. Previously updated : 06/02/2020 Last updated : 06/02/2022
This article shows you how to use *kubenet* networking to create and use a virtu
* The virtual network for the AKS cluster must allow outbound internet connectivity. * Don't create more than one AKS cluster in the same subnet. * AKS clusters may not use `169.254.0.0/16`, `172.30.0.0/16`, `172.31.0.0/16`, or `192.0.2.0/24` for the Kubernetes service address range, pod address range or cluster virtual network address range.
-* The cluster identity used by the AKS cluster must have at least [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. You must also have the appropriate permissions, such as the subscription owner, to create a cluster identity and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required:
+* The cluster identity used by the AKS cluster must have at least the [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) role on the subnet within your virtual network. The Azure CLI performs this role assignment automatically. If you're using an ARM template or another client, you need to perform the role assignment manually. You must also have the appropriate permissions, such as the subscription owner, to create a cluster identity and assign it permissions. If you wish to define a [custom role](../role-based-access-control/custom-roles.md) instead of using the built-in Network Contributor role, the following permissions are required (a sketch of such a role definition follows this list):
* `Microsoft.Network/virtualNetworks/subnets/join/action` * `Microsoft.Network/virtualNetworks/subnets/read`
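If you go the custom role route, the following is a minimal sketch of a role definition that carries just these two permissions. The role name, description, and `<subscription-id>` placeholder are illustrative, not values from this article.

```azurecli-interactive
# Create a custom role containing only the subnet permissions listed above (illustrative name and scope)
az role definition create --role-definition '{
  "Name": "AKS Subnet Joiner (example)",
  "Description": "Lets an AKS cluster identity read and join a delegated subnet.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/subnets/join/action",
    "Microsoft.Network/virtualNetworks/subnets/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```

You would then assign this custom role, instead of Network Contributor, to the cluster identity at the subnet scope.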
The following example output shows the application ID and password for your serv
To assign the correct delegations in the remaining steps, use the [az network vnet show][az-network-vnet-show] and [az network vnet subnet show][az-network-vnet-subnet-show] commands to get the required resource IDs. These resource IDs are stored as variables and referenced in the remaining steps:
+> [!NOTE]
+> If you're using the Azure CLI, you can skip this step. With an ARM template or another client, you need to perform the role assignment shown below.
+ ```azurecli-interactive VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myAKSVnet --query id -o tsv) SUBNET_ID=$(az network vnet subnet show --resource-group myResourceGroup --vnet-name myAKSVnet --name myAKSSubnet --query id -o tsv)
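The role assignment itself, which the Azure CLI otherwise performs for you, can then be granted with a command along the following lines. This is a sketch: `<appId>` stands for the service principal's application ID from the earlier output, and `$SUBNET_ID` is the variable populated above.

```azurecli-interactive
# Grant the cluster identity Network Contributor on the AKS subnet (manual equivalent of the CLI's automatic assignment)
az role assignment create \
    --assignee <appId> \
    --role "Network Contributor" \
    --scope $SUBNET_ID
```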
aks Draft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/draft.md
Draft has the following commands to help ease your development on Kubernetes:
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Install the latest version of the [Azure CLI](/cli/azure/install-azure-cli-windows) and the *aks-preview* extension.-- If you don't have one already, you need to create an [AKS cluster][deploy-cluster].
+- If you don't have one already, you need to create an [AKS cluster][deploy-cluster] and an Azure Container Registry instance.
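If you don't have these resources yet, the following is a minimal sketch of creating both and attaching the registry to the cluster. The resource group, registry, and cluster names are placeholders, and the registry name must be globally unique.

```azurecli-interactive
# Create a resource group, an Azure Container Registry, and an AKS cluster that can pull from that registry
az group create --name myResourceGroup --location eastus
az acr create --resource-group myResourceGroup --name mydraftregistry --sku Basic
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --node-count 2 --generate-ssh-keys --attach-acr mydraftregistry
```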
### Install the `aks-preview` Azure CLI extension
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
The below table shows the available add-ons.
| open-service-mesh | Use Open Service Mesh with your AKS cluster. | [Open Service Mesh AKS add-on][osm] | | azure-keyvault-secrets-provider | Use Azure Keyvault Secrets Provider addon.| [Use the Azure Key Vault Provider for Secrets Store CSI Driver in an AKS cluster][keyvault-secret-provider] | | web_application_routing | Use a managed NGINX ingress Controller with your AKS cluster.| [Web Application Routing Overview][web-app-routing] |
+| keda | Event-driven autoscaling for the applications on your AKS cluster. | [Simplified application autoscaling with Kubernetes Event-driven Autoscaling (KEDA) add-on][keda]|
## Extensions
The below table shows a few examples of open-source and third-party integrations
|||| | [Helm][helm] | An open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. | [Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm][helm-qs] | | [Prometheus][prometheus] | An open source monitoring and alerting toolkit. | [Container insights with metrics in Prometheus format][prometheus-az-monitor], [Prometheus Helm chart][prometheus-helm-chart] |
-| [Grafana][grafana] | An open-source dashboard for observability. | [Deploy Grafana on Kubernetes][grafana-install] |
+| [Grafana][grafana] | An open-source dashboard for observability. | [Deploy Grafana on Kubernetes][grafana-install] or use [Managed Grafana][managed-grafana]|
| [Couchbase][couchdb] | A distributed NoSQL cloud database. | [Install Couchbase and the Operator on AKS][couchdb-install] | | [OpenFaaS][open-faas]| An open-source framework for building serverless functions by using containers. | [Use OpenFaaS with AKS][open-faas-aks] | | [Apache Spark][apache-spark] | An open source, fast engine for large-scale data processing. | Running Apache Spark jobs requires a minimum node size of *Standard_D3_v2*. See [running Spark on Kubernetes][spark-kubernetes] for more details on running Spark jobs on Kubernetes. |
The below table shows a few examples of open-source and third-party integrations
[spark-kubernetes]: https://spark.apache.org/docs/latest/running-on-kubernetes.html [dapr-overview]: ./dapr.md [gitops-overview]: ../azure-arc/kubernetes/conceptual-gitops-flux2.md
+[managed-grafana]: ../managed-grafan
+[keda]: keda-about.md
+[web-app-routing]: web-app-routing.md
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use managed identities in Azure Kubernetes Service description: Learn how to use managed identities in Azure Kubernetes Service (AKS) Previously updated : 01/25/2022 Last updated : 06/01/2022 # Use managed identities in Azure Kubernetes Service
You must have the following resource installed:
- The Azure CLI, version 2.23.0 or later
+> [!NOTE]
+> AKS creates a kubelet MI in the node resource group if you don't bring your own kubelet MI.
+ ## Limitations * Moving or migrating a managed identity-enabled cluster to a different tenant isn't supported.
az aks show -g <RGName> -n <ClusterName> --query "identity"
``` > [!NOTE]
-> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
+> For creating and using your own VNet, static IP address, or attached Azure disk where the resources are outside of the worker node resource group, the Azure CLI adds the role assignment automatically. If you're using an ARM template or another client, you need to use the PrincipalID of the cluster System Assigned Managed Identity to perform a role assignment. For more information on role assignment, see [Delegate access to other Azure resources](kubernetes-service-principal.md#delegate-access-to-other-azure-resources).
> > Permission grants to cluster Managed Identity used by Azure Cloud provider may take up 60 minutes to populate.
az aks show -g <RGName> -n <ClusterName> --query "identity"
## Bring your own control plane MI A custom control plane identity enables access to be granted to the existing identity prior to cluster creation. This feature enables scenarios such as using a custom VNET or outboundType of UDR with a pre-created managed identity. + You must have the Azure CLI, version 2.15.1 or later installed. ### Limitations
If you don't have a managed identity yet, you should go ahead and create one for
az identity create --name myIdentity --resource-group myResourceGroup ```
-Assign "Managed Identity Operator" role to the identity.
-
+Azure CLI automatically adds the required role assignment for the control plane MI. If you're using an ARM template or another client, you need to create the role assignment manually.
```azurecli-interactive
-az role assignment create --assignee <id> --role "Managed Identity Operator" --scope <id>
--
-The result should look like:
-
-```output
-{
- "canDelegate": null,
- "condition": null,
- "conditionVersion": null,
- "description": null,
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "name": "myIdentity,
- "principalId": "<principalId>",
- "principalType": "ServicePrincipal",
- "resourceGroup": "myResourceGroup",
- "roleDefinitionId": "/subscriptions/<subscriptionid>/providers/Microsoft.Authorization/roleDefinitions/<definitionid>",
- "scope": "<resourceid>",
- "type": "Microsoft.Authorization/roleAssignments"
-}
+az role assignment create --assignee <control-plane-identity-object-id> --role "Managed Identity Operator" --scope <kubelet-identity-resource-id>
``` If your managed identity is part of your subscription, you can use [az identity CLI command][az-identity-list] to query it.
A Kubelet identity enables access to be granted to the existing identity prior t
> [!WARNING] > Updating kubelet MI will upgrade Nodepool, which causes downtime for your AKS cluster as the nodes in the nodepools will be cordoned/drained and then reimaged. + ### Prerequisites - You must have the Azure CLI, version 2.26.0 or later installed.
az identity list --query "[].{Name:name, Id:id, Location:location}" -o table
### Create a cluster using kubelet identity
-Now you can use the following command to create your cluster with your existing identities. Provide the control plane identity id via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+Now you can use the following command to create your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
```azurecli-interactive az aks create \
az aks create \
--dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 \ --enable-managed-identity \
- --assign-identity <identity-id> \
- --assign-kubelet-identity <kubelet-identity-id>
+ --assign-identity <identity-resource-id> \
+ --assign-kubelet-identity <kubelet-identity-resource-id>
``` A successful cluster creation using your own kubelet managed identity contains the following output:
az upgrade
``` #### Updating your cluster with kubelet identity
-Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity id via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+Now you can use the following command to update your cluster with your existing identities. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
```azurecli-interactive az aks update \ --resource-group myResourceGroup \ --name myManagedCluster \ --enable-managed-identity \
- --assign-identity <identity-id> \
- --assign-kubelet-identity <kubelet-identity-id>
+ --assign-identity <identity-resource-id> \
+ --assign-kubelet-identity <kubelet-identity-resource-id>
``` A successful cluster update using your own kubelet managed identity contains the following output:
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
App Service supports clustering for JBoss EAP versions 7.4.1 and greater. To ena
When clustering is enabled, the JBoss EAP instances use the FILE_PING JGroups discovery protocol to discover new instances and persist the cluster information like the cluster members, their identifiers, and their IP addresses. On App Service, these files are under `/home/clusterinfo/`. The first EAP instance to start will obtain read/write permissions on the cluster membership file. Other instances will read the file, find the primary node, and coordinate with that node to be included in the cluster and added to the file.
+The Premium V3 and Isolated V2 App Service Plan types can optionally be distributed across Availability Zones to improve resiliency and reliability for your business-critical workloads. This architecture is also known as [zone redundancy](how-to-zone-redundancy.md). The JBoss EAP clustering feature is compatible with the zone redundancy feature.
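As an illustration, a zone-redundant Premium V3 plan can be created with the Azure CLI. This is a sketch; the resource group name, plan name, and region are placeholders rather than values from this article.

```azurecli-interactive
# Create a zone-redundant Premium V3 (P1v3) Linux App Service plan
az appservice plan create \
    --resource-group <resource-group> \
    --name <plan-name> \
    --location <region-with-availability-zones> \
    --is-linux \
    --sku P1V3 \
    --zone-redundant
```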
+ ### JBoss EAP App Service Plans <a id="jboss-eap-hardware-options"></a>
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
For Azure Database for MySQL and Azure Database for PostgreSQL, the database use
The `if` statement sets the MySQL username based on which identity the token applies to. The token is then passed in to the [standard MySQL connection](../mysql/connect-python.md) as the password of the Azure identity.
- The `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` environment variable enables the [Cleartext plugin](https://dev.mysql.com/doc/refman/8.0/cleartext-pluggable-authentication.html) in the MySQL Connector (see [Use Azure Active Directory for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
+ The `LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN` environment variable enables the [Cleartext plugin](https://dev.mysql.com/doc/refman/8.0/en/cleartext-pluggable-authentication.html) in the MySQL Connector (see [Use Azure Active Directory for authentication with MySQL](../mysql/howto-configure-sign-in-azure-ad-authentication.md#compatibility-with-application-drivers)).
# [Azure Database for PostgreSQL](#tab/postgresql)
What you learned:
> [Tutorial: Connect to Azure services that don't support managed identities (using Key Vault)](tutorial-connect-msi-key-vault.md) > [!div class="nextstepaction"]
-> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
+> [Tutorial: Isolate back-end communication with Virtual Network integration](tutorial-networking-isolate-vnet.md)
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Title: Deploy an ASP.NET Core and Azure SQL Database app to Azure App Service description: Learn how to deploy an ASP.NET Core web app to Azure App Service and connect to an Azure SQL Database. Previously updated : 03/02/2022 Last updated : 06/01/2022 ms.devlang: csharp
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
| [!INCLUDE [Create database step 1](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-01-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Azure SQL in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-01.png"::: | | [!INCLUDE [Create database step 2](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-02-240px.png" alt-text="A screenshot showing the create button on the SQL Servers page used to create a new database server." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-02.png"::: | | [!INCLUDE [Create database step 3](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-03-240px.png" alt-text="A screenshot showing the form to fill out to create a SQL Server in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-03.png"::: |
-| [!INCLUDE [Create database step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04-240px.png" alt-text="A screenshot showing the form used to allow other Azure services to connect to the database." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-04.png"::: |
| [!INCLUDE [Create database step 5](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-05-240px.png" alt-text="A screenshot showing how to use the search box to find the SQL databases item in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-05.png"::: | | [!INCLUDE [Create database step 6](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-06.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-06-240px.png" alt-text="A screenshot showing the create button in on the SQL databases page." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-06.png"::: | | [!INCLUDE [Create database step 7](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-sql-db-create-07.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-07-240px.png" alt-text="A screenshot showing the form to fill out to create a new SQL database in Azure." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-create-sql-07.png"::: |
az sql db create \
--name coreDb ```
-We also need to add the following firewall rule to our database server to allow other Azure resources to connect to it.
-
-```azurecli-interactive
-az sql server firewall-rule create \
- --resource-group msdocs-core-sql \
- --server <server-name> \
- --name AzureAccess \
- --start-ip-address 0.0.0.0 \
- --end-ip-address 0.0.0.0
-```
- ## 4 - Deploy to the App Service
We're now ready to deploy our .NET app to the App Service.
## 5 - Connect the App to the Database
-Next, we must connect the App hosted in our App Service to our database using a Connection String.
+Next, we must connect the App hosted in our App Service to our database using a Connection String. You can use [Service Connector](../service-connector/overview.md) to create the connection.
### [Azure portal](#tab/azure-portal)
Sign in to the [Azure portal](https://portal.azure.com/) and follow the steps to
| Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Connect Service step 1](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-01-240px.png" alt-text="A screenshot showing how to locate the database used by the App in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-01.png"::: |
-| [!INCLUDE [Connect Service step 2](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-02-240px.png" alt-text="A screenshot showing how to get the connection string used to connect to the database from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-02.png"::: |
-| [!INCLUDE [Connect Service step 3](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-03-240px.png" alt-text="A screenshot showing how to use the search box to find the App Service instance for the app in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-03.png"::: |
-| [!INCLUDE [Connect Service step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-04-240px.png" alt-text="A screenshot showing how to enter the connection string as an app setting for the web app in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-04.png"::: |
+| [!INCLUDE [Connect Service step 1](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-01.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-01-240px.png" alt-text="A screenshot showing how to locate the app service in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-01.png"::: |
+| [!INCLUDE [Connect Service step 2](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-02.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-02-240px.png" alt-text="A screenshot showing how to locate Service Connector from the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-02.png"::: |
+| [!INCLUDE [Connect Service step 3](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-03.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-03-240px.png" alt-text="A screenshot showing how to create a connection to the SQL database for the app in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-03.png"::: |
+| [!INCLUDE [Connect Service step 4](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-04.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-04-240px.png" alt-text="A screenshot showing how to enter username and password of SQL Database during service connection in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-04.png"::: |
+| [!INCLUDE [Connect Service step 5](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-05.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-05-240px.png" alt-text="A screenshot showing how to review and create the connection in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-05.png"::: |
+| [!INCLUDE [Connect Service step 6](<./includes/tutorial-dotnetcore-sqldb-app/azure-portal-connect-database-06.md>)] | :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-06-240px.png" alt-text="A screenshot showing how to get the connection string for a service connector in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-connect-sql-db-06.png"::: |
### [Azure CLI](#tab/azure-cli)
-Run Azure CLI commands in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-We can retrieve the Connection String for our database using the [az sql db show-connection-string](/cli/azure/sql/db#az-sql-db-show-connection-string) command. This command allows us to add the Connection String to our App Service configuration settings. Copy this Connection String value for later use.
+Configure the connection between your app and the SQL database by using the [az webapp connection create sql](/cli/azure/webapp/connection/create#az-webapp-connection-create-sql) command.
```azurecli-interactive
-az sql db show-connection-string \
- --client ado.net \
- --name coreDb \
- --server <your-server-name>
+az webapp connection create sql \
+ --resource-group msdocs-core-sql \
+ --name <your-app-service-name> \
+ --target-resource-group msdocs-core-sql \
+ --server <server-name> \
+ --database coreDB \
+ --query configurations
```
-Next, let's assign the Connection String to our App Service using the command below. `MyDbConnection` is the name of the Connection String in our appsettings.json file, which means it gets loaded by our app during startup.
+When prompted, provide the administrator username and password for the SQL database.
-Replace the username and password in the connection string with your own before running the command.
+> [!NOTE]
+> The CLI command does everything the app needs to successfully connect to the database, including:
+>
+> - In your App Service app, adds a connection string with the name `AZURE_SQL_CONNECTIONSTRING`, which your code can use for its database connection. If the connection string is already in use, `AZURE_SQL_<connection-name>_CONNECTIONSTRING` is used for the name instead.
+> - In your SQL database server, allows Azure services to access the SQL database server.
-```azurecli-interactive
-az webapp config connection-string set \
- -g msdocs-core-sql \
- -n <your-app-name> \
- -t SQLServer \
- --settings MyDbConnection=<your-connection-string>
+Copy this connection string value from the output for later.
-```
+To see the full command output, drop the `--query` parameter from the command.
## 6 - Generate the Database Schema
-To generate our database schema, we need to set up a firewall rule on our Database Server. This rule allows our local computer to connect to Azure. For this step, you'll need to know your local computer's IP address. For more information about how to find the IP address, [see here](https://whatismyipaddress.com/).
+To generate our database schema, set up a firewall rule on the SQL database server. This rule lets your local computer connect to Azure. For this step, you'll need to know your local computer's IP address. For more information about how to find the IP address, [see here](https://whatismyipaddress.com/).
### [Azure portal](#tab/azure-portal)
az sql server firewall-rule create --resource-group msdocs-core-sql --server <yo
-Next, update the appsettings.json file in our local app code with the [Connection String of our Azure SQL Database](#5connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
+Next, update the *appsettings.json* file in the sample project with the [Azure SQL Database connection string](#5connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
```json
-"ConnectionStrings": {
- "MyDbConnection": "Server=tcp:<your-server-name>.database.windows.net,1433;
- Initial Catalog=coredb;
- Persist Security Info=False;
- User ID=<username>;Password=<password>;
- Encrypt=True;
- TrustServerCertificate=False;"
- }
+"AZURE_SQL_CONNECTIONSTRING": "Data Source=<your-server-name>.database.windows.net,1433;Initial Catalog=coreDb;User ID=<username>;Password=<password>"
+```
+
+Next, update the *Startup.cs* file in the sample project by changing the existing connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`:
+
+```csharp
+services.AddDbContext<MyDatabaseContext>(options =>
+ options.UseSqlServer(Configuration.GetConnectionString("AZURE_SQL_CONNECTIONSTRING")));
``` Finally, run the following commands to install the necessary CLI tools for Entity Framework Core. Create an initial database migration file and apply those changes to update the database:
dotnet ef database update
After the migration finishes, the correct schema is created.
-If you receive an error stating `Client with IP address xxx.xxx.xxx.xxx is not allowed to access the server`, that means the IP address you entered into your Azure firewall rule is incorrect. To fix this issue, update the Azure firewall rule with the IP address provided in the error message.
+If you receive the error `Client with IP address xxx.xxx.xxx.xxx is not allowed to access the server`, that means the IP address you entered into your Azure firewall rule is incorrect. To fix this issue, update the Azure firewall rule with the IP address provided in the error message.
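To correct the rule from the CLI, you can update it in place. This is a sketch: `<firewall-rule-name>` is whatever name you gave the rule when you created it, and `<your-ip-address>` is the address reported in the error message.

```azurecli-interactive
# Point the existing firewall rule at the IP address reported in the error message
az sql server firewall-rule update \
    --resource-group msdocs-core-sql \
    --server <server-name> \
    --name <firewall-rule-name> \
    --start-ip-address <your-ip-address> \
    --end-ip-address <your-ip-address>
```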
## 7 - Browse the Deployed Application and File Directory
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
def analyze_general_documents():
docUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+ document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-document", docUrl)
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 03/23/2022 Last updated : 05/20/2022
In this quickstart, you'll deploy and configure the Azure Connected Machine agen
## Generate installation script
-Use the Azure portal to create a script that automates the agent download and installation, and establishes the connection with Azure Arc.
+Use the Azure portal to create a script that automates the agent download and installation and establishes the connection with Azure Arc.
-1. Launch the Azure Arc service in the Azure portal by searching for and selecting **Servers - Azure Arc**.
+<!--1. Launch the Azure Arc service in the Azure portal by searching for and selecting **Servers - Azure Arc**.
:::image type="content" source="media/quick-enable-hybrid-vm/search-machines.png" alt-text="Search for Azure Arc-enabled servers in the Azure portal.":::
-1. On the **Servers - Azure Arc** page, select **Add** near the upper left.
+1. On the **Servers - Azure Arc** page, select **Add** near the upper left.-->
-1. On the next page, from the **Add a single server** tile, select **Generate script**.
+1. [Go to the Azure portal page for adding servers with Azure Arc](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
+
+ :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png":::
+ > [!NOTE]
+ > In the portal, you can also reach the page for adding servers by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**.
1. Review the information on the **Prerequisites** page, then select **Next**.
azure-arc Manage Automatic Vm Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md
Title: Automatic Extension Upgrade (preview) for Azure Arc-enabled servers
-description: Learn how to enable the Automatic Extension Upgrade (preview) for your Azure Arc-enabled servers.
+ Title: Automatic extension upgrade (preview) for Azure Arc-enabled servers
+description: Learn how to enable the automatic extension upgrades for your Azure Arc-enabled servers.
Previously updated : 12/09/2021 Last updated : 06/02/2022
-# Automatic Extension Upgrade (preview) for Azure Arc-enabled servers
+# Automatic extension upgrade (preview) for Azure Arc-enabled servers
-Automatic Extension Upgrade (preview) is available for Azure Arc-enabled servers that have supported VM extensions installed. When Automatic Extension Upgrade (preview) is enabled on a machine, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension.
+Automatic extension upgrade (preview) is available for Azure Arc-enabled servers that have supported VM extensions installed. When automatic extension upgrade is enabled on a machine, the extension is upgraded automatically whenever the extension publisher releases a new version for that extension.
- Automatic Extension Upgrade has the following features:
+ Automatic extension upgrade has the following features:
- You can opt in and out of automatic upgrades at any time. - Each supported extension is enrolled individually, and you can choose which extensions to upgrade automatically. - Supported in all public cloud regions. > [!NOTE]
-> In this release, it is only possible to configure Automatic Extension Upgrade with the Azure CLI and Azure PowerShell module.
+> In this release, it is only possible to configure automatic extension upgrade with the Azure CLI and Azure PowerShell module.
-## How does Automatic Extension Upgrade work?
+## How does automatic extension upgrade work?
-The extension upgrade process replaces the existing Azure VM extension version supported by Azure Arc-enabled servers with a new version of the same extension when published by the extension publisher.
-
-A failed extension update is automatically retried. A retry is attempted every few days automatically without user intervention.
+The extension upgrade process replaces the existing Azure VM extension version supported by Azure Arc-enabled servers with a new version of the same extension when published by the extension publisher. This feature is enabled by default for all extensions you deploy to Azure Arc-enabled servers unless you explicitly opt out of automatic upgrades.
### Availability-first updates
For a group of Arc-enabled servers undergoing an update, the Azure platform will
**Across regions:** -- Geo-paired regions are not applicable.
+- Geo-paired regions aren't applicable.
**Within a region:** -- Availability Zones are not applicable.-- Machines are batched on a best effort basis to avoid concurrent updates for all machines registered with Arc-enabled servers in a subscription.
+- Availability Zones aren't applicable.
+- Machines are batched on a best effort basis to avoid concurrent updates for all machines registered with Arc-enabled servers in a subscription.
+
+### Automatic rollback and retries
+
+If an extension upgrade fails, Azure will try to repair the extension by performing the following actions:
+
+1. The Azure Connected Machine agent will automatically reinstall the last known good version of the extension to attempt to restore functionality.
+1. If the rollback is successful, the extension status will show as **Succeeded** and the extension will be added to the automatic upgrade queue again. The next upgrade attempt can be as soon as the next hour and will continue until the upgrade is successful.
+1. If the rollback fails, the extension status will show as **Failed** and the extension will no longer function as intended. You'll need to [remove](manage-vm-extensions-cli.md#remove-extensions) and [reinstall](manage-vm-extensions-cli.md#enable-extension) the extension to restore functionality.
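If you do end up removing and reinstalling an extension after a failed rollback (step 3 above), the CLI flow is roughly the following. This is a sketch that uses the Log Analytics agent for Linux purely as an example; the resource group, machine name, region, and workspace values are placeholders.

```azurecli-interactive
# Remove the failed extension, then reinstall it with its original settings
az connectedmachine extension delete \
    --resource-group myResourceGroup \
    --machine-name myMachine \
    --name OmsAgentForLinux

az connectedmachine extension create \
    --resource-group myResourceGroup \
    --machine-name myMachine \
    --name OmsAgentForLinux \
    --location <region> \
    --publisher Microsoft.EnterpriseCloud.Monitoring \
    --type OmsAgentForLinux \
    --settings '{"workspaceId": "<workspace-id>"}' \
    --protected-settings '{"workspaceKey": "<workspace-key>"}'
```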
+
+If you continue to have trouble upgrading an extension, you can [disable automatic extension upgrade](#disable-automatic-extension-upgrade) to prevent the system from trying again while you troubleshoot the issue. You can [enable automatic extension upgrade](#enable-automatic-extension-upgrade) again when you're ready.
## Supported extensions
-Automatic Extension Upgrade (preview) supports the following extensions (and more are added periodically):
+Automatic extension upgrade supports the following extensions (and more are added periodically):
- Azure Monitor Agent - Linux and Windows - Azure Security agent - Linux and Windows
Automatic Extension Upgrade (preview) supports the following extensions (and mor
- Key Vault Extension - Linux only - Log Analytics agent (OMS agent) - Linux only
-## Enabling Automatic Extension Upgrade (preview)
+## Enable automatic extension upgrade
-To enable Automatic Extension Upgrade (preview) for an extension, you must ensure the property `enable-auto-upgrade` is set to `true` and added to every extension definition individually.
+Automatic extension upgrade is enabled by default when you install extensions on Azure Arc-enabled servers. To enable automatic extension upgrade for an existing extension, you can use Azure CLI or Azure PowerShell to set the `enableAutomaticUpgrade` property on the extension to `true`. You'll need to repeat this process for every extension where you'd like to enable automatic upgrades.
-Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command with the `--name`, `--machine-name`, `--enable-auto-upgrade`, and `--resource-group` parameters.
+Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to enable automatic upgrade on an extension:
```azurecli az connectedmachine extension update \
az connectedmachine extension update \
--enable-auto-upgrade true ```
-To check the status of Automatic Extension Upgrade (preview) for all extensions on an Arc-enabled server, run the following command:
+To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command:
```azurecli az connectedmachine extension list --resource-group resourceGroupName --machine-name machineName --query "[].{Name:name, AutoUpgrade:properties.enableAutoUpgrade}" --output table ```
-To enable Automatic Extension Upgrade (preview) for an extension using Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet with the `-Name`, `-MachineName`, `-ResourceGroup`, and `-EnableAutomaticUpgrade` parameters.
+To enable automatic extension upgrade for an extension using Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet:
```azurepowershell Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name DependencyAgentLinux -EnableAutomaticUpgrade ```
-To check the status of Automatic Extension Upgrade (preview) for all extensions on an Arc-enabled server, run the following command:
+To check the status of automatic extension upgrade for all extensions on an Arc-enabled server, run the following command:
```azurepowershell Get-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName | Format-Table Name, EnableAutomaticUpgrade ``` - ## Extension upgrades with multiple extensions A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled.
-If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension does not impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
+If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
-## Disable Automatic Extension Upgrade
+## Disable automatic extension upgrade
-To disable Automatic Extension Upgrade (preview) for an extension, you must ensure the property `enable-auto-upgrade` is set to `false` and added to every extension definition individually.
+To disable automatic extension upgrade for an extension, set the `enable-auto-upgrade` property to `false`.
-### Using the Azure CLI
-
-Use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command with the `--name`, `--machine-name`, `--enable-auto-upgrade`, and `--resource-group` parameters.
+With Azure CLI, use the [az connectedmachine extension update](/cli/azure/connectedmachine/extension) command to disable automatic upgrade on an extension:
```azurecli
az connectedmachine extension update \
    --resource-group resourceGroupName \
    --machine-name machineName \
    --name extensionName \
    --enable-auto-upgrade false
```
-### Using Azure PowerShell
-
-Use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet with the `-Name`, `-MachineName`, `-ResourceGroup`, and `-EnableAutomaticUpgrade` parameters.
+With Azure PowerShell, use the [Update-AzConnectedMachineExtension](/powershell/module/az.connectedmachine/update-azconnectedmachineextension) cmdlet:
```azurepowershell
Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName machineName -Name DependencyAgentLinux -EnableAutomaticUpgrade:$false
```
+## Check automatic extension upgrade history
+
+You can use the Azure Activity Log to identify extensions that were automatically upgraded. You can find the Activity Log tab on individual Azure Arc-enabled server resources, resource groups, and subscriptions. Extension upgrades are identified by the `Upgrade Extensions on Azure Arc machines (Microsoft.HybridCompute/machines/upgradeExtensions/action)` operation.
+
+To view automatic extension upgrade history, search for the **Azure Activity Log** in the Azure Portal. Select **Add filter** and choose the Operation filter. For the filter criteria, search for "Upgrade Extensions on Azure Arc machines" and select that option. You can optionally add a second filter for **Event initiated by** and set "Azure Regional Service Manager" as the filter criteria to only see automatic upgrade attempts and exclude upgrades manually initiated by users.
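+
+If you prefer the command line, a query sketch like the following can surface the same upgrade operations with the Azure CLI. This isn't taken from the article above; the `--offset` window and the JMESPath field names are assumptions based on the common Activity Log schema.
+
+```azurecli
+# Sketch: list automatic extension upgrade operations from the Activity Log (time window and field names are assumptions)
+az monitor activity-log list \
+    --resource-group resourceGroupName \
+    --offset 7d \
+    --query "[?operationName.value=='Microsoft.HybridCompute/machines/upgradeExtensions/action'].{Resource:resourceId, Status:status.value, Time:eventTimestamp}" \
+    --output table
+```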
++ ## Next steps - You can deploy, manage, and remove VM extensions using the [Azure CLI](manage-vm-extensions-cli.md), [PowerShell](manage-vm-extensions-powershell.md), or [Azure Resource Manager templates](manage-vm-extensions-template.md).
azure-arc Manage Vm Extensions Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-template.md
Title: Enable VM extension using Azure Resource Manager template description: This article describes how to deploy virtual machine extensions to Azure Arc-enabled servers running in hybrid cloud environments using an Azure Resource Manager template. Previously updated : 07/16/2021 Last updated : 06/02/2022
To easily deploy the Log Analytics agent, the following sample is provided to in
"name": "[concat(parameters('vmName'),'/OMSAgentForLinux')]", "type": "Microsoft.HybridCompute/machines/extensions", "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
+ "apiVersion": "2022-03-10",
"properties": { "publisher": "Microsoft.EnterpriseCloud.Monitoring", "type": "OmsAgentForLinux",
+ "enableAutomaticUpgrade": true,
"settings": { "workspaceId": "[parameters('workspaceId')]" },
To easily deploy the Log Analytics agent, the following sample is provided to in
"name": "[concat(parameters('vmName'),'/MicrosoftMonitoringAgent')]", "type": "Microsoft.HybridCompute/machines/extensions", "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
+ "apiVersion": "2022-03-10",
"properties": { "publisher": "Microsoft.EnterpriseCloud.Monitoring", "type": "MicrosoftMonitoringAgent", "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true,
"settings": { "workspaceId": "[parameters('workspaceId')]" },
The Custom Script extension configuration specifies things like script location
"name": "[concat(parameters('vmName'),'/CustomScript')]", "type": "Microsoft.HybridCompute/machines/extensions", "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
+ "apiVersion": "2022-03-10",
"properties": { "publisher": "Microsoft.Azure.Extensions", "type": "CustomScript",
The Custom Script extension configuration specifies things like script location
"name": "[concat(parameters('vmName'),'/CustomScriptExtension')]", "type": "Microsoft.HybridCompute/machines/extensions", "location": "[parameters('location')]",
- "apiVersion": "2019-08-02-preview",
+ "apiVersion": "2022-03-10",
"properties": { "publisher": "Microsoft.Compute", "type": "CustomScriptExtension",
To use the Azure Monitor Dependency agent extension, the following sample is pro
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
"parameters": { "vmName": { "type": "string",
To use the Azure Monitor Dependency agent extension, the following sample is pro
} } },
- "variables": {
- "vmExtensionsApiVersion": "2017-03-30"
- },
"resources": [ { "type": "Microsoft.HybridCompute/machines/extensions", "name": "[concat(parameters('vmName'),'/DAExtension')]",
- "apiVersion": "[variables('vmExtensionsApiVersion')]",
+ "apiVersion": "2022-03-10",
"location": "[resourceGroup().location]", "dependsOn": [ ], "properties": { "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", "type": "DependencyAgentLinux",
- "autoUpgradeMinorVersion": true
+ "enableAutomaticUpgrade": true
} } ],
- "outputs": {
- }
+ "outputs": {
+ }
} ```
To use the Azure Monitor Dependency agent extension, the following sample is pro
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
"parameters": { "vmName": { "type": "string",
To use the Azure Monitor Dependency agent extension, the following sample is pro
} } },
- "variables": {
- "vmExtensionsApiVersion": "2017-03-30"
- },
"resources": [ { "type": "Microsoft.HybridCompute/machines/extensions", "name": "[concat(parameters('vmName'),'/DAExtension')]",
- "apiVersion": "[variables('vmExtensionsApiVersion')]",
+ "apiVersion": "2022-03-10",
"location": "[resourceGroup().location]", "dependsOn": [ ], "properties": { "publisher": "Microsoft.Azure.Monitoring.DependencyAgent", "type": "DependencyAgentWindows",
- "autoUpgradeMinorVersion": true
+ "enableAutomaticUpgrade": true
} } ],
- "outputs": {
- }
+ "outputs": {
+ }
} ```
The following JSON shows the schema for the Key Vault VM extension (preview). Th
{ "type": "Microsoft.HybridCompute/machines/extensions", "name": "[concat(parameters('vmName'),'/KVVMExtensionForLinux')]",
- "apiVersion": "2019-12-12",
+ "apiVersion": "2022-03-10",
"location": "[parameters('location')]", "properties": { "publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForLinux",
- "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true,
"settings": { "secretsManagementSettings": { "pollingIntervalInS": <polling interval in seconds, e.g. "3600">,
The following JSON shows the schema for the Key Vault VM extension (preview). Th
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net/secrets/mycertificate" }, "authenticationSettings": {
- "msiEndpoint": <MSI endpoint e.g.: "http://localhost:40342/metadata/identity">,
- "msiClientId": <MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ "msiEndpoint": "http://localhost:40342/metadata/identity"
} } }
The following JSON shows the schema for the Key Vault VM extension (preview). Th
{ "type": "Microsoft.HybridCompute/machines/extensions", "name": "[concat(parameters('vmName'),'/KVVMExtensionForWindows')]",
- "apiVersion": "2019-12-12",
+ "apiVersion": "2022-03-10",
"location": "[parameters('location')]", "properties": { "publisher": "Microsoft.Azure.KeyVault", "type": "KeyVaultForWindows",
- "autoUpgradeMinorVersion": true,
+ "enableAutomaticUpgrade": true,
"settings": { "secretsManagementSettings": { "pollingIntervalInS": "3600", "certificateStoreName": <certificate store name, e.g.: "MY">, "linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews, without necessitating a re-deployment. e.g.: false>, "certificateStoreLocation": <certificate store location, currently it works locally only e.g.: "LocalMachine">,
- "requireInitialSync": <initial synchronization of certificates e..g: true>,
+ "requireInitialSync": <initial synchronization of certificates e.g.: true>,
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net" }, "authenticationSettings": {
- "msiEndpoint": <MSI endpoint e.g.: "http://localhost:40342/metadata/identity">,
- "msiClientId": <MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
+ "msiEndpoint": "http://localhost:40342/metadata/identity"
} } }
Save the template file to disk. You can then deploy the extension to the connect
New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\KeyVaultExtension.json" ```
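If you manage deployments with the Azure CLI instead of PowerShell, an equivalent sketch is shown below; the resource group name and template file path simply mirror the PowerShell example and are assumptions.

```azurecli
# Sketch: deploy the same extension template with the Azure CLI (equivalent to the New-AzResourceGroupDeployment example above)
az deployment group create \
    --resource-group ContosoEngineering \
    --template-file "KeyVaultExtension.json"
```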
-## Deploy the Microsoft Defender for Cloud integrated scanner
-
-To use the Microsoft Defender for Cloud integrated scanner extension, the following sample is provided to run on Windows and Linux. If you are unfamiliar with the integrated scanner, see [Overview of Microsoft Defender for Cloud's vulnerability assessment solution](../../security-center/deploy-vulnerability-assessment-vm.md) for hybrid machines.
-
-### Template file for Windows
-
-```json
-{
- "properties": {
- "mode": "Incremental",
- "template": {
- "contentVersion": "1.0.0.0",
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "parameters": {
- "vmName": {
- "type": "string"
- },
- "apiVersionByEnv": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.HybridCompute/machines/providers/serverVulnerabilityAssessments",
- "name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]",
- "apiVersion": "[parameters('apiVersionByEnv')]"
- }
- ]
- },
- "parameters": {
- "vmName": {
- "value": "resourceName"
- },
- "apiVersionByEnv": {
- "value": "2015-06-01-preview"
- }
- }
- }
-}
-```
-
-### Template file for Linux
-
-```json
-{
- "properties": {
- "mode": "Incremental",
- "template": {
- "contentVersion": "1.0.0.0",
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "parameters": {
- "vmName": {
- "type": "string"
- },
- "apiVersionByEnv": {
- "type": "string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.HybridCompute/machines/providers/serverVulnerabilityAssessments",
- "name": "[concat(parameters('vmName'), '/Microsoft.Security/default')]",
- "apiVersion": "[parameters('apiVersionByEnv')]"
- }
- ]
- },
- "parameters": {
- "vmName": {
- "value": "resourceName"
- },
- "apiVersionByEnv": {
- "value": "2015-06-01-preview"
- }
- }
- }
-}
-```
-
-### Template deployment
-
-Save the template file to disk. You can then deploy the extension to the connected machine with the following command.
-
-```powershell
-New-AzResourceGroupDeployment -ResourceGroupName "ContosoEngineering" -TemplateFile "D:\Azure\Templates\AzureDefenderScanner.json"
-```
- ## Next steps * You can deploy, manage, and remove VM extensions using the [Azure PowerShell](manage-vm-extensions-powershell.md), from the [Azure portal](manage-vm-extensions-portal.md), or the [Azure CLI](manage-vm-extensions-cli.md).
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
Before you can run the script to connect your machines, you'll need to do the fo
If you are onboarding machines to Azure Arc-enabled servers, copy the following Ansible playbook template and save the playbook as `arc-server-onboard-playbook.yml`.
-```
+```yaml
- name: Onboard Linux and Windows Servers to Azure Arc-enabled servers with public endpoint connectivity hosts: <INSERT-HOSTS>
+ vars:
+ azure:
+ service_principal_id: 'INSERT-SERVICE-PRINCIPAL-CLIENT-ID'
+ service_principal_secret: 'INSERT-SERVICE-PRINCIPAL-SECRET'
+ resource_group: 'INSERT-RESOURCE-GROUP'
+ tenant_id: 'INSERT-TENANT-ID'
+ subscription_id: 'INSERT-SUBSCRIPTION-ID'
+ location: 'INSERT-LOCATION'
tasks:
- - name: Download the Connected Machine Agent on Linux servers
+ - name: Download the Connected Machine Agent on Linux servers
become: yes get_url: url: https://aka.ms/azcmagent dest: ~/install_linux_azcmagent.sh mode: '700' when: ansible_system == 'Linux'
- - name: Download the Connected Machine Agent on Windows servers
- win_get_url:
- url: https://aka.ms/AzureConnectedMachineAgent
- dest: C:\AzureConnectedMachineAgent.msi
+ - name: Download the Connected Machine Agent on Windows servers
+ win_get_url:
+ url: https://aka.ms/AzureConnectedMachineAgent
+ dest: C:\AzureConnectedMachineAgent.msi
when: ansible_os_family == 'Windows' - name: Install the Connected Machine Agent on Linux servers become: yes shell: bash ~/install_linux_azcmagent.sh when: ansible_system == 'Linux' - name: Install the Connected Machine Agent on Windows servers
- path: C:\AzureConnectedMachineAgent.msi
+ win_package:
+ path: C:\AzureConnectedMachineAgent.msi
when: ansible_os_family == 'Windows' - name: Connect the Connected Machine Agent on Linux servers to Azure Arc become: yes
- shell: sudo azcmagent connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>
+ shell: sudo azcmagent connect --service-principal-id {{ azure.service_principal_id }} --service-principal-secret {{ azure.service_principal_secret }} --resource-group {{ azure.resource_group }} --tenant-id {{ azure.tenant_id }} --location {{ azure.location }} --subscription-id {{ azure.subscription_id }}
when: ansible_system == 'Linux' - name: Connect the Connected Machine Agent on Windows servers to Azure
- win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>'
+ win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"'
when: ansible_os_family == 'Windows' ```
-<!--If you are onboarding Linux servers to Azure Arc-enabled servers, download the following Ansible playbook template and save the playbook as `arc-server-onboard-playbook.yml`.
-
-```
--- name: Onboard Linux Server to Azure Arc-enabled servers with public endpoint
- hosts: <INSERT-HOSTS>
- tasks:
- - name: Download the Connected Machine Agent
- become: yes
- get_url:
- url: https://aka.ms/azcmagent
- dest: ~/install_linux_azcmagent.sh
- mode: '700'
- when: ansible_system == 'Linux'
- - name: Install the Connected Machine Agent
- become: yes
- shell: bash ~/install_linux_azcmagent.sh
- when: ansible_system == 'Linux'
- - name: Connect the Connected Machine Agent to Azure
- become: yes
- shell: sudo azcmagent connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>
- when: ansible_system == 'Linux'
-```-->
- ## Modify the Ansible playbook After downloading the Ansible playbook, complete the following steps:
-1. Within the Ansible playbook, modify the fields under the task **Connect the Connected Machine Agent to Azure** with the service principal and Azure details collected earlier:
+1. Within the Ansible playbook, modify the variables under the **vars** section with the service principal and Azure details collected earlier (a sketch for creating the service principal follows these steps):
* Service Principal Id * Service Principal Secret
After downloading the Ansible playbook, complete the following steps:
* Subscription Id * Region
-1. Enter the correct hosts field capturing the target servers for onboarding to Azure Arc. You can employ Ansible patterns to selectively target which hybrid machines to onboard.
+1. Enter the correct hosts field capturing the target servers for onboarding to Azure Arc. You can employ [Ansible patterns](https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html#common-patterns) to selectively target which hybrid machines to onboard.
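+
+The service principal values used in the `vars` section can be created ahead of time. A minimal sketch with the Azure CLI is shown below; the service principal name is an assumption, and the scope placeholders match the ones used in the playbook.
+
+```azurecli
+# Sketch: create a service principal scoped to the target resource group for Arc onboarding (the name is illustrative)
+az ad sp create-for-rbac \
+    --name "Arc-Onboarding-SP" \
+    --role "Azure Connected Machine Onboarding" \
+    --scopes "/subscriptions/<INSERT-SUBSCRIPTION-ID>/resourceGroups/<INSERT-RESOURCE-GROUP>"
+```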
## Run the Ansible playbook
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
**&ast;** Authorizations for edge devices (such as Azure Data Box and Azure Stack Edge) apply only to Azure services that support on-premises, customer-managed devices. You are wholly responsible for the authorization package that covers the physical devices. For assistance with accelerating your onboarding and authorization of devices, contact your Microsoft account representative.
-**&ast;&ast;** Azure Information Protection (AIP) is part of the Microsoft Information Protection (MIP) solution - it extends the labeling and classification functionality provided by Microsoft 365. Before AIP can be used for DoD workloads at a given impact level (IL), the corresponding Microsoft 365 services must be authorized at the same IL.
+**&ast;&ast;** Azure Information Protection (AIP) is part of the Microsoft Purview Information Protection solution - it extends the labeling and classification functionality provided by Microsoft 365. Before AIP can be used for DoD workloads at a given impact level (IL), the corresponding Microsoft 365 services must be authorized at the same IL.
## Next steps
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Deborgem Enterprises Incorporated](https://deborgem.com)| |[Definitive Logic Corporation](https://www.definitivelogic.com/)| |[Dell Federal Services](https://www.dellemc.com/en-us/industry/federal/federal-government-it.htm#)|
-|[Dell Marketing LP](https://www.dell.com/learn/us/en/rc1009777/fed)|
+|[Dell Marketing LP](https://www.dell.com/)|
|[Delphi Technology Solutions](https://delphi-ts.com/)| |[Developing Today LLC](https://www.developingtoday.net/)| |[DevHawk, LLC](https://www.devhawk.io)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)| |[DominionTech Inc.](https://www.dominiontech.com)| |[DOT Personable Inc](http://solutions.personable.com/)|
-|[Doublehorn, LLC](https://doublehorn.com/)|
+|Doublehorn, LLC|
|[DXC Technology Services LLC](https://www.dxc.technology/services)| |[DXL Enterprises, Inc.](https://mahwahnjcoc.wliinc31.com/Supply-Chain-Management/DXL-Enterprises,-Inc-1349)| |[DynTek](https://www.dyntek.com)|
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Update-AzConnectedExtension -ResourceGroupName $env.ResourceGroupName -MachineNa
```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enabling-automatic-extension-upgrade-preview) feature, using the following PowerShell commands.
+The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
# [Windows](#tab/PowerShellWindowsArc) ```powershell Update-AzConnectedMachineExtension -ResourceGroup <resource-group-name> -MachineName <arc-server-name> -Name AMAWindows -EnableAutomaticUpgrade
az connectedmachine upgrade-extension --extension-targets "{\"Microsoft.Azure.Mo
```
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enabling-automatic-extension-upgrade-preview) feature, using the following PowerShell commands.
+The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade (preview)](../../azure-arc/servers/manage-automatic-vm-extension-upgrade.md#enable-automatic-extension-upgrade) feature, using the following PowerShell commands.
# [Windows](#tab/CLIWindowsArc) ```azurecli az connectedmachine extension update --name AzureMonitorWindowsAgent --machine-name <arc-server-name> --resource-group <resource-group-name> --enable-auto-upgrade true
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
Title: Release annotations for Application Insights | Microsoft Docs
description: Learn how to create annotations to track deployment or other significant events with Application Insights. Last updated 07/20/2021
+ms.reviewer: casocha
# Release annotations for Application Insights
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Last updated 05/11/2020 ms.devlang: csharp, java, javascript, vb + # Application Insights API for custom events and metrics
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
Last updated 11/23/2016 ms.devlang: csharp, javascript, python
+ms.reviewer: cithomas
# Filter and preprocess telemetry in the Application Insights SDK
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Last updated 05/16/2022 ms.devlang: csharp, java, javascript, python -+ # Application Map: Triage Distributed Applications
azure-monitor Worker Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md
Full example is shared [here](https://github.com/microsoft/ApplicationInsights-d
IServiceCollection services = new ServiceCollection(); // Being a regular console app, there is no appsettings.json or configuration providers enabled by default.
- // Hence connection string and any changes to default logging level must be specified here.
+ // Hence the instrumentation key / connection string and any changes to the default logging level must be specified here.
services.AddLogging(loggingBuilder => loggingBuilder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>("Category", LogLevel.Information));
- services.AddApplicationInsightsTelemetryWorkerService("connection string here");
+ services.AddApplicationInsightsTelemetryWorkerService("instrumentation key here");
+
+ // To pass a connection string instead:
+ // - create an ApplicationInsightsServiceOptions instance
+ // - set its ConnectionString property
+ // - pass it to AddApplicationInsightsTelemetryWorkerService()
// Build ServiceProvider. IServiceProvider serviceProvider = services.BuildServiceProvider();
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
To enable Container insights, use one of the methods that's described in the fol
| New Kubernetes cluster | [Enable monitoring for a new AKS cluster using the Azure CLI](../../aks/learn/quick-kubernetes-deploy-cli.md)| | | [Enable for a new AKS cluster by using the open-source tool Terraform](container-insights-enable-new-cluster.md#enable-using-terraform)| | | [Enable for a new OpenShift cluster by using an Azure Resource Manager template](container-insights-azure-redhat-setup.md#enable-for-a-new-cluster-using-an-azure-resource-manager-template) |
-| | [Enable for a new OpenShift cluster by using the Azure CLI](/cli/azure/openshift#az-openshift-create) |
+| | [Enable for a new OpenShift cluster by using the Azure CLI](/azure/openshift/#az-openshift-create) |
| Existing AKS cluster | [Enable monitoring for an existing AKS cluster using the Azure CLI](container-insights-enable-existing-clusters.md#enable-using-azure-cli) | | |[Enable for an existing AKS cluster using Terraform](container-insights-enable-existing-clusters.md#enable-using-terraform) | | | [Enable for an existing AKS cluster from Azure Monitor](container-insights-enable-existing-clusters.md#enable-from-azure-monitor-in-the-portal)|
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
Title: Container insights region mappings description: Describes the region mappings supported between Container insights, Log Analytics Workspace, and custom metrics. Previously updated : 09/22/2020 Last updated : 05/27/2022
Supported AKS regions are listed in [Products available by region](https://azure
|**Korea** | | |KoreaSouth |KoreaCentral | |**US** | |
-|WestCentralUS<sup>1</sup>|EastUS<sup>1</sup>|
+|WestCentralUS<sup>1</sup>|EastUS |
-<sup>1</sup> Due to capacity restraints, the region isn't available when creating new resources. This includes a Log Analytics workspace. However, preexisting linked resources in the region should continue to work.
## Custom metrics supported regions
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Before using Activity log insights, you'll have to [enable sending logs to your
### How does Activity log insights work?
-Activity logs you send to a [Log Analytics workspace](/articles/azure-monitor/logs/log-analytics-workspace-overview.md) are stored in a table called AzureActivity.
+Activity logs you send to a [Log Analytics workspace](/azure/azure-monitor/logs/log-analytics-workspace-overview) are stored in a table called AzureActivity.
-Activity log insights are a curated [Log Analytics workbook](/articles/azure-monitor/visualize/workbooks-overview.md) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
+Activity log insights are a curated [Log Analytics workbook](/azure/azure-monitor/visualize/workbooks-overview) with dashboards that visualize the data in the AzureActivity table. For example, which administrators deleted, updated or created resources, and whether the activities failed or succeeded.
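+
+Because the data lands in the AzureActivity table, you can also query it directly. The following sketch uses the Azure CLI; the workspace GUID placeholder, the time span, and the sample KQL are assumptions for illustration.
+
+```azurecli
+# Sketch: count Activity log records per category in the AzureActivity table (workspace GUID is a placeholder)
+az monitor log-analytics query \
+    --workspace "<WORKSPACE-GUID>" \
+    --analytics-query "AzureActivity | summarize count() by CategoryValue" \
+    --timespan P7D
+```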
:::image type="content" source="media/activity-log/activity-logs-insights-main-screen.png" lightbox= "media/activity-log/activity-logs-insights-main-screen.png" alt-text="A screenshot showing Azure Activity logs insights dashboards.":::
To view Activity log insights on a resource level:
1. At the top of the **Activity Logs Insights** page, select: 1. A time range for which to view data from the **TimeRange** dropdown.
- * **Azure Activity Log Entries** shows the count of Activity log records in each [activity log category](/articles/azure-monitor/essentials/activity-log-schema#categories).
+ * **Azure Activity Log Entries** shows the count of Activity log records in each [activity log category](/azure/azure-monitor/essentials/activity-log#categories).
:::image type="content" source="media/activity-log/activity-logs-insights-category-value.png" lightbox= "media/activity-log/activity-logs-insights-category-value.png" alt-text="Screenshot of Azure Activity Logs by Category Value":::
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|||||||| |AddRegion|Yes|Region Added|Count|Count|Region Added|Region| |AutoscaleMaxThroughput|No|Autoscale Max Throughput|Count|Maximum|Autoscale Max Throughput|DatabaseName, CollectionName|
-|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage"will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, please check this doc https://docs.microsoft.com/azure/cosmos-db/concepts-limits. After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
+|AvailableStorage|No|(deprecated) Available Storage|Bytes|Total|"Available Storage" will be removed from Azure Monitor at the end of September 2023. Cosmos DB collection storage size is now unlimited. The only restriction is that the storage size for each logical partition key is 20 GB. You can enable PartitionKeyStatistics in Diagnostic Log to know the storage consumption for top partition keys. For more info about Cosmos DB storage quota, see [Azure Cosmos DB service quotas](../../cosmos-db/concepts-limits.md). After deprecation, the remaining alert rules still defined on the deprecated metric will be automatically disabled post the deprecation date.|CollectionName, DatabaseName, Region|
|CassandraConnectionClosures|No|Cassandra Connection Closures|Count|Total|Number of Cassandra connections that were closed, reported at a 1 minute granularity|APIType, Region, ClosureReason| |CassandraConnectorAvgReplicationLatency|No|Cassandra Connector Average ReplicationLatency|MilliSeconds|Average|Cassandra Connector Average ReplicationLatency|No Dimensions| |CassandraConnectorReplicationHealthStatus|No|Cassandra Connector Replication Health Status|Count|Count|Cassandra Connector Replication Health Status|NotStarted, ReplicationInProgress, Error|
azure-monitor Sql Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/sql-insights-overview.md
Last updated 04/14/2022
SQL Insights (preview) is a comprehensive solution for monitoring any product in the [Azure SQL family](/azure/azure-sql/index). SQL Insights uses [dynamic management views](/azure/azure-sql/database/monitoring-with-dmvs) to expose the data that you need to monitor health, diagnose problems, and tune performance. SQL Insights performs all monitoring remotely. Monitoring agents on dedicated virtual machines connect to your SQL resources and remotely gather data. The gathered data is stored in [Azure Monitor Logs](../logs/data-platform-logs.md) to enable easy aggregation, filtering, and trend analysis. You can view the collected data from the SQL Insights [workbook template](../visualize/workbooks-overview.md), or you can delve directly into the data by using [log queries](../logs/get-started-queries.md).
-The following diagram details the steps taken by information from the database engine and Azure resource logs, and how they can be surfaced. For a more detailed diagram of Azure SQL logging, see [Monitoring and diagnostic telemetry](/azure/azure-sql/database/monitor-tune-overview.md#monitoring-and-diagnostic-telemetry).
+The following diagram details the steps taken by information from the database engine and Azure resource logs, and how they can be surfaced. For a more detailed diagram of Azure SQL logging, see [Monitoring and diagnostic telemetry](/azure/azure-sql/database/monitor-tune-overview#monitoring-and-diagnostic-telemetry).
:::image type="content" source="media/sql-insights/azure-sql-insights-horizontal-analytics.svg" alt-text="Diagram showing how database engine information and resource logs are surfaced through AzureDiagnostics and Log Analytics.":::
The tables have the following columns:
## Next steps - For frequently asked questions about SQL Insights (preview), see [Frequently asked questions](../faq.yml).-- [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](/azure/azure-sql/database/monitor-tune-overview)
+- [Monitoring and performance tuning in Azure SQL Database and Azure SQL Managed Instance](/azure/azure-sql/database/monitor-tune-overview)
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 03/23/2022 Last updated : 06/01/2022
To create a new filter, select **Create a filter**. You can create up to ten fil
Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens. After you've named your filter, enter at least one condition. In the **Filter type** field, select either **Subscription name**, **Subscription ID**, or **Subscription state**. Then select an operator and enter a value to filter on.
The theme that you choose affects the background and font colors that appear in
Alternatively, you can choose a theme from the **High contrast theme** section. These themes can make the Azure portal easier to read, especially if you have a visual impairment. Selecting either the white or black high-contrast theme will override any other theme selections.
+### Focus navigation
+
+Choose whether or not to enable focus navigation.
+
+If enabled, only one screen at a time will be visible as you step through a process in the portal. If disabled, as you move through the steps of a process, you'll be able to move between them through a horizontal scroll bar.
+ ### Startup page Choose one of the following options for the page you'll see when you first sign in to the Azure portal.
Choose one of the following options for the page you'll see when you first sign
Choose one of the following options for the directory to work in when you first sign in to the Azure portal. - **Sign in to your last visited directory**: When you sign in to the Azure portal, you'll start in whichever directory you'd been working in last time.-- **Select a directory**: Choose this option to select one of your directory. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.
+- **Select a directory**: Choose this option to select one of your directories. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time.
:::image type="content" source="media/set-preferences/azure-portal-settings-startup-views.png" alt-text="Screenshot showing the Startup section of Appearance + startup views.":::
To confirm that the inactivity timeout policy is set correctly, select **Notific
### Enable or disable pop-up notifications
-Notifications are system messages related to your current session. They provide information such as showing your current credit balance, confirming your last action, or letting you know when resources you created become . When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
+Notifications are system messages related to your current session. They provide information such as showing your current credit balance, confirming your last action, or letting you know when resources you created become available. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Title: How to create an Azure support request
description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 02/01/2022 Last updated : 06/02/2022 # Create an Azure support request
You can get to **Help + support** in the Azure portal. It's available from the A
### Azure role-based access control
-To create a support request, you must have the [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role, or a custom role with [Microsoft.Support/*](../../role-based-access-control/resource-provider-operations.md#microsoftsupport), at the subscription level.
+You must have the appropriate access to a subscription before you can create a support request for it. This means you must have the [Owner](../../role-based-access-control/built-in-roles.md#owner), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Support Request Contributor](../../role-based-access-control/built-in-roles.md#support-request-contributor) role, or a custom role with [Microsoft.Support/*](../../role-based-access-control/resource-provider-operations.md#microsoftsupport), at the subscription level.
To create a support request without a subscription, for example an Azure Active Directory scenario, you must be an [Admin](../../active-directory/roles/permissions-reference.md).
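If a user needs to open support requests but lacks access, a subscription Owner can grant the built-in Support Request Contributor role. A minimal sketch with the Azure CLI is shown below; the assignee and subscription ID are placeholders.

```azurecli
# Sketch: grant the Support Request Contributor role at subscription scope (assignee and subscription ID are placeholders)
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Support Request Contributor" \
    --scope "/subscriptions/<subscription-id>"
```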
We'll walk you through some steps to gather information about your problem and h
The first step of the support request process is to select an issue type. You'll then be prompted for more information, which can vary depending on what type of issue you selected. If you select **Technical**, you'll need to specify the service that your issue relates to. Depending on the service, you'll see additional options for **Problem type** and **Problem subtype**. > [!IMPORTANT]
-> In most cases, you'll need to specify a subscription. Choose the subscription where you are experiencing the problem. The support engineer assigned to your case will be able to access the subscription you specify here. You can tell them about additional subscriptions in your description (or by [sending a message](how-to-manage-azure-support-request.md#send-a-message) later), but the support engineer will only be able to work on [subscriptions to which you have access](#azure-role-based-access-control).
+> In most cases, you'll need to specify a subscription. Be sure to choose the subscription where you are experiencing the problem. The support engineer assigned to your case will only be able to access resources in the subscription you specify. The access requirement serves as a point of confirmation that the support engineer is sharing information to the right audience, which is a key factor for ensuring the security and privacy of customer data. For details on how Azure treats customer data, see [Data Privacy in the Trusted Cloud](https://azure.microsoft.com/overview/trusted-cloud/privacy/).
+>
+> If the issue applies to multiple subscriptions, you can mention additional subscriptions in your description, or by [sending a message](how-to-manage-azure-support-request.md#send-a-message) later. However, the support engineer will only be able to work on [subscriptions to which you have access](#azure-role-based-access-control). If you don't have the required access for a subscription, we won't be able to work on it as part of your request.
:::image type="content" source="media/how-to-create-azure-support-request/basics2lower.png" alt-text="Screenshot of the Problem description step of the support request process.":::
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
When creating an Azure Video Indexer account, you can choose a free trial accoun
To read more on how to create a **new ARM-Based** Azure Video Indexer account, read this [article](create-video-analyzer-for-media-account.md)
+For more details, see [pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+ ## How to create classic accounts This article shows how to create an Azure Video Indexer classic account. The topic provides steps for connecting to Azure using the automatic (default) flow. It also shows how to connect to Azure manually (advanced).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer API description: This article describes how to get started with Azure Video Indexer API. Previously updated : 01/07/2021 Last updated : 06/01/2022
This section lists some recommendations when using Azure Video Indexer API.
The following C# code snippet demonstrates the usage of all the Azure Video Indexer APIs together.
+> [!NOTE]
+> The following sample is intended for Classic accounts only and isn't compatible with ARM accounts. For an updated sample for ARM, see [this ARM sample repo](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ApiUsage/ArmBased/Program.cs).
+ ```csharp var apiUrl = "https://api.videoindexer.ai"; var accountId = "...";
azure-web-pubsub Tutorial Serverless Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-static-web-app.md
+
+ Title: Tutorial - Create a serverless chat app using Azure Web PubSub service and Azure Static Web Apps
+description: A tutorial for how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless chat application.
++++ Last updated : 06/01/2022++
+# Tutorial: Create a serverless chat app using Azure Web PubSub service and Azure Static Web Apps
+
+The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets. With Azure Static Web Apps, you can automatically build and deploy full stack web apps to Azure from a code repository. In this tutorial, you learn how to use Azure Web PubSub service and Azure Static Web Apps to build a serverless real-time messaging application for a chat room scenario.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Build a serverless chat app
+> * Work with Web PubSub function input and output bindings
+> * Work with Static Web Apps
+
+## Overview
++
+* GitHub along with DevOps provides source control and continuous delivery. Whenever there's a code change to the source repo, the Azure DevOps pipeline applies it to the Azure Static Web App and presents it to end users.
+* When a new user signs in, the Functions `login` API is triggered and generates an Azure Web PubSub service client connection URL.
+* When the client initiates the connection request to the Azure Web PubSub service, the service sends a system `connect` event, and the Functions `connect` API is triggered to authenticate the user.
+* When a client sends a message to the Azure Web PubSub service, the service sends a user `message` event, and the Functions `message` API is triggered to broadcast the message to all connected clients.
+* The Functions `validate` API is triggered periodically for [CloudEvents Abuse Protection](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection) when the events in Azure Web PubSub are configured with the predefined parameter `{event}`, that is, `https://$STATIC_WEB_APP/api/{event}`.
+
+> [!NOTE]
+> The Functions APIs `connect` and `message` are triggered when the Azure Web PubSub service is configured with these two events.
+
+## Prerequisites
+
+* [GitHub](https://github.com/) account
+* [Azure](https://portal.azure.com/) account
+* [Azure CLI](/cli/azure) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources
+
+## Create a Web PubSub resource
+
+1. Sign in to the Azure CLI by using the following command.
+
+ ```azurecli-interactive
+ az login
+ ```
+
+1. Create a resource group.
+
+ ```azurecli-interactive
+ az group create \
+ --name my-awps-swa-group \
+ --location "eastus2"
+ ```
+
+1. Create a Web PubSub resource.
+
+ ```azurecli-interactive
+ az webpubsub create \
+ --name my-awps-swa \
+ --resource-group my-awps-swa-group \
+ --location "eastus2" \
+ --sku Free_F1
+ ```
+
+1. Get and hold the access key for later use.
+
+ ```azurecli-interactive
+ az webpubsub key show \
+ --name my-awps-swa \
+ --resource-group my-awps-swa-group
+ ```
+
+ ```azurecli-interactive
+ AWPS_ACCESS_KEY=<YOUR_AWPS_ACCESS_KEY>
+ ```
+ Replace the placeholder `<YOUR_AWPS_ACCESS_KEY>` with the `primaryConnectionString` value from the previous result.
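+
+ As an alternative to copying the value by hand, a sketch like the following extracts `primaryConnectionString` directly into the variable; the `--query` projection is an assumption.
+
+ ```azurecli-interactive
+ # Sketch: capture the primary connection string in one step (query projection is an assumption)
+ AWPS_ACCESS_KEY=$(az webpubsub key show \
+     --name my-awps-swa \
+     --resource-group my-awps-swa-group \
+     --query primaryConnectionString \
+     --output tsv)
+ ```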
+
+## Create a repository
+
+This article uses a GitHub template repository to make it easy for you to get started. The template features a starter app that you deploy using Azure Static Web Apps.
+
+1. Navigate to the following template to create a new repository under your account:
+ 1. [https://github.com/Azure/awps-swa-sample/generate](https://github.com/login?return_to=/Azure/awps-swa-sample/generate)
+1. Name your repository **my-awps-swa-app**
+
+Select **`Create repository from template`**.
+
+## Create a static web app
+
+Now that the repository is created, you can create a static web app from the Azure CLI.
+
+1. Create a variable to hold your GitHub user name.
+
+ ```azurecli-interactive
+ GITHUB_USER_NAME=<YOUR_GITHUB_USER_NAME>
+ ```
+
+ Replace the placeholder `<YOUR_GITHUB_USER_NAME>` with your GitHub user name.
+
+1. Create a new static web app from your repository. As you execute this command, the CLI starts a GitHub interactive login experience. Follow the message to complete authorization.
+
+ ```azurecli-interactive
+ az staticwebapp create \
+ --name my-awps-swa-app \
+ --resource-group my-awps-swa-group \
+ --source https://github.com/$GITHUB_USER_NAME/my-awps-swa-app \
+ --location "eastus2" \
+ --branch main \
+ --app-location "src" \
+ --api-location "api" \
+ --login-with-github
+ ```
+ > [!IMPORTANT]
+ > The URL passed to the `--source` parameter must not include the `.git` suffix.
+
+1. Navigate to **https://github.com/login/device**.
+
+1. Enter the user code as displayed in your console's message.
+
+1. Select the **Continue** button.
+
+1. Select the **Authorize AzureAppServiceCLI** button.
+
+1. Configure the static web app settings.
+
+ ```azurecli-interactive
+ az staticwebapp appsettings set \
+ -n my-awps-swa-app \
+ --setting-names WebPubSubConnectionString=$AWPS_ACCESS_KEY WebPubSubHub=sample_swa
+ ```
+
+## View the website
+
+There are two aspects to deploying a static app. The first operation creates the underlying Azure resources that make up your app. The second is a GitHub Actions workflow that builds and publishes your application.
+
+Before you can navigate to your new static site, the deployment build must first finish running.
+
+1. Return to your console window and run the following command to list the URLs associated with your app.
+
+ ```azurecli-interactive
+ az staticwebapp show \
+ --name my-awps-swa-app \
+ --query "repositoryUrl"
+ ```
+
+ The output of this command returns the URL to your GitHub repository.
+
+1. Copy the **repository URL** and paste it into the browser.
+
+1. Select the **Actions** tab.
+
+ At this point, Azure is creating the resources to support your static web app. Wait until the icon next to the running workflow turns into a check mark with green background ✅. This operation may take a few minutes to complete.
+
+ Once the success icon appears, the workflow is complete and you can return back to your console window.
+
+2. Run the following command to query for your website's URL.
+
+ ```azurecli-interactive
+ az staticwebapp show \
+ --name my-awps-swa-app \
+ --query "defaultHostname"
+ ```
+
+ Keep the URL; you'll set it in the Web PubSub event handler.
+
+ ```azurecli-interactive
+ STATIC_WEB_APP=<YOUR_STATIC_WEB_APP>
+ ```
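+
+ Alternatively, a sketch like the following captures the default hostname directly into the variable; the `--output tsv` projection is an assumption.
+
+ ```azurecli-interactive
+ # Sketch: capture the default hostname in one step
+ STATIC_WEB_APP=$(az staticwebapp show \
+     --name my-awps-swa-app \
+     --query "defaultHostname" \
+     --output tsv)
+ ```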
+
+## Configure the Web PubSub event handler
+
+You're almost done. The last step is to configure Web PubSub to transfer client requests to your function APIs.
+
+1. Run the following command to configure the Web PubSub service events. The events map to functions under the `api` folder in your repo.
+
+ ```azurecli-interactive
+ az webpubsub hub create \
+ -n "my-awps-swa" \
+ -g "my-awps-swa-group" \
+ --hub-name "sample_swa" \
+ --event-handler url-template=https://$STATIC_WEB_APP/api/{event} user-event-pattern="*" \
+ --event-handler url-template=https://$STATIC_WEB_APP/api/{event} system-event="connect"
+ ```
+
+Now you're ready to try your website at **<YOUR_STATIC_WEB_APP>**. Copy the URL into a browser and select Continue to start chatting with your friends.
+
+## Clean up resources
+
+If you're not going to continue to use this application, you can delete the resource group and the static web app by running the following command.
+
+```azurecli-interactive
+az group delete --name my-awps-swa-group
+```
+
+## Next steps
+
+In this tutorial, you learned how to run a serverless chat application. Now you can start building your own application.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Client streaming using subprotocol](tutorial-subprotocol.md)
+
+> [!div class="nextstepaction"]
+> [Azure Web PubSub bindings for Azure Functions](reference-functions-bindings.md)
+
+> [!div class="nextstepaction"]
+> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
Use the following table to determine supported styles and roles for each neural
Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
-Select the right locale that matches the training data you have to train a custom neural voice model. For example, if the recording data you have is spoken in English with a British accent, select `en-GB`.
-
-With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages marked "Yes" in the Cross-lingual column in the following table.
-
-| Language | Locale | Cross-lingual (preview) |
-|--|--|--|
-| Arabic (Egypt) | `ar-EG` | No |
-| Arabic (Saudi Arabia) | `ar-SA` | No |
-| Bulgarian (Bulgaria) | `bg-BG` | No |
-| Catalan (Spain) | `ca-ES` | No |
-| Chinese (Cantonese, Traditional) | `zh-HK` | No |
-| Chinese (Mandarin, Simplified) | `zh-CN` | Yes |
-| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes |
-| Chinese (Taiwanese Mandarin) | `zh-TW` | No |
-| Croatian (Croatia) | `hr-HR` | No |
-| Czech (Czech) | `cs-CZ` | No |
-| Danish (Denmark) | `da-DK` | No |
-| Dutch (Netherlands) | `nl-NL` | No |
-| English (Australia) | `en-AU` | Yes |
-| English (Canada) | `en-CA` | No |
-| English (India) | `en-IN` | No |
-| English (Ireland) | `en-IE` | No |
-| English (United Kingdom) | `en-GB` | Yes |
-| English (United States) | `en-US` | Yes |
-| Finnish (Finland) | `fi-FI` | No |
-| French (Canada) | `fr-CA` | Yes |
-| French (France) | `fr-FR` | Yes |
-| French (Switzerland) | `fr-CH` | No |
-| German (Austria) | `de-AT` | No |
-| German (Germany) | `de-DE` | Yes |
-| German (Switzerland) | `de-CH` | No |
-| Greek (Greece) | `el-GR` | No |
-| Hebrew (Israel) | `he-IL` | No |
-| Hindi (India) | `hi-IN` | No |
-| Hungarian (Hungary) | `hu-HU` | No |
-| Indonesian (Indonesia) | `id-ID` | No |
-| Italian (Italy) | `it-IT` | Yes |
-| Japanese (Japan) | `ja-JP` | Yes |
-| Korean (Korea) | `ko-KR` | Yes |
-| Malay (Malaysia) | `ms-MY` | No |
-| Norwegian (Bokmål, Norway) | `nb-NO` | No |
-| Polish (Poland) | `pl-PL` | No |
-| Portuguese (Brazil) | `pt-BR` | Yes |
-| Portuguese (Portugal) | `pt-PT` | No |
-| Romanian (Romania) | `ro-RO` | No |
-| Russian (Russia) | `ru-RU` | Yes |
-| Slovak (Slovakia) | `sk-SK` | No |
-| Slovenian (Slovenia) | `sl-SI` | No |
-| Spanish (Mexico) | `es-MX` | Yes |
-| Spanish (Spain) | `es-ES` | Yes |
-| Swedish (Sweden) | `sv-SE` | No |
-| Tamil (India) | `ta-IN` | No |
-| Telugu (India) | `te-IN` | No |
-| Thai (Thailand) | `th-TH` | No |
-| Turkish (Turkey) | `tr-TR` | No |
-| Vietnamese (Vietnam) | `vi-VN` | No |
+Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
+
+With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages marked with "Yes" in the Cross-lingual column in the following table.
+
+There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview). In the following table, all the languages are supported by CNV Pro, and the languages marked with "Yes" in the Custom Neural Voice Lite column are supported by CNV Lite.
+
+| Language | Locale | Cross-lingual (preview) |Custom Neural Voice Lite (preview)|
+|--|--|--|--|
+| Arabic (Egypt) | `ar-EG` | No |No|
+| Arabic (Saudi Arabia) | `ar-SA` | No |No|
+| Bulgarian (Bulgaria) | `bg-BG` | No |No|
+| Catalan (Spain) | `ca-ES` | No |No|
+| Chinese (Cantonese, Traditional) | `zh-HK` | No |No|
+| Chinese (Mandarin, Simplified) | `zh-CN` | Yes |Yes|
+| Chinese (Mandarin, Simplified), English bilingual | `zh-CN` bilingual | Yes |No|
+| Chinese (Taiwanese Mandarin) | `zh-TW` | No |No|
+| Croatian (Croatia) | `hr-HR` | No |No|
+| Czech (Czech) | `cs-CZ` | No |No|
+| Danish (Denmark) | `da-DK` | No |No|
+| Dutch (Netherlands) | `nl-NL` | No |No|
+| English (Australia) | `en-AU` | Yes |No|
+| English (Canada) | `en-CA` | No |Yes|
+| English (India) | `en-IN` | No |No|
+| English (Ireland) | `en-IE` | No |No|
+| English (United Kingdom) | `en-GB` | Yes |Yes|
+| English (United States) | `en-US` | Yes |Yes|
+| Finnish (Finland) | `fi-FI` | No |No|
+| French (Canada) | `fr-CA` | Yes |No|
+| French (France) | `fr-FR` | Yes |Yes|
+| French (Switzerland) | `fr-CH` | No |No|
+| German (Austria) | `de-AT` | No |No|
+| German (Germany) | `de-DE` | Yes |Yes|
+| German (Switzerland) | `de-CH` | No |No|
+| Greek (Greece) | `el-GR` | No |No|
+| Hebrew (Israel) | `he-IL` | No |No|
+| Hindi (India) | `hi-IN` | No |No|
+| Hungarian (Hungary) | `hu-HU` | No |No|
+| Indonesian (Indonesia) | `id-ID` | No |No|
+| Italian (Italy) | `it-IT` | Yes |Yes|
+| Japanese (Japan) | `ja-JP` | Yes |No|
+| Korean (Korea) | `ko-KR` | Yes |Yes|
+| Malay (Malaysia) | `ms-MY` | No |No|
+| Norwegian (Bokmål, Norway) | `nb-NO` | No |No|
+| Polish (Poland) | `pl-PL` | No |No|
+| Portuguese (Brazil) | `pt-BR` | Yes |Yes|
+| Portuguese (Portugal) | `pt-PT` | No |No|
+| Romanian (Romania) | `ro-RO` | No |No|
+| Russian (Russia) | `ru-RU` | Yes |No|
+| Slovak (Slovakia) | `sk-SK` | No |No|
+| Slovenian (Slovenia) | `sl-SI` | No |No|
+| Spanish (Mexico) | `es-MX` | Yes |Yes|
+| Spanish (Spain) | `es-ES` | Yes |No|
+| Swedish (Sweden) | `sv-SE` | No |No|
+| Tamil (India) | `ta-IN` | No |No |
+| Telugu (India) | `te-IN` | No |No |
+| Thai (Thailand) | `th-TH` | No |No |
+| Turkish (Turkey) | `tr-TR` | No |No|
+| Vietnamese (Vietnam) | `vi-VN` | No |No|
## Language identification
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
The following document file types are supported by Document Translation:
### Legacy file types
-Source file types will be preserved during the document translation with the following exceptions:
+Source file types will be preserved during the document translation with the following **exceptions**:
| Source file extension | Translated file extension| | | |
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
Previously updated : 02/25/2022 Last updated : 06/02/2022
Exceeding the following document limits will generate an HTTP 400 error code.
| Key Phrase Extraction | 10 | | Named Entity Recognition (NER) | 5 | | Personally Identifying Information (PII) detection | 5 |
-| Text summarization | 25 |
+| Document summarization | 25 |
| Entity Linking | 5 |
| Text Analytics for health | 10 for the web-based API, 1000 for the container. |
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/concepts/data-formats.md
Your Labels file should be in the `json` format below to be used in [importing](
"language": "en-us" }, "assets": {
- "projectKind": "CustomEntityRecognition",
"entities": [ { "category": "Entity1"
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Before you start using custom NER, you will need:
Before you start using custom NER, you will need an Azure Language resource. It is recommended to create your Language resource and connect a storage account to it in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom named entity recognition.
-You also will need an Azure storage account where you will upload your `.txt` files that will be used to train a model to extract entities.
+You also will need an Azure storage account where you will upload your `.txt` documents that will be used to train a model to extract entities.
> [!NOTE] > * You need to have an **owner** role assigned on the resource group to create a Language resource.
You can create a resource in the following ways:
* Language Studio * PowerShell
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+ [!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)] [!INCLUDE [create a new resource from the Language Studio](../includes/language-studio/resource-creation-language-studio.md)]
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/how-to/create-project.md
You also will need an Azure storage account where you will upload your `.txt` do
## Create Language resource and connect storage account +
+> [!Note]
+> You shouldn't move the storage account to a different resource group or subscription once it's linked with the Language resource.
+ ### [Using the Azure portal](#tab/azure-portal) [!INCLUDE [create a new resource from the Azure portal](../includes/resource-creation-azure-portal.md)]
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
# How to use conversation summarization (preview) > [!IMPORTANT]
-> conversation summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
+> The conversation summarization feature is a preview capability provided "AS IS" and "WITH ALL FAULTS." As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
Conversation summarization is designed to summarize text chat logs between customers and customer-service agents. This feature is capable of providing both issues and resolutions present in these logs.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Title: Document summarization language support
+ Title: Summarization language support
description: Learn about which languages are supported by document summarization.
Previously updated : 05/26/2022 Last updated : 06/02/2022
Document summarization supports the following languages:
| Portuguese (Brazil) | `pt-BR` | 2021-08-01 | |
| Portuguese (Portugal) | `pt-PT` | 2021-08-01 | `pt` also accepted |
-# [Conversation summarization](#tab/conversation-summarization)
+# [Conversation summarization (preview)](#tab/conversation-summarization)
-## Languages supported by conversation summarization
+## Languages supported by conversation summarization (preview)
Conversation summarization supports the following languages:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Previously updated : 05/26/2022 Last updated : 06/02/2022 # What is document and conversation summarization (preview)?
-Document summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
+Summarization is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
# [Document summarization](#tab/document-summarization)
To use this feature, you submit raw text for analysis and handle the API output
# [Document summarization](#tab/document-summarization) * Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Summarization works with a variety of written languages. See [language support](language-support.md) for more information.
+* Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information.
# [Conversation summarization](#tab/conversation-summarization) * Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
-* Conversation summarization accepts text in English. See [language support](language-support.md) for more information.
+* Conversation summarization accepts text in English. See [language support](language-support.md?tabs=conversation-summarization) for more information.
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Communication Services connections require internet connectivity to specific por
| Media traffic | Range of Azure public cloud IP addresses 20.202.0.0/16. The range provided above is the range of IP addresses on either Media processor or ACS TURN service. | UDP 3478 through 3481, TCP ports 443 |
| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.azureedge.net, *.office.com, *.trouter.io | TCP 443, 80 |
+
+The following endpoints should be reachable for U.S. Government GCC High customers only:
+
+| Category | IP ranges or FQDN | Ports |
+| :-- | :-- | :-- |
+| Media traffic | 52.127.88.0/21, 52.238.114.160/32, 52.238.115.146/32, 52.238.117.171/32, 52.238.118.132/32, 52.247.167.192/32, 52.247.169.1/32, 52.247.172.50/32, 52.247.172.103/32, 104.212.44.0/22, 195.134.228.0/22 | UDP 3478 through 3481, TCP ports 443 |
+| Signaling, telemetry, registration| *.gov.teams.microsoft.us, *.infra.gov.skypeforbusiness.us, *.online.gov.skypeforbusiness.us, gov.teams.microsoft.us | TCP 443, 80 |
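As a quick sanity check (not part of the original guidance), you can verify that the TCP signaling endpoints in the preceding table are reachable from your network. The following is only a sketch using curl; it can't validate the UDP media ports.

```bash
# Verify TCP 443 reachability for a GCC High signaling endpoint (signaling only;
# this doesn't validate the UDP 3478-3481 media ports listed above).
curl --silent --show-error --output /dev/null --max-time 10 https://gov.teams.microsoft.us \
  && echo "gov.teams.microsoft.us reachable on TCP 443" \
  || echo "gov.teams.microsoft.us NOT reachable on TCP 443"
```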
++ ## Network optimization The following tasks are optional and aren't required for rolling out Communication Services. Use this guidance to optimize your network and Communication Services performance or if you know you have some network limitations.
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
The SQL Server connector has different versions, based on [logic app type and ho
| Logic app | Environment | Connector version |
|--|-|-|
| **Consumption** | Multi-tenant Azure Logic Apps | [Managed connector - Standard class](managed.md). For more information, review the [SQL Server managed connector reference](/connectors/sql). |
-| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For more information, review the [SQL Server managed connector reference](/connectors/sql). |
+| **Consumption** | Integration service environment (ISE) | [Managed connector - Standard class](managed.md) and ISE version. For more information, review the [SQL Server managed connector reference](/connectors/sql). <br><br>**Note**: The ISE version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits), not the managed version's message limits. |
| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | [Managed connector - Standard class](managed.md) and [built-in connector](built-in.md), which is [service provider based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). <br><br>The built-in version differs in the following ways: <br><br>- The built-in version has no triggers. <br><br>- The built-in version has a single **Execute Query** action. The action can directly connect to Azure virtual networks without the on-premises data gateway. <br><br>For the managed version, review the [SQL Server managed connector reference](/connectors/sql/). | ||||
The SQL Server connector has different versions, based on [logic app type and ho
* In multi-tenant Azure Logic Apps, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) installed on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md).
- * In an ISE, when you have non-Windows or SQL Server Authentication connections, you don't need the on-premises data gateway and can use the ISE-versioned SQL Server connector. For Windows Authentication and SQL Server Authentication, you still have to use the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). Also, the ISE-versioned SQL Server connector doesn't support Windows authentication, so you have to use the non-ISE SQL Server connector.
+ * In an ISE, you don't need the on-premises data gateway for SQL Server Authentication and non-Windows Authentication connections, and you can use the ISE-versioned SQL Server connector. For Windows Authentication, you need the [on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a local computer and a [data gateway resource that's already created in Azure](../logic-apps/logic-apps-gateway-connection.md). The ISE-versioned connector doesn't support Windows Authentication, so you have to use the regular SQL Server managed connector.
* Standard logic app workflow
The following steps use the Azure portal, but with the appropriate Azure Logic A
### [Consumption](#tab/consumption)
-1. In the Azure portal, open your blank logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
The following steps use the Azure portal, but with the appropriate Azure Logic A
In Standard logic app workflows, only the SQL Server managed connector has triggers. The SQL Server built-in connector doesn't have any triggers.
-1. In the Azure portal, open your blank logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
1. Find and select the [SQL Server managed connector trigger](/connectors/sql) that you want to use.
To make sure that the recurrence time doesn't shift when DST takes effect, manua
The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to edit logic app workflows:
-* Consumption logic app workflow: Visual Studio or Visual Studio Code
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
-* Standard logic app workflows: Visual Studio Code
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
In this example, the logic app workflow starts with the [Recurrence trigger](../connectors/connectors-native-recurrence.md), and calls an action that gets a row from a SQL database. ### [Consumption](#tab/consumption)
-1. In the Azure portal, open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. Find and select the [SQL Server managed connector action](/connectors/sql) that you want to use. This example continues with the action named **Get row**.
In this example, the logic app workflow starts with the [Recurrence trigger](../
### [Standard](#tab/standard)
-1. In the Azure portal, open your logic app workflow in the designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
1. Find and select the SQL Server connector action that you want to use.
In this example, the logic app workflow starts with the [Recurrence trigger](../
[!INCLUDE [Create connection general intro](../../includes/connectors-create-connection-general-intro.md)]
-After you provide this information, continue with these steps:
+After you provide this information, continue with the following steps based on your target database:
* [Connect to cloud-based Azure SQL Database or SQL Managed Instance](#connect-azure-sql-db) * [Connect to on-premises SQL Server](#connect-sql-server)
After you provide this information, continue with these steps:
To access a SQL Managed Instance without using the on-premises data gateway or integration service environment, you have to [set up the public endpoint on the SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure). The public endpoint uses port 3342, so make sure that you specify this port number when you create the connection from your logic app.
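For reference, enabling the public endpoint can also be scripted. The following is only a sketch under assumed resource names (`my-sql-mi`, `my-rg`, `my-mi-nsg`); pick an NSG rule priority and source range that match your own security requirements.

```bash
# Enable the public data endpoint on the SQL Managed Instance (assumed names).
az sql mi update --name my-sql-mi --resource-group my-rg --public-data-endpoint-enabled true

# Allow inbound traffic to port 3342 on the managed instance subnet's NSG.
az network nsg rule create --resource-group my-rg --nsg-name my-mi-nsg \
  --name allow_public_endpoint_3342 --priority 1300 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 3342
```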
-When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action) without a previously created and active database connection, complete the following steps:
+In the connection information box, complete the following steps:
1. For **Connection name**, provide a name to use for your connection.
When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#ad
| Authentication | Description | |-|-|
- | **Service principal (Azure AD application)** | - Available only for the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
- | **Logic Apps Managed Identity** | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
- | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
+ | **Service principal (Azure AD application)** | - Supported with the SQL Server managed connector. <br><br>- Requires an Azure AD application and service principal. For more information, see [Create an Azure AD application and service principal that can access resources using the Azure portal](../active-directory/develop/howto-create-service-principal-portal.md). |
+ | **Logic Apps Managed Identity** | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A valid managed identity that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. <br><br> **SQL DB Contributor** role access to the SQL Server resource <br><br> **Contributor** access to the resource group that includes the SQL Server resource. <br><br>For more information, see [SQL - Server-Level Roles](/sql/relational-databases/security/authentication-access/server-level-roles). |
+ | [**Azure AD Integrated**](/azure/azure-sql/database/authentication-aad-overview) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires a valid managed identity in Azure Active Directory (Azure AD) that's [enabled on your logic app resource](../logic-apps/create-managed-service-identity.md) and has access to your database. For more information, see these topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) <br>- [Azure SQL - Azure AD Integrated authentication](/azure/azure-sql/database/authentication-aad-overview) |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server database. For more information, see the following topics: <br><br>- [Azure SQL Security Overview - Authentication](/azure/azure-sql/database/security-overview#authentication) <br>- [Authorize database access to Azure SQL - Authentication and authorization](/azure/azure-sql/database/logins-create-manage#authentication-and-authorization) |
- This connection and authentication information box looks similar to the following example, which selects **Azure AD Integrated**:
+ The following examples show how the connection information box might appear if you select **Azure AD Integrated** authentication.
* Consumption logic app workflows
When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#ad
### Connect to on-premises SQL Server
-When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#add-sql-action) without a previously created and active database connection, complete the following steps:
+In the connection information box, complete the following steps:
1. For connections to your on-premises SQL server that require the on-premises data gateway, make sure that you've [completed these prerequisites](#multi-tenant-or-ise).
When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#ad
| Authentication | Description | |-|-|
- | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Available only for the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
- | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Available only for the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
+ | [**SQL Server Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication) | - Supported with the SQL Server managed connector and ISE-versioned connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid user name and strong password that are created and stored in your SQL Server. <br><br>For more information, see [SQL Server Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-sql-server-authentication). |
+ | [**Windows Authentication**](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication) | - Supported with the SQL Server managed connector. <br><br>- Requires the following items: <br><br> A data gateway resource that's previously created in Azure for your connection, regardless whether your logic app is in multi-tenant Azure Logic Apps or an ISE. <br><br> A valid Windows user name and password to confirm your identity through your Windows account. <br><br>For more information, see [Windows Authentication](/sql/relational-databases/security/choose-an-authentication-mode#connecting-through-windows-authentication). |
||| 1. Select or provide the following values for your SQL database:
When you add a [SQL Server trigger](#add-sql-trigger) or [SQL Server action](#ad
> * `User ID={your-user-name}` > * `Password={your-password}`
- This connection and authentication information box looks similar to the following example, which selects **Windows Authentication**:
+ The following examples show how the connection information box might appear if you select **Windows** authentication.
* Consumption logic app workflows
Connection problems can commonly happen, so to troubleshoot and resolve these ki
## Next steps
-* Learn about other [connectors for Azure Logic Apps](../connectors/apis-list.md)
+* Learn about other [managed connectors for Azure Logic Apps](../connectors/apis-list.md)
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
Previously updated : 05/12/2022 Last updated : 06/02/2022
Features include:
## Configuration
-Below is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
+
+The following is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container.
```json "containers": [
Below is an example of the `containers` array in the [`properties.template`](azu
| `volumeMounts` | An array of volume mount definitions. | You can define a temporary volume or multiple permanent storage volumes for your container. For more information about storage volumes, see [Use storage mounts in Azure Container Apps](storage-mounts.md).| | `probes`| An array of health probes enabled in the container. | This feature is based on Kubernetes health probes. For more information about probes settings, see [Health probes in Azure Container Apps](health-probes.md).| - When allocating resources, the total amount of CPUs and memory requested for all the containers in a container app must add up to one of the following combinations. | vCPUs (cores) | Memory |
To use a container registry, you define the required fields in `registries` arra
} ```
-With the registry information setup, the saved credentials can be used to pull a container image from the private registry when your app is deployed.
+With the registry information set up, the saved credentials can be used to pull a container image from the private registry when your app is deployed.
The following example shows how to configure Azure Container Registry credentials in a container app.
The following example shows how to configure Azure Container Registry credential
} ```
+### Managed identity with Azure Container Registry
+
+You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. To use a managed identity:
+
+- Assign a system-assigned or user-assigned managed identity to your container app.
+- Specify the managed identity you want to use for each registry.
+
+When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or "system" for the system-assigned identity. For more information about using managed identities, see [Managed identities in Azure Container Apps Preview](managed-identity.md).
+
+```json
+{
+ "identity": {
+ "type": "SystemAssigned,UserAssigned",
+ "userAssignedIdentities": {
+ "<IDENTITY1_RESOURCE_ID>": {}
+ }
+ },
+ "properties": {
+ "configuration": {
+ "registries": [
+ {
+ "server": "myacr1.azurecr.io",
+ "identity": "<IDENTITY1_RESOURCE_ID>"
+ },
+ {
+ "server": "myacr2.azurecr.io",
+ "identity": "system"
+ }]
+ }
+ ...
+ }
+}
+```
+
+The managed identity must have `AcrPull` access to the Azure Container Registry. For more information about assigning Azure Container Registry permissions to managed identities, see [Authenticate with managed identity](../container-registry/container-registry-authentication-managed-identity.md).
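For example, granting that permission with the Azure CLI might look like the following sketch; the registry name, resource group, and the identity's principal ID are placeholders.

```bash
# Look up the registry's resource ID (assumed registry and resource group names).
ACR_ID=$(az acr show --name myacr1 --resource-group my-rg --query id --output tsv)

# Grant the managed identity the AcrPull role on that registry.
az role assignment create --assignee "<IDENTITY_PRINCIPAL_ID>" --role AcrPull --scope "$ACR_ID"
```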
+
+#### Configure a user-assigned managed identity
+
+To configure a user-assigned managed identity:
+
+1. Create the user-assigned identity if it doesn't exist.
+1. Give the user-assigned identity `AcrPull` permission to your private repository.
+1. Add the identity to your container app configuration as shown above.
+
+For more information about configuring user-assigned identities, see [Add a user-assigned identity](managed-identity.md#add-a-user-assigned-identity).
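As a sketch, steps 1 and 2 might be scripted with the Azure CLI as follows (all names are placeholders); the resulting identity resource ID is what you reference in the `registries` array shown earlier.

```bash
# 1. Create the user-assigned identity and capture its IDs (assumed names).
az identity create --name my-acr-pull-identity --resource-group my-rg
IDENTITY_ID=$(az identity show --name my-acr-pull-identity --resource-group my-rg --query id --output tsv)
PRINCIPAL_ID=$(az identity show --name my-acr-pull-identity --resource-group my-rg --query principalId --output tsv)

# 2. Give the identity AcrPull permission on the private registry.
ACR_ID=$(az acr show --name myacr1 --resource-group my-rg --query id --output tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role AcrPull --scope "$ACR_ID"

# Use $IDENTITY_ID as the key in "userAssignedIdentities" and as the "identity"
# value for the registry, as in the earlier JSON example.
```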
++
+#### Configure a system-assigned managed identity
+
+System-assigned identities are created at the time your container app is created, and therefore, won't have `AcrPull` access to your Azure Container Registry. As a result, the image can't be pulled from your private registry when your app is first deployed.
+
+To configure a system-assigned identity, you must use one of the following methods.
+
+- **Option 1**: Use a public registry for the initial deployment:
+ 1. Create your container app using a public image and a system-assigned identity.
+ 1. Give the new system-assigned identity `AcrPull` access to your private Azure Container Registry.
+ 1. Update your container app replacing the public image with the image from your private Azure Container Registry.
+- **Option 2**: Restart your app after assigning permissions:
+ 1. Create your container app using a private image and a system-assigned identity. (The deployment will result in a failure to pull the image.)
+ 1. Give the new system-assigned identity `AcrPull` access to your private Azure Container Registry.
+ 1. Restart your container app revision.
+
+For more information about configuring system-assigned identities, see [Add a system-assigned identity](managed-identity.md#add-a-system-assigned-identity).
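For example, option 1 might be scripted with the Azure CLI roughly as follows. This is only a sketch: app, environment, registry, and image names are placeholders, and the exact `az containerapp` parameters depend on the version of the containerapp CLI extension you have installed.

```bash
# 1. Create the app from a public image, then add a system-assigned identity.
az containerapp create --name my-app --resource-group my-rg \
  --environment my-environment \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest
az containerapp identity assign --name my-app --resource-group my-rg --system-assigned

# 2. Give the new system-assigned identity AcrPull access to the private registry.
PRINCIPAL_ID=$(az containerapp identity show --name my-app --resource-group my-rg --query principalId --output tsv)
ACR_ID=$(az acr show --name myacr1 --resource-group my-rg --query id --output tsv)
az role assignment create --assignee "$PRINCIPAL_ID" --role AcrPull --scope "$ACR_ID"

# 3. Point the app at the private image. The registry must also be configured with
#    "identity": "system" in the registries array, as in the earlier JSON example.
az containerapp update --name my-app --resource-group my-rg \
  --image myacr1.azurecr.io/my-image:latest
```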
+ ## Limitations Azure Container Apps has the following limitations:
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Previously updated : 04/11/2022 Last updated : 06/02/2022
With managed identities:
- You can use role-based access control to grant specific permissions to a managed identity. - System-assigned identities are automatically created and managed. They're deleted when your container app is deleted. - You can add and delete user-assigned identities and assign them to multiple resources. They're independent of your container app's life cycle.
+- You can use managed identity to [authenticate with a private Azure Container Registry](containers.md#container-registries) without a username and password to pull containers for your Container App.
+ ### Common use cases
User-assigned identities are ideal for workloads that:
## Limitations
-The identity is only available within a running container, which means you can't use a managed identity to:
--- Pull an image from Azure Container Registry-- Define scaling rules or Dapr configuration
- - To access resources that require a connection string or key, such as storage resources, you'll still need to include the connection string or key in the `secretRef` of the scaling rule.
+The identity is only available within a running container, which means you can't use a managed identity in scaling rules or Dapr configuration. To access resources that require a connection string or key, such as storage resources, you'll still need to include the connection string or key in the `secretRef` of the scaling rule.
## Configure managed identities
A container app with a managed identity exposes the identity endpoint by definin
- IDENTITY_ENDPOINT - local URL from which your container app can request tokens. - IDENTITY_HEADER - a header used to help mitigate server-side request forgery (SSRF) attacks. The value is rotated by the platform.
-To get a token for a resource, make an HTTP GET request to this endpoint, including the following parameters:
+To get a token for a resource, make an HTTP GET request to the endpoint, including the following parameters:
| Parameter name | In | Description| ||||
-| resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. This could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
+| resource | Query | The Azure AD resource URI of the resource for which a token should be obtained. The resource could be one of the [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) or any other resource URI. |
| api-version | Query | The version of the token API to be used. Use "2019-08-01" or later. |
| X-IDENTITY-HEADER | Header | The value of the `IDENTITY_HEADER` environment variable. This header mitigates server-side request forgery (SSRF) attacks. |
| client_id | Query | (Optional) The client ID of the user-assigned identity to be used. Can't be used on a request that includes `principal_id`, `mi_res_id`, or `object_id`. If all ID parameters (`client_id`, `principal_id`, `object_id`, and `mi_res_id`) are omitted, the system-assigned identity is used.|
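For example, from inside a running replica, a token request for Azure Key Vault might look like the following sketch (any resource URI that supports Azure AD authentication can be substituted):

```bash
# Request a token from the local identity endpoint injected by the platform.
# IDENTITY_ENDPOINT and IDENTITY_HEADER are set automatically in the container.
curl --silent --get "$IDENTITY_ENDPOINT" \
  --data-urlencode "resource=https://vault.azure.net" \
  --data-urlencode "api-version=2019-08-01" \
  --header "X-IDENTITY-HEADER: $IDENTITY_HEADER"
```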
cosmos-db Access Key Vault Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/access-key-vault-managed-identity.md
In this step, create an access policy in Azure Key Vault using the previously ma
1. Use the [``az keyvault set-policy``](/cli/azure/keyvault#az-keyvault-set-policy) command to create an access policy in Azure Key Vault that gives the Azure Cosmos DB managed identity permission to access Key Vault. Specifically, the policy will use the **key-permissions** parameters to grant permissions to ``get``, ``list``, and ``import`` keys.
- ```azurecli-itneractive
+ ```azurecli-interactive
az keyvault set-policy \ --name $keyVaultName \ --object-id $principal \
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/find-request-unit-charge.md
Title: Find request unit (RU) charge for a SQL query in Azure Cosmos DB
-description: Learn how to find the request unit (RU) charge for SQL queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java, Python, and Node.js languages to find the RU charge.
-
+ Title: Find request unit charge for a SQL query in Azure Cosmos DB
+description: Find the request unit charge for SQL queries against containers created with Azure Cosmos DB, using the Azure portal, .NET, Java, Python, or Node.js.
+ Previously updated : 10/14/2020- Last updated : 06/02/2022+ ms.devlang: csharp, java, javascript, python-+
+- devx-track-js
+- kr2b-contr-experiment
-# Find the request unit charge for operations executed in Azure Cosmos DB SQL API
+
+# Find the request unit charge for operations in Azure Cosmos DB SQL API
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
-The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, costs are always measured by RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and its considerations](../request-units.md) article.
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by *request units* (RU). *Request charge* is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your container, costs are always measured in RUs. Whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see [Request Units in Azure Cosmos DB](../request-units.md).
-This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB SQL API. If you are using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md), [Cassandra API](../cassandr) articles to find the RU/s charge.
+This article presents the different ways that you can find the request unit consumption for any operation run against a container in Azure Cosmos DB SQL API. If you're using a different API, see [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md), [Cassandra API](../cassandr).
-Currently, you can measure this consumption only by using the Azure portal or by inspecting the response sent back from Azure Cosmos DB through one of the SDKs. If you're using the SQL API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
+Currently, you can measure consumption only by using the Azure portal or by inspecting the response sent from Azure Cosmos DB through one of the SDKs. If you're using the SQL API, you have multiple options for finding the request charge for an operation.
## Use the Azure portal
Currently, you can measure this consumption only by using the Azure portal or by
1. Select **Query Stats** to display the actual request charge for the request you executed.
+ :::image type="content" source="../media/find-request-unit-charge/portal-sql-query.png" alt-text="Screenshot of a SQL query request charge in the Azure portal.":::
## Use the .NET SDK
For more information, see [Quickstart: Build a Python app by using an Azure Cosm
To learn about optimizing your RU consumption, see these articles:
-* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Request Units in Azure Cosmos DB](../request-units.md)
* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md) * [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md) * [Globally scale provisioned throughput](../request-units.md)
-* [Provision throughput on containers and databases](../set-throughput.md)
+* [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md)
* [Provision throughput for a container](how-to-provision-container-throughput.md)
-* [Monitor and debug with metrics in Azure Cosmos DB](../use-metrics.md)
+* [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
cosmos-db Migrate Dotnet V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-dotnet-v3.md
Previously updated : 04/07/2022 Last updated : 06/01/2022 ms.devlang: csharp
The following classes have been replaced on the 3.0 SDK:
The Microsoft.Azure.Documents.UriFactory class has been replaced by the fluent design.
-Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
- # [.NET SDK v3](#tab/dotnet-v3) ```csharp
-private readonly CosmosClient _client;
-private readonly Container _container;
-
-public Program()
-{
- // Client should be a singleton
- _client = new CosmosClient(
- accountEndpoint: "https://testcosmos.documents.azure.com:443/",
- authKeyOrResourceToken: "SuperSecretKey",
- clientOptions: new CosmosClientOptions()
- {
- ApplicationPreferredRegions = new List<string>()
- {
- Regions.EastUS,
- Regions.WestUS,
- }
- });
-
- _container = _client.GetContainer("DatabaseName","ContainerName");
-}
-
-private async Task CreateItemAsync(SalesOrder salesOrder)
-{
- ItemResponse<SalesOrder> response = await this._container.CreateItemAsync(
+Container container = client.GetContainer(databaseName, containerName);
+ItemResponse<SalesOrder> response = await container.CreateItemAsync(
salesOrder, new PartitionKey(salesOrder.AccountNumber));
-}
``` # [.NET SDK v2](#tab/dotnet-v2) ```csharp
-private readonly DocumentClient _client;
-private readonly string _databaseName;
-private readonly string _containerName;
-
-public Program()
-{
- ConnectionPolicy connectionPolicy = new ConnectionPolicy()
- {
- ConnectionMode = ConnectionMode.Direct, // Default for v2 is Gateway. v3 is Direct
- ConnectionProtocol = Protocol.Tcp,
- };
-
- connectionPolicy.PreferredLocations.Add(LocationNames.EastUS);
- connectionPolicy.PreferredLocations.Add(LocationNames.WestUS);
-
- // Client should always be a singleton
- _client = new DocumentClient(
- new Uri("https://testcosmos.documents.azure.com:443/"),
- "SuperSecretKey",
- connectionPolicy);
-
- _databaseName = "DatabaseName";
- _containerName = "ContainerName";
-}
-
-private async Task CreateItemAsync(SalesOrder salesOrder)
-{
- Uri collectionUri = UriFactory.CreateDocumentCollectionUri(_databaseName, _containerName)
- await this._client.CreateDocumentAsync(
- collectionUri,
- salesOrder,
- new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber) });
-}
+Uri collectionUri = UriFactory.CreateDocumentCollectionUri(databaseName, containerName);
+await client.CreateDocumentAsync(
+ collectionUri,
+ salesOrder,
+ new RequestOptions { PartitionKey = new PartitionKey(salesOrder.AccountNumber) });
```+
+Because the .NET v3 SDK allows users to configure a custom serialization engine, there's no direct replacement for the `Document` type. When using Newtonsoft.Json (default serialization engine), `JObject` can be used to achieve the same functionality. When using a different serialization engine, you can use its base json document type (for example, `JsonDocument` for System.Text.Json). The recommendation is to use a C# type that reflects the schema of your items instead of relying on generic types.
+ ### Changes to item ID generation Item ID is no longer auto populated in the .NET v3 SDK. Therefore, the Item ID must specifically include a generated ID. View the following example:
cosmos-db Sql Query Bitwise Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-query-bitwise-operators.md
Previously updated : 05/31/2022 Last updated : 06/02/2022 # Bitwise operators in Azure Cosmos DB
The example query's results as a JSON object.
``` > [!IMPORTANT]
-> In this example, the values on the left and right side of the operands are 32-bit integer values.
+> The bitwise operators in Azure Cosmos DB SQL API follow the same behavior as bitwise operators in JavaScript. JavaScript stores numbers as 64-bit floating point numbers, but all bitwise operations are performed on 32-bit binary numbers. Before a bitwise operation is performed, JavaScript converts numbers to 32-bit signed integers. After the bitwise operation is performed, the result is converted back to a 64-bit JavaScript number. For more information about the bitwise operators in JavaScript, see [JavaScript binary bitwise operators at MDN Web Docs](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators#binary_bitwise_operators).
## Next steps
cost-management-billing How To Create Azure Support Request Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/how-to-create-azure-support-request-ea.md
Azure enables you to create and manage support requests, also known as support t
> The Azure portal URL is specific to the Azure cloud where your organization is deployed. > >- Azure portal for commercial use is: [https://portal.azure.com](https://portal.azure.com)
->- Azure portal for Germany is: [https://portal.microsoftazure.de](https://portal.microsoftazure.de)
+>- Azure portal for Germany is: `https://portal.microsoftazure.de`
>- Azure portal for the United States government is: [https://portal.azure.us](https://portal.azure.us) Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. You need a support plan for technical support. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans).
Follow these links to learn more:
* [Azure support ticket REST API](/rest/api/support) * Engage with us on [Twitter](https://twitter.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure)
-* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
+* Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq)
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Continuing from the previous example of the anomaly labeled **Daily run rate dow
Cost anomalies are evaluated for subscriptions daily and compare the day's total cost to a forecasted total based on the last 60 days to account for common patterns in your recent usage. For example, spikes every Monday. Anomaly detection runs 36 hours after the end of the day (UTC) to ensure a complete data set is available.
+The anomaly detection model is a univariate time-series, unsupervised prediction and reconstruction-based model that uses 60 days of historical usage for training, then forecasts expected usage for the day. Anomaly detection forecasting uses a deep learning algorithm called [WaveNet](https://www.deepmind.com/blog/wavenet-a-generative-model-for-raw-audio). Note that this is different from the Cost Management forecast. The total normalized usage is determined to be anomalous if it falls outside the expected range based on a predetermined confidence interval.
+ Anomaly detection is available to every subscription monitored using the cost analysis preview. To enable anomaly detection for your subscriptions, open the cost analysis preview and select your subscription from the scope selector at the top of the page. You'll see a notification informing you that your subscription is onboarded and you'll start to see your anomaly detection status within 24 hours. ## Manually find unexpected cost changes
Here's an example email generated for an anomaly alert.
## Next steps -- Learn about how to [Optimize your cloud investment with Cost Management](../costs/cost-mgt-best-practices.md).
+- Learn about how to [Optimize your cloud investment with Cost Management](../costs/cost-mgt-best-practices.md).
data-factory Concepts Data Flow Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-udf.md
Last updated 04/20/2022
A user defined function is a customized expression you can define to be able to reuse logic across multiple mapping data flows. User defined functions live in a collection called a data flow library to be able to easily group up common sets of customized functions.
-Whenever you find yourself building the same logic in an expression in across multiple mapping data flows this would be a good opportunity to turn that into a user defined function.
+Whenever you find yourself building the same logic in an expression across multiple mapping data flows, this would be a good opportunity to turn that into a user defined function.
> [!IMPORTANT] > User defined functions and mapping data flow libraries are currently in public preview.
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td><b>Monitoring</b></td><td>Multiple updates to Data Factory monitoring experiences</td><td>New updates to the monitoring experience in Data Factory include the ability to export results to a CSV, clear all filters, and open a run in a new tab. Column and result caching is also improved.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531">Learn more</a></td></tr>
-<tr><td><b>User interface</b></td><td>New regional format support</td><td>Choose your language and the regional format that will influence how data such as dates and times appear in the Data Factory Studio monitoring. These language and regional settings affect only the Data Factory Studio user interface and don't change or modify your actual data.</td></tr>
+<tr><td><b>User interface</b></td><td>New regional format support</td><td>Choosing your language and regional format in settings influences how data such as dates and times appear in Azure Data Factory Studio monitoring. For example, the time format in Monitoring appears as "Apr 2, 2022, 3:40:29 pm" when English is chosen as the regional format, and as "2 Apr 2022, 15:40:29" when French is chosen. These settings affect only the Azure Data Factory Studio user interface and don't change or modify your actual data and time zone.</td></tr>
</table>
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection.md
Title: Manage Azure DDoS Protection Standard using the Azure portal
description: Learn how to use Azure DDoS Protection Standard to mitigate an attack. documentationcenter: na-+ editor: '' tags: azure-resource-manager
Last updated 05/04/2022-+
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
To protect machines in hybrid and multicloud environments, Defender for Cloud us
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) > [!TIP]
-> For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
+> For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
## What are the Microsoft Defender for server plans?
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
Title: Microsoft Defender for SQL - the benefits and features description: Learn about the benefits and features of Microsoft Defender for SQL. Previously updated : 01/06/2022 Last updated : 06/01/2022 -- # Introduction to Microsoft Defender for SQL
-Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to secure your databases and their data wherever they're located. Microsoft Defender for SQL includes functionalities for discovering and mitigating potential database vulnerabilities, and detecting anomalous activities that could indicate a threat to your databases.
+Microsoft Defender for SQL includes two Microsoft Defender plans that extend Microsoft Defender for Cloud's [data security package](/azure/azure-sql/database/azure-defender-for-sql) to protect your SQL estate regardless of where it is located (Azure, multi-cloud, or hybrid environments). Microsoft Defender for SQL includes functions that can be used to discover and mitigate potential database vulnerabilities. Defender for SQL can also detect anomalous activities that may be an indication of a threat to your databases.
+
+To protect SQL databases in hybrid and multi-cloud environments, Defender for Cloud uses Azure Arc. Azure Arc connects your hybrid and multi-cloud machines. You can check out the following articles for more information:
+
+- [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md)
+
+- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
+
+- [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
## Availability
Microsoft Defender for SQL includes two Microsoft Defender plans that extend Mic
**Microsoft Defender for SQL** comprises two separate Microsoft Defender plans: - **Microsoft Defender for Azure SQL database servers** protects:+ - [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview)+ - [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview)+ - [Dedicated SQL pool in Azure Synapse](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) - **Microsoft Defender for SQL servers on machines** extends the protections for your Azure-native SQL Servers to fully support hybrid environments and protect SQL servers (all supported version) hosted in Azure, other cloud environments, and even on-premises machines:+ - [SQL Server on Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/)
+
- On-premises SQL servers:+ - [Azure Arc-enabled SQL Server (preview)](/sql/sql-server/azure-arc/overview)
+
- [SQL Server running on Windows machines without Azure Arc](../azure-monitor/agents/agent-windows.md)
+
+ - Multi-cloud SQL servers:
+
+ - [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
+
+ - [Connect your GCP project to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
When you enable either of these plans, all supported resources that exist within the subscription are protected. Future resources created on the same subscription will also be protected.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud-- Previously updated : 05/17/2022 Last updated : 06/02/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Amazon EKS clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md). - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [feature availability table](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).
+ - **Microsoft Defender for SQL** brings threat detection and advanced defenses to your SQL servers running on AWS EC2 and AWS RDS Custom for SQL Server. This plan includes advanced threat protection and vulnerability assessment scanning. You can view the [full list of available features](defender-for-sql-introduction.md).
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|Aspect|Details| |-|:-| |Release state:|General Availability (GA)|
-|Pricing:| The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan for AWS is billed at the same price as for Azure resources. <br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
+|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as for Azure resources.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview, after which it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription. <br> **Administrator** on the AWS account.| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- At least one Amazon EKS cluster with permission to access the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS – eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html). - The resource capacity to create a new SQS queue, Kinesis Data Firehose delivery stream, and S3 bucket in the cluster's region.
+- **To enable the Defender for SQL plan**, you'll need:
+
+ - Microsoft Defender for SQL enabled on your subscription. Learn how to [enable protection on all of your databases](quickstart-enable-database-protections.md).
+
+   - An active AWS account, with EC2 instances running SQL Server or RDS Custom for SQL Server.
+
+   - Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server. To connect a machine manually instead, see the `azcmagent connect` sketch after this list.
+   - (Recommended) Use the auto-provisioning process to install Azure Arc on all of your existing and future EC2 instances.
+
+     Auto-provisioning is managed by AWS Systems Manager (SSM) using the SSM Agent. Some Amazon Machine Images (AMIs) already have the SSM Agent preinstalled; those AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, install it by using the relevant instructions from Amazon:
+ - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
+
+ > [!NOTE]
+ > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
+
+ - Additional extensions should be enabled on the Arc-connected machines.
+   - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has a security solution installed. The LA agent is currently configured at the subscription level, so all of your multicloud AWS accounts and GCP projects under the same subscription will inherit the subscription settings.
+
+ Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ - **To enable the Defender for Servers plan**, you'll need: - Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in [Enable enhanced security features](enable-enhanced-security.md).
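If auto-provisioning isn't an option for a particular machine, you can connect it to Azure Arc manually with the Connected Machine agent (`azcmagent`), as referenced in the Azure Arc prerequisite above. The sketch below is illustrative only: the service principal, resource group, and region values are placeholders, and the Azure Arc onboarding flow in the portal generates the exact script for your environment.

```bash
# Illustrative sketch: manually connect an EC2 instance (or other non-Azure machine)
# to Azure Arc. Run on the machine itself after installing the Azure Connected Machine agent.
# All values below are placeholders.
azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<secret>" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --resource-group "<resource-group>" \
  --location "<azure-region>"

# Verify the connection and the reported Azure resource details.
azcmagent show
```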
If you have any existing connectors created with the classic cloud connectors ex
- (Optional) Select **Configure**, to edit the configuration as required. If you choose to disable this configuration, the `Threat detection (control plane)` feature will be disabled. Learn more about the [feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+1. By default, the **Databases** plan is set to **On**. This setting is necessary to extend Defender for SQL's coverage to your AWS EC2 instances and RDS Custom for SQL Server.
+
+   - (Optional) Select **Configure** to edit the configuration as required. We recommend that you leave it set to the default configuration.
+ 1. Select **Next: Configure access**. 1. Download the CloudFormation template.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud-- Previously updated : 05/17/2022 Last updated : 06/01/2022 zone_pivot_groups: connect-gcp-accounts
To protect your GCP-based resources, you can connect an account in two different
- **Defender for Cloud's CSPM features** extend to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations, and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud-enabled feature helping you manage your GCP resources alongside your Azure resources. - **Microsoft Defender for Servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md). - **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Google Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations, and more. You can view the full list of available features in [Defender for Containers feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
+ - **Microsoft Defender for SQL** brings threat detection and advanced defenses to your SQL servers running on GCP Compute Engine instances. This plan includes advanced threat protection and vulnerability assessment scanning. You can view the [full list of available features](defender-for-sql-introduction.md).
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
To protect your GCP-based resources, you can connect an account in two different
|Aspect|Details| |-|:-| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-|Pricing:|The **CSPM plan** is free.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.|
+|Pricing:|The **CSPM plan** is free.<br>The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as for Azure resources.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview, after which it will be billed for GCP at the same price as for Azure resources.|
|Required roles and permissions:| **Contributor** on the relevant Azure Subscription <br> **Owner** on the GCP organization or project| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
Follow the steps below to create your GCP cloud connector.
|--|--| | CSPM service account reader role <br> Microsoft Defender for Cloud identity federation <br> CSPM identity pool <br>*Microsoft Defender for Servers* service account (when the servers plan is enabled) <br>*Azure-Arc for servers onboarding* service account (when the Arc for servers auto-provisioning is enabled) | Microsoft Defender Containers' service account role <br> Microsoft Defender Data Collector service account role <br> Microsoft Defender for Cloud identity pool |
-1. (**Servers only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
+1. (**Servers/SQL only**) When Arc auto-provisioning is enabled, copy the unique numeric ID presented at the end of the Cloud Shell script.
- :::image type="content" source="media/quickstart-onboard-gcp/powershell-unique-id.png" alt-text="Screenshot showing the unique numeric I D to be copied." lightbox="media/quickstart-onboard-gcp/powershell-unique-id-expanded.png":::
- To locate the unique numeric ID in the GCP portal, Navigate to **IAM & Admin** > **Service Accounts**, in the Name column, locate `Azure-Arc for servers onboarding` and copy the unique numeric ID number (OAuth 2 Client ID).
+To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** > **Service Accounts**. In the Name column, locate `Azure-Arc for servers onboarding` and copy the unique numeric ID (OAuth 2 Client ID).
1. Navigate back to the Microsoft Defender for Cloud portal. 1. (Optional) If you changed the name of any of the resources, update the names in the appropriate fields.
-1. (**Servers only**) Select **Azure-Arc for servers onboarding**
+1. (**Servers/SQL only**) Select **Azure-Arc for servers onboarding**
:::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen.":::
To have full visibility to Microsoft Defender for Servers security content, ensu
1. Continue from step number 8, of the [Connect your GCP projects](#connect-your-gcp-projects) instructions.
+### Configure the Databases plan
+
+Connect your GCP VM instances to Azure Arc in order to have full visibility to Microsoft Defender for SQL security content.
+
+Microsoft Defender for SQL brings threat detection and vulnerability assessment to your GCP VM instances.
+To have full visibility to Microsoft Defender for SQL security content, ensure you have the following requirements configured:
+
+- The Microsoft Defender for SQL servers on machines plan enabled on your subscription. Learn how to enable the plan in the [Enable enhanced security features](quickstart-enable-database-protections.md) article.
+
+- Azure Arc for servers installed on your VM instances.
+   - **(Recommended) Auto-provisioning** - Auto-provisioning is enabled by default in the onboarding process and requires owner permissions on the subscription. The Arc auto-provisioning process uses the OS config agent on the GCP side. Learn more about the [OS config agent availability on GCP machines](https://cloud.google.com/compute/docs/images/os-details#vm-manager).
+
+ > [!NOTE]
+   > The Arc auto-provisioning process leverages the VM Manager on your Google Cloud Platform to enforce policies on your VMs through the OS config agent. A VM with an [Active OS agent](https://cloud.google.com/compute/docs/manage-os#agent-state) will incur a cost according to GCP. Refer to [GCP's technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing) to see how this may affect your account.
+ > <br><br> Microsoft Defender for Servers does not install the OS config agent to a VM that does not have it installed. However, Microsoft Defender for Servers will enable communication between the OS config agent and the OS config service if the agent is already installed but not communicating with the service.
+ > <br><br> This can change the OS config agent from `inactive` to `active`, and will lead to additional costs.
+- Additional extensions should be enabled on the Arc-connected machines.
+ - SQL servers on machines. Ensure the plan is enabled on your subscription.
+   - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has a security solution installed.
+
+    The LA agent and the SQL servers on machines plan are currently configured at the subscription level, so all of the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings, which may result in additional charges.
+
+    Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud). A minimal CLI sketch for checking the subscription's default auto-provisioning setting follows this list.
+
+ > [!NOTE]
+   > Defender for SQL assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage them:
+ **Cloud**, **InstanceName**, **MDFCSecurityConnector**, **MachineId**, **ProjectId**, **ProjectNumber**
+- Automatic SQL server discovery and registration. Enable these settings to allow automatic discovery and registration of SQL servers, providing centralized SQL asset inventory and management.
+
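As referenced in the Log Analytics agent item above, the subscription's classic default auto-provisioning setting can also be inspected or switched from the Azure CLI. This is a minimal sketch and covers only that default (Log Analytics agent) setting; the extension-level toggles for Azure Arc and SQL servers on machines are configured in the portal as described in the steps below.

```azurecli
# Minimal sketch: check and enable the default auto-provisioning setting.
# This covers only the classic Log Analytics agent setting; extension-level
# toggles (Azure Arc, SQL servers on machines) are managed in the portal.
az security auto-provisioning-setting show --name "default"
az security auto-provisioning-setting update --name "default" --auto-provision "On"
```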
+**To configure the Databases plan**:
+
+1. Follow the steps to [Connect your GCP projects](#connect-your-gcp-projects).
+
+1. On the Select plans screen, select **Configure**.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to click to configure the Databases plan.":::
+
+1. On the Auto provisioning screen, toggle the switches on or off according to your needs.
+
+ :::image type="content" source="media/quickstart-onboard-gcp/auto-provision-databases-screen.png" alt-text="Screenshot showing the toggle switches for the Databases plan.":::
+
+ > [!Note]
+ > If Azure Arc is toggled **Off**, you will need to follow the manual installation process mentioned above.
+
+1. Select **Save**.
+
+1. Continue from step number 8 of the [Connect your GCP projects](#connect-your-gcp-projects) instructions.
+ ### Configure the Containers plan Microsoft Defender for Containers brings threat detection, and advanced defenses to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers, and to fully protect GCP clusters, ensure you have the following requirements configured:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## June 2022
+
+Updates in June include:
+
+- [General availability (GA) of Defender for SQL on machines for AWS and GCP environments](#general-availability-ga-of-defender-for-sql-on-machines-for-aws-and-gcp-environments)
+
+### General availability (GA) of Defender for SQL on machines for AWS and GCP environments
+
+The database protection capabilities provided by Microsoft Defender for Cloud have added support for SQL servers that are hosted in either AWS or GCP environments.
+
+Using Defender for SQL, enterprises can now protect their entire database estate, hosted in Azure, AWS, GCP and on-premises machines.
+
+Microsoft Defender for SQL provides a unified multicloud experience to view security recommendations, security alerts, and vulnerability assessment findings for both the SQL server and the underlying Windows OS.
+
+Using the multicloud onboarding experience, you can enable and enforce database protection for SQL servers running on AWS EC2, AWS RDS Custom for SQL Server, and GCP Compute Engine. After you enable either of these plans, all supported resources that exist within the subscription are protected. Future resources created on the same subscription will also be protected.
+
+Learn how to protect and connect your [AWS environment](quickstart-onboard-aws.md) and your [GCP organization](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
+ ## May 2022 Updates in May include:
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Security posture for Microsoft Defender for Cloud description: Description of Microsoft Defender for Cloud's secure score and its security controls --- Previously updated : 04/03/2022+++ Last updated : 06/02/2022 # Security posture for Microsoft Defender for Cloud ## Introduction to secure score
-Microsoft Defender for Cloud has two main goals:
+Microsoft Defender for Cloud has two main goals:
- to help you understand your current security situation - to help you efficiently and effectively improve your security
You can group this section by environment by selecting the Group by Environment
:::image type="content" source="media/secure-score-security-controls/bottom-half.png" alt-text="Screenshot of the bottom half of the security posture page.":::
-## How your secure score is calculated
+## How your secure score is calculated
The contribution of each security control towards the overall secure score is shown on the recommendations page.
In this example:
- **Current score** - :::image type="icon" source="media/secure-score-security-controls/current-score.png" border="false":::
- The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources].
+ The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources].
Each control contributes towards the total score. In this example, the control is contributing 2.00 points to current total secure score.
In this example:
For example, Potential score increase=[Score per resource]*[Number of unhealthy resources] or 0.1714 x 30 unhealthy resources = 5.14. -- **Insights** - :::image type="icon" source="media/secure-score-security-controls/insights.png" border="false":::
+- **Insights** - :::image type="icon" source="media/secure-score-security-controls/insights.png" border="false":::
- Gives you extra details for each recommendation. Which can be:
+ Gives you extra details for each recommendation. Which can be:
- - :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: Preview recommendation - This recommendation won't affect your secure score until it's GA.
+ - :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false"::: Preview recommendation - This recommendation won't affect your secure score until it's GA.
- - :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: Fix - From within the recommendation details page, you can use 'Fix' to resolve this issue.
+ - :::image type="icon" source="media/secure-score-security-controls/fix-icon.png" border="false"::: Fix - From within the recommendation details page, you can use 'Fix' to resolve this issue.
- - :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: Enforce - From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource.
+ - :::image type="icon" source="media/secure-score-security-controls/enforce-icon.png" border="false"::: Enforce - From within the recommendation details page, you can automatically deploy a policy to fix this issue whenever someone creates a non-compliant resource.
- - :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: Deny - From within the recommendation details page, you can prevent new resources from being created with this issue.
+ - :::image type="icon" source="media/secure-score-security-controls/deny-icon.png" border="false"::: Deny - From within the recommendation details page, you can prevent new resources from being created with this issue.
### Calculations - understanding your score
In this example:
|**Security control's current score**|<br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>| |**Secure score**<br>Single subscription, or connector|<br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there's a single subscription or connector with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png) <br> This equation is the same equation for a connector with just the word subscription being replaced by the word connector. | |**Secure score**<br>Multiple subscriptions, and connectors|<br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions, and connectors, Defender for Cloud includes a *weight* for each subscription, and connector. The relative weights for your subscriptions, and connectors are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription and connector is calculated in the same way as for a single subscription, or connector, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions, and connectors, the secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions, and connectors.<br><br>Here too, if you go to the recommendations page and add up the potential points available, you'll find that it's the difference between the current score (22) and the maximum score available (58).|
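Written out as a single expression, the single-control example from the table above (a maximum of 6 points and 4 healthy resources out of 78 total) is just a restatement of the same arithmetic:

$$
\text{current score} = \frac{\text{max score}}{\text{healthy} + \text{unhealthy}} \times \text{healthy} = \frac{6}{78} \times 4 \approx 0.31
$$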
-
+ ### Which recommendations are included in the secure score calculations? Only built-in recommendations have an impact on the secure score.
You can also configure the Enforce and Deny options on the relevant recommendati
## Security controls and their recommendations
-The table below lists the security controls in Microsoft Defender for Cloud. For each control, you can see the maximum number of points you can add to your secure score if you remediate *all* of the recommendations listed in the control, for *all* of your resources.
+The table below lists the security controls in Microsoft Defender for Cloud. For each control, you can see the maximum number of points you can add to your secure score if you remediate *all* of the recommendations listed in the control, for *all* of your resources.
+
+The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. You can [disable policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempt specific resources from a recommendation](exempt-resource.md) to further customize the recommendations.
-The set of security recommendations provided with Defender for Cloud is tailored to the available resources in each organization's environment. You can [disable policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempt specific resources from a recommendation](exempt-resource.md) to further customize the recommendations.
-
We recommend every organization carefully reviews their assigned Azure Policy initiatives. > [!TIP]
-> For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
+> For details about reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
Even though Defender for Cloud's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br> [!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)] -- ## FAQ - Secure score ### If I address only three out of four recommendations in a security control, will my secure score change?+ No. It won't change until you remediate all of the recommendations for a single resource. To get the maximum score for a control, you must remediate all recommendations for all resources. ### If a recommendation isn't applicable to me, and I disable it in the policy, will my security control be fulfilled and my secure score updated?+ Yes. We recommend disabling recommendations when they're inapplicable in your environment. For instructions on how to disable a specific recommendation, see [Disable security policies](./tutorial-security-policy.md#disable-security-policies-and-disable-recommendations). ### If a security control offers me zero points towards my secure score, should I ignore it?+ In some cases, you'll see a control max score greater than zero, but the impact is zero. When the incremental score for fixing resources is negligible, it's rounded to zero. Don't ignore these recommendations because they still bring security improvements. The only exception is the "Additional Best Practice" control. Remediating these recommendations won't increase your score, but it will enhance your overall security. ## Next steps
-This article described the secure score and the included security controls.
+This article described the secure score and the included security controls.
> [!div class="nextstepaction"] > [Access and track your secure score](secure-score-access-and-track.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
The **tabs** below show the features of Microsoft Defender for Cloud that are available for Windows and Linux machines.
-## Supported features for virtual machines and servers <a name="vm-server-features"></a>
+## Supported features for virtual machines and servers<a name="vm-server-features"></a>
### [**Windows machines**](#tab/features-windows)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
On this page, you'll learn about changes that are planned for Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's new in Microsoft Defender for Cloud](release-notes.md). - ## Planned changes | Planned change | Estimated date for change | |--|--|
+| [GA support for Arc-enabled Kubernetes clusters](#ga-support-for-arc-enabled-kubernetes-clusters) | July 2022 |
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | June 2022 | | [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | June 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 | | [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
-| [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)|June 2022|
+| [Deprecating the "API App should only be accessible over HTTPS" policy](#deprecating-the-api-app-should-only-be-accessible-over-https-policy)|June 2022|
+
+### GA support for Arc-enabled Kubernetes clusters
+
+**Estimated date for change:** July 2022
+
+Defender for Containers is currently a preview feature for Arc-enabled Kubernetes clusters. In July, Arc-enabled Kubernetes clusters will be charged according to the listing on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Customers that already have clusters onboarded to Arc (on the subscription level) will incur charges.
### Changes to recommendations for managing endpoint protection solutions
In August 2021, we added two new **preview** recommendations to deploy and maint
When the recommendations are released to general availability, they will replace the following existing recommendations: - **Endpoint protection should be installed on your machines** will replace:
- - [Install endpoint protection solution on virtual machines (key: 83f577bd-a1b6-b7e1-0891-12ca19d1e6df)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df)
- - [Install endpoint protection solution on your machines (key: 383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)
+ - [Install endpoint protection solution on virtual machines (key: 83f577bd-a1b6-b7e1-0891-12ca19d1e6df)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df)
+ - [Install endpoint protection solution on your machines (key: 383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/383cf3bc-fdf9-4a02-120a-3e7e36c6bfee)
- **Endpoint protection health issues should be resolved on your machines** will replace the existing recommendation that has the same name. The two recommendations have different assessment keys:
- - Assessment key for the **preview** recommendation: 37a3689a-818e-4a0e-82ac-b1392b9bb000
- - Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a
+ - Assessment key for the **preview** recommendation: 37a3689a-818e-4a0e-82ac-b1392b9bb000
+ - Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a
Learn more:+ - [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) - [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
The new release will bring the following capabilities:
- **Improved freshness interval** - Currently, the identity recommendations have a freshness interval of 24 hours. This update will reduce that interval to 12 hours. -- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
+- **Account exemption capability** - Defender for Cloud has many features you can use to customize your experience and ensure that your secure score reflects your organization's security priorities. For example, you can [exempt resources and recommendations from your secure score](exempt-resource.md).
This update will allow you to exempt specific accounts from evaluation with the six recommendations listed in the following table.
The new release will bring the following capabilities:
|External accounts with read permissions should be removed from your subscription|a8c6a4ad-d51e-88fe-2979-d3ee3c864f8b| |External accounts with write permissions should be removed from your subscription|04e7147b-0deb-9796-2e5c-0336343ceb3d|
-#### Recommendations rename
+#### Recommendations rename
This update will rename two recommendations and revise their descriptions. The assessment keys will remain unchanged.
- | Property | Current value | New update's change |
+ | Property | Current value | New update's change |
|-|-|-| |**First recommendation**| - | - |
- |Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | No change|
- | Name | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions.|
- |Description| User accounts that have been blocked from signing in, should be removed from your subscriptions.|These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
- |Related policy|[Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions.|
- |**Second recommendation**| - | - |
- | Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | No change |
- | Name | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions.|
-|Description|User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions.<br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
- | Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
+ |Assessment key | e52064aa-6853-e252-a11e-dffc675689c2 | No change|
+ | Name | [Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e52064aa-6853-e252-a11e-dffc675689c2) |Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions.|
+ |Description| User accounts that have been blocked from signing in, should be removed from your subscriptions.|These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
+ |Related policy|[Deprecated accounts with owner permissions should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2febb62a0c-3560-49e1-89ed-27e074e9f8ad) | Subscriptions should be purged of accounts that are blocked in Active Directory and have owner permissions.|
+ |**Second recommendation**| - | - |
+ | Assessment key | 00c6d40b-e990-6acf-d4f3-471e747a27c4 | No change |
+ | Name | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/00c6d40b-e990-6acf-d4f3-471e747a27c4)|Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions.|
+|Description|User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed.|User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions.<br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md).|
+ | Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
### Deprecating three VM alerts
The following table lists the alerts that will be deprecated during June 2022.
|--|--|--|--| | **Docker build operation detected on a Kubernetes node** <br>(VM_ImageBuildOnNode) | Machine logs indicate a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | Defense Evasion | Low | | **Suspicious request to Kubernetes API** <br>(VM_KubernetesAPI) | Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | LateralMovement | Medium |
-| **SSH server is running inside a container** <br>(VM_ContainerSSH) | Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached. | Execution | Medium |
+| **SSH server is running inside a container** <br>(VM_ContainerSSH) | Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached. | Execution | Medium |
These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
These alerts are used to notify a user about suspicious activity connected to a
**Estimated date for change:** June 2022
-The policy `API App should only be accessible over HTTPS` is set to be deprecated. This policy will be replaced with `Web Application should only be accessible over HTTPS`, which will be renamed to `App Service apps should only be accessible over HTTPS`.
+The policy `API App should only be accessible over HTTPS` is set to be deprecated. This policy will be replaced with `Web Application should only be accessible over HTTPS`, which will be renamed to `App Service apps should only be accessible over HTTPS`.
To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md)
defender-for-iot Agent Based Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/agent-based-recommendations.md
Last updated 03/28/2022
Defender for IoT scans your Azure resources and IoT devices and provides security recommendations to reduce your attack surface. Security recommendations are actionable and aim to aid customers in complying with security best practices.
-In this article, you will find a list of recommendations, which can be triggered on your IoT devices.
+In this article, you'll find a list of recommendations, which can be triggered on your IoT devices.
## Agent based recommendations
Operational recommendations provide insights and suggestions to improve security
| Severity | Name | Data Source | Description | |--|--|--|--| | Low | Agent sends unutilized messages | Legacy Defender-IoT-micro-agent | 10% or more of security messages were smaller than 4 KB during the last 24 hours. |
-| Low | Security twin configuration not optimal | Legacy Defender-IoT-micro-agent | Security twin configuration is not optimal. |
+| Low | Security twin configuration not optimal | Legacy Defender-IoT-micro-agent | Security twin configuration isn't optimal. |
| Low | Security twin configuration conflict | Legacy Defender-IoT-micro-agent | Conflicts were identified in the security twin configuration. | ## Next steps
defender-for-iot Concept Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-based-security-alerts.md
Defender for IoT continuously analyzes your IoT solution using advanced analytic
In addition, you can create custom alerts based on your knowledge of expected device behavior. An alert acts as an indicator of potential compromise, and should be investigated and remediated.
-In this article, you will find a list of built-in alerts, which can be triggered on your IoT devices.
+In this article, you'll find a list of built-in alerts, which can be triggered on your IoT devices.
In addition to built-in alerts, Defender for IoT allows you to define custom alerts based on expected IoT Hub and/or device behavior. For more information, see [customizable alerts](concept-customizable-security-alerts.md).
For more information, see [customizable alerts](concept-customizable-security-al
| Port forwarding detection | High | Defender-IoT-micro-agent | Initiation of port forwarding to an external IP address detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_PortForwarding | | Possible attempt to disable Auditd logging detected | High | Defender-IoT-micro-agent | Linux Auditd system provides a way to track security-relevant information on the system. The system records as much information about the events that are happening on your system as possible. This information is crucial for mission-critical environments to determine who violated the security policy and the actions they performed. Disabling Auditd logging may prevent your ability to discover violations of security policies used on the system. | Check with the device owner if this was legitimate activity with business reasons. If not, this event may be hiding activity by malicious actors. Immediately escalated the incident to your information security team. | IoT_DisableAuditdLogging | | Reverse shells | High | Defender-IoT-micro-agent | Analysis of host data on a device detected a potential reverse shell. Reverse shells are often used to get a compromised machine to call back into a machine controlled by a malicious actor. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_ReverseShell |
-| Successful local login | High | Defender-IoT-micro-agent | Successful local sign in to the device detected | Make sure the signed in user is an authorized party. | IoT_SucessfulLocalLogin |
+| Successful local login | High | Defender-IoT-micro-agent | Successful local sign-in to the device detected | Make sure the signed in user is an authorized party. | IoT_SucessfulLocalLogin |
| Web shell | High | Defender-IoT-micro-agent | Possible web shell detected. Malicious actors commonly upload a web shell to a compromised machine to gain persistence or for further exploitation. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_WebShell | | Behavior similar to ransomware detected | High | Defender-IoT-micro-agent | Execution of files similar to known ransomware that may prevent users from accessing their system, or personal files, and may demand ransom payment to regain access. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_Ransomware | | Crypto coin miner image | High | Defender-IoT-micro-agent | Execution of a process normally associated with digital currency mining detected. | Verify with the user that ran the command if this was legitimate activity on the device. If not, escalate the alert to the information security team. | IoT_CryptoMiner |
For more information, see [customizable alerts](concept-customizable-security-al
| Name | Severity | Data Source | Description | Suggested remediation steps | Alert type | |--|--|--|--|--|--| | Behavior similar to common Linux bots detected | Medium | Defender-IoT-micro-agent | Execution of a process normally associated with common Linux botnets detected. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_CommonBots |
-| Behavior similar to Fairware ransomware detected | Medium | Defender-IoT-micro-agent | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it is normally only used on discrete folders. In this case, it is being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_FairwareMalware |
-| Crypto coin miner container image detected | Medium | Defender-IoT-micro-agent | Container detecting running known digital currency mining images. | 1. If this behavior is not intended, delete the relevant container image. <br> 2. Make sure that the Docker daemon is not accessible via an unsafe TCP socket. <br> 3. Escalate the alert to the information security team. | IoT_CryptoMinerContainer |
+| Behavior similar to Fairware ransomware detected | Medium | Defender-IoT-micro-agent | Execution of rm -rf commands applied to suspicious locations detected using analysis of host data. Because rm -rf recursively deletes files, it's normally only used on discrete folders. In this case, it's being used in a location that could remove a large amount of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Review with the user that ran the command this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_FairwareMalware |
+| Crypto coin miner container image detected | Medium | Defender-IoT-micro-agent | Container detecting running known digital currency mining images. | 1. If this behavior isn't intended, delete the relevant container image. <br> 2. Make sure that the Docker daemon isn't accessible via an unsafe TCP socket. <br> 3. Escalate the alert to the information security team. | IoT_CryptoMinerContainer |
| Detected suspicious use of the nohup command | Medium | Defender-IoT-micro-agent | Suspicious use of the nohup command on host detected. Malicious actors commonly run the nohup command from a temporary directory, effectively allowing their executables to run in the background. Seeing this command run on files located in a temporary directory is not expected or usual behavior. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_SuspiciousNohup | | Detected suspicious use of the useradd command | Medium | Defender-IoT-micro-agent | Suspicious use of the useradd command detected on the device. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_SuspiciousUseradd | | Exposed Docker daemon by TCP socket | Medium | Defender-IoT-micro-agent | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. Default Docker configuration enables full access to the Docker daemon, by anyone with access to the relevant port. | Review with the user that ran the command if this was legitimate activity that you expect to see on the device. If not, escalate the alert to the information security team. | IoT_ExposedDocker |
defender-for-iot Concept Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-baseline.md
Baseline custom checks establish a custom list of checks for each device baselin
1. In your IoT Hub, locate and select the device you wish to change.
-1. Click on the device, and then click the **azureiotsecurity** module.
+1. Select the device, and then select the **azureiotsecurity** module.
-1. Click **Module Identity Twin**.
+1. Select **Module Identity Twin**.
1. Upload the **baseline custom checks** file to the device.
-1. Add baseline properties to the Defender-IoT-micro-agent and click **Save**.
+1. Add baseline properties to the Defender-IoT-micro-agent and select **Save**.
### Baseline custom check file example
defender-for-iot Concept Customizable Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-customizable-security-alerts.md
The following lists of Defender for IoT alerts are definable by you based on you
| Custom alert - The number of cloud to device messages in AMQP protocol is outside the allowed range | Low | IoT Hub | The number of cloud to device messages (AMQP protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_AmqpC2DMessagesNotInAllowedRange | | Custom alert - The number of rejected cloud to device messages in AMQP protocol is outside the allowed range | Low | IoT Hub | The number of cloud to device messages (AMQP protocol) rejected by the device, within a specific time window is outside the currently configured and allowable range. | IoT_CA_AmqpC2DRejectedMessagesNotInAllowedRange | | Custom alert - The number of device to cloud messages in AMQP protocol is outside the allowed range | Low | IoT Hub | The amount of device to cloud messages (AMQP protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_AmqpD2CMessagesNotInAllowedRange |
-| Custom alert - The number of direct method invokes is outside the allowed range | Low | IoT Hub | The amount of direct method invokes within a specific time window is outside the currently configured and allowable range. | IoT_CA_DirectMethodInvokesNotInAllowedRange |
+| Custom alert - The number of direct method invokes are outside the allowed range | Low | IoT Hub | The amount of direct method invokes within a specific time window is outside the currently configured and allowable range. | IoT_CA_DirectMethodInvokesNotInAllowedRange |
| Custom alert - The number of file uploads is outside the allowed range | Low | IoT Hub | The amount of file uploads within a specific time window is outside the currently configured and allowable range. | IoT_CA_FileUploadsNotInAllowedRange |
-| Custom alert - The number of cloud to device messages in HTTP protocol is outside the allowed range | Low | IoT Hub | The amount of cloud to device messages (HTTP protocol) in a time window is not in the configured allowed range | IoT_CA_HttpC2DMessagesNotInAllowedRange |
-| Custom alert - The number of rejected cloud to device messages in HTTP protocol is not in the allowed range | Low | IoT Hub | The amount of cloud to device messages (HTTP protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_HttpC2DRejectedMessagesNotInAllowedRange |
+| Custom alert - The number of cloud to device messages in HTTP protocol is outside the allowed range | Low | IoT Hub | The amount of cloud to device messages (HTTP protocol) in a time window isn't in the configured allowed range | IoT_CA_HttpC2DMessagesNotInAllowedRange |
+| Custom alert - The number of rejected cloud to device messages in HTTP protocol isn't in the allowed range | Low | IoT Hub | The amount of cloud to device messages (HTTP protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_HttpC2DRejectedMessagesNotInAllowedRange |
| Custom alert - The number of device to cloud messages in HTTP protocol is outside the allowed range | Low | IoT Hub | The amount of device to cloud messages (HTTP protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_HttpD2CMessagesNotInAllowedRange | | Custom alert - The number of cloud to device messages in MQTT protocol is outside the allowed range | Low | IoT Hub | The amount of cloud to device messages (MQTT protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_MqttC2DMessagesNotInAllowedRange | | Custom alert - The number of rejected cloud to device messages in MQTT protocol is outside the allowed range | Low | IoT Hub | The amount of cloud to device messages (MQTT protocol) rejected by the device within a specific time window is outside the currently configured and allowable range. | IoT_CA_MqttC2DRejectedMessagesNotInAllowedRange | | Custom alert - The number of device to cloud messages in MQTT protocol is outside the allowed range | Low | IoT Hub | The amount of device to cloud messages (MQTT protocol) within a specific time window is outside the currently configured and allowable range. | IoT_CA_MqttD2CMessagesNotInAllowedRange | | Custom alert - The number of command queue purges that are outside of the allowed range | Low | IoT Hub | The amount of command queue purges within a specific time window is outside the currently configured and allowable range. | IoT_CA_QueuePurgesNotInAllowedRange |
-| Custom alert - The number of module twin updates is outside the allowed range | Low | IoT Hub | The amount of module twin updates within a specific time window is outside the currently configured and allowable range. | IoT_CA_TwinUpdatesNotInAllowedRange |
-| Custom alert - The number of unauthorized operations is outside the allowed range | Low | IoT Hub | The amount of unauthorized operations within a specific time window is outside the currently configured and allowable range. | IoT_CA_UnauthorizedOperationsNotInAllowedRange |
+| Custom alert - The number of module twin updates is outside the allowed range | Low | IoT Hub | The number of module twin updates within a specific time window is outside the currently configured and allowable range. | IoT_CA_TwinUpdatesNotInAllowedRange |
+| Custom alert - The number of unauthorized operations is outside the allowed range | Low | IoT Hub | The number of unauthorized operations within a specific time window is outside the currently configured and allowable range. | IoT_CA_UnauthorizedOperationsNotInAllowedRange |
## Next steps
defender-for-iot Concept Data Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-data-processing.md
# Data processing and residency
-Microsoft Defender for IoT is a separate service which adds an extra layer of threat protection to the Azure IoT Hub, IoT Edge, and your devices. Defender for IoT may process, and store your data within a different geographic location than your IoT Hub.
+Microsoft Defender for IoT is a separate service, which adds an extra layer of threat protection to the Azure IoT Hub, IoT Edge, and your devices. Defender for IoT may process and store your data in a different geographic location than your IoT Hub.
Mapping between the IoT Hub, and Microsoft Defender for IoT's regions is as follows:
defender-for-iot Concept Event Aggregation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-event-aggregation.md
The data collected for each event is:
| **os_version** | The version of the operating system. For example, `Windows 10`, or `Ubuntu 20.04.1`. | | **os_platform** | The OS of the device. | | **os_arch** | The architecture of the OS. For example, `x86_64`. |
-| **nics** | The network interface controller. The full list of properties are listed below. |
+| **nics** | The network interface controller. The full list of properties is listed below. |
The **nics** properties are composed of the following;
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
These configurations include process, and network activity collectors.
| Setting Name | Setting options | Description | Default | |--|--|--|--|
-| **Devices** | A list of the network devices separated by a comma. <br><br>For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br><br>If a network device isn't listed, the Network Raw events will not be recorded for the missing device.| `eth0` |
+| **Devices** | A list of the network devices separated by a comma. <br><br>For example `eth0,eth1` | Defines the list of network devices (interfaces) that the agent will use to monitor the traffic. <br><br>If a network device isn't listed, the Network Raw events won't be recorded for the missing device.| `eth0` |
| | | | | ## Process collector specific-settings
defender-for-iot Tutorial Configure Agent Based Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-agent-based-solution.md
This tutorial will help you learn how to configure the Microsoft Defender for IoT agent-based solution.
-In this tutorial you will learn how to:
+In this tutorial you'll learn how to:
> [!div class="checklist"] > - Enable data collection
You can choose to add storage of an additional information type as `raw events`.
1. Select a subscription from the drop-down menu.
-1. Select a workspace from the drop-down menu. If you do not already have an existing Log Analytics workspace, you can select **Create New Workspace** to create a new one.
+1. Select a workspace from the drop-down menu. If you don't already have an existing Log Analytics workspace, you can select **Create New Workspace** to create a new one.
1. Verify that the **Access to raw security data** option is selected.
defender-for-iot Tutorial Create Micro Agent Module Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-create-micro-agent-module-twin.md
Title: Create a DefenderforIoTMicroAgent module twin (Preview)
-description: In this tutorial, you will learn how to create a DefenderIotMicroAgent module twin for new devices.
+description: In this tutorial, you'll learn how to create a DefenderIotMicroAgent module twin for new devices.
Last updated 01/16/2022
Defender for IoT uses the module twin mechanism, and maintains a Defender-IoT-mi
To take full advantage of all Defender for IoT features, you need to create, configure, and use the Defender-IoT-micro-agent twins for every device in the service.
-In this tutorial you will learn how to:
+In this tutorial you'll learn how to:
> [!div class="checklist"] > - Create a DefenderIotMicroAgent module twin
defender-for-iot Tutorial Investigate Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-alerts.md
Last updated 01/13/2022
This tutorial will help you learn how to investigate, and remediate the alerts issued by Defender for IoT. Remediating alerts is the best way to ensure compliance, and protection across your IoT solution.
-In this tutorial you will learn how to:
+In this tutorial you'll learn how to:
> [!div class="checklist"] > - Investigate security alerts
defender-for-iot Tutorial Investigate Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-investigate-security-recommendations.md
This tutorial will help you learn how to explore the information available in ea
Timely analysis and mitigation of recommendations by Defender for IoT is the best way to improve security posture and reduce attack surface across your IoT solution.
-In this tutorial you will learn how to:
+In this tutorial you'll learn how to:
> [!div class="checklist"] > - Investigate new recommendations
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
This tutorial will help you learn how to install and authenticate the Defender for IoT micro agent.
-In this tutorial you will learn how to:
+In this tutorial you'll learn how to:
> [!div class="checklist"] > - Download and install the micro agent
In this tutorial you will learn how to:
- An [IoT hub](../../iot-hub/iot-hub-create-through-portal.md). -- Verify you are running one of the following [operating systems](concept-agent-portfolio-overview-os-support.md#agent-portfolio-overview-and-os-support-preview).
+- Verify you're running one of the following [operating systems](concept-agent-portfolio-overview-os-support.md#agent-portfolio-overview-and-os-support-preview).
- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).
Depending on your setup, the appropriate Microsoft package will need to be insta
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/ ```
-1. Ensure that you have updated the apt using the following command:
+1. Ensure that you've updated the apt package index by using the following command:
```bash sudo apt-get update
You will need to copy the module identity connection string from the DefenderIoT
systemctl status defender-iot-micro-agent.service ```
-1. Ensure that the service is stable by making sure it is `active`, and that the uptime of the process is appropriate.
+1. Ensure that the service is stable by making sure it's `active`, and that the uptime of the process is appropriate.
:::image type="content" source="media/quickstart-standalone-agent-binary-installation/active-running.png" alt-text="Check to make sure your service is stable and active.":::
defender-for-iot Upgrade Micro Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/upgrade-micro-agent.md
For more information, see our [release notes for device builders](release-notes.
## Upgrade a standalone micro agent
-1. Ensure that you have upgraded the apt. Run:
+1. Ensure that the apt package index is up to date. Run:
```bash sudo apt-get update
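After refreshing the package index, the upgrade itself is an apt install of the agent package. The sketch below rests on an assumption: `defenderiotmicroagent` is used as an illustrative package name; substitute the package you originally installed.
```bash
# Refresh the package index, then upgrade only the already-installed agent package.
# NOTE: "defenderiotmicroagent" is an assumed package name for illustration.
sudo apt-get update
sudo apt-get install --only-upgrade defenderiotmicroagent

# Verify the service came back up after the upgrade.
systemctl status defender-iot-micro-agent.service
```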
For more information, see our [release notes for device builders](release-notes.
## Upgrade a micro agent for Edge
-1. Ensure that you have upgraded the apt. Run:
+1. Ensure that the apt package index is up to date. Run:
```bash sudo apt-get update
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
Title: Learn about devices discovered by all enterprise sensors
+ Title: Learn about devices discovered by all sensors
description: Use the device inventory in the on-premises management console to get a comprehensive view of device information from connected sensors. Use import, export, and filtering tools to manage this information. Last updated 11/09/2021
-# Investigate all enterprise sensor detections in the device inventory
+# Investigate all sensor detections in the device inventory
You can view device information from connected sensors by using the *device inventory* in the on-premises management console. This feature gives you a comprehensive view of all network information. Use import, export, and filtering tools to manage this information. The status information about the connected sensor versions also appears.
The Defender for IoT Device Inventory displays an extensive range of device attr
1. Devices composed of multiple backplane components (including all racks/slots/modules) 1. Devices acting as network infrastructure such as Switch/Router (w/ multiple NICs).
-Public internet IP addresses, multicast groups, and broadcast groups are not considered inventory devices.
+Public internet IP addresses, multicast groups, and broadcast groups aren't considered inventory devices.
Devices that have been inactive for more than 60 days are classified as inactive Inventory devices.
-## Integrate data into the enterprise device inventory
+## Integrate data into the device inventory
-Data integration capabilities let you enhance the data in the device inventory with information from other enterprise resources. These sources include CMDBs, DNS, firewalls, and Web APIs.
+Data integration capabilities let you enhance the data in the device inventory with information from other resources. These sources include CMDBs, DNS, firewalls, and Web APIs.
You can use this information to learn. For example:
You can integrate data by either:
- Running customized scripts that Defender for IoT provides You can work with Defender for IoT technical support to set up your system to receive Web API queries.
To add data manually:
6. In the upper-right corner of the **Device Inventory** window, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false":::, select **Import Manual Input Columns**, and browse to the CSV file. The new data appears in the **Device Inventory** table.
-To integrate data from other enterprise entities:
+To integrate data from other entities:
1. In the upper-right corner of the **Device Inventory** window, select :::image type="icon" source="media/how-to-work-with-asset-inventory-information/menu-icon-device-inventory.png" border="false"::: and select **Export All Device Inventory**.
defender-for-iot How To Work With The Sensor Device Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md
# Investigate sensor detections in the Device map
-The Device map provides a graphical representation of network devices detected, as well as the connections between them. Use the map to:
+The Device map provides a graphical representation of the network devices detected and the connections between them. Use the map to:
- Retrieve, analyze, and manage device information.
The Device map provides a graphical representation of network devices detected,
## Map search and layout tools
-A variety of map tools help you gain insight into devices and connections of interest to you.
+A variety of map tools help you gain insight into devices and connections of interest to you.
- [Basic search tools](#basic-search-tools) - [Group highlight and filters tools](#group-highlight-and-filters-tools) - [Map display tools](#map-display-tools)
When you search by IP or MAC address, the map displays the device that you searc
Filter or highlight the map based on default and custom device groups. -- Filtering omits the devices that are not in the selected group.-- Highlights displays all devices and highlights the selected items in the group in blue.
+- Filtering omits the devices that aren't in the selected group.
+- Highlighting displays all devices and highlights the selected items in the group in blue.
:::image type="content" source="media/how-to-work-with-maps/group-highlight-and-filters-v2.png" alt-text="Screenshot of the group highlights and filters.":::
The following predefined groups are available:
| Group name | Description | |--|--| | **Known applications** | Devices that use reserved ports, such as TCP. |
-| **non-standard ports (default)** | Devices that use non-standard ports or ports that have not been assigned an alias. |
+| **non-standard ports (default)** | Devices that use non-standard ports or ports that haven't been assigned an alias. |
| **OT protocols (default)** | Devices that handle known OT traffic. | | **Authorization (default)** | Devices that were discovered in the network during the learning process or were officially authorized on the network. | | **Device inventory filters** | Devices grouped according to the filters saved in the Device Inventory table. |
The following predefined groups are available:
| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. | | **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Screenshot of the Add Attack Vector Simulations":::| | **Last seen** | Devices grouped by the time frame they were last seen, for example: One hour, six hours, one day, seven days. |
-| **Not In Active Directory** | All non-PLC devices that are not communicating with the Active Directory. |
+| **Not In Active Directory** | All non-PLC devices that aren't communicating with the Active Directory. |
For information about creating custom groups, see [Define custom groups](#define-custom-groups).
Overall connections are displayed.
**To view specific connections:** 1. Select a device in the map.
-1. Specific connections between devices are displayed in blue. In addition, you will see connections that cross various Purdue levels.
+1. Specific connections between devices are displayed in blue. In addition, you'll see connections that cross various Purdue levels.
:::image type="content" source="media/how-to-work-with-maps/connections-purdue-level.png" alt-text="Screenshot of the detailed map view." lightbox="media/how-to-work-with-maps/connections-purdue-level.png" :::
By default, IT devices are automatically aggregated by subnet, so that the map v
Each subnet is presented as a single entity on the Device map. Options are available to expand subnets to see details, and to collapse or hide them. **To expand an IT subnet:**
-1. Right-click the icon on the map the represents the IT network and select **Expand Network**.
-1. A confirmation box appears, notifying you that the layout change cannot be redone.
+1. Right-click the icon on the map that represents the IT network and select **Expand Network**.
+1. A confirmation box appears, notifying you that the layout change can't be redone.
1. Select **OK**. The IT subnet elements appear on the map. **To collapse an IT subnet:**
The following labels and indicators may appear on devices on the map:
| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Screenshot of the number of alerts"::: | Number of alerts associated with the device | | :::image type="icon" source="media/how-to-work-with-maps/type-v2.png" border="false"::: | Device type icon, for example storage, PLC or historian. | | :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="Screenshot of devices grouped together."::: | Number of devices grouped in a subnet in an IT network. In this example 8. |
-| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="Screenshot of the device learning period"::: | A device that was detected after the Learning period and was not authorized as a network device. |
+| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="Screenshot of the device learning period"::: | A device that was detected after the Learning period and wasn't authorized as a network device. |
| Solid line | Logical connection between devices | | :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="Screenshot of a new device discovered after learning is complete."::: | New device discovered after Learning is complete. |
This section describes device details.
| Location | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise | | Description | A free text field. <br /> Add more information about the device. | | Attributes | Additional information was discovered on the device. For example, view the PLC Run and Key state, the secure status of the PLC, or information on when the state changed. <br /> The information is read only and cannot be updated from the Attributes section. |
-| Scanner or Programming device | **Scanner**: Enable this option if you know that this device is known as scanner and there is no need to alert you about it. <br /> **Programming Device**: Enable this option if you know that this device is known as a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
+| Scanner or Programming device | **Scanner**: Enable this option if you know that this device is known as a scanner and there's no need to alert you about it. <br /> **Programming Device**: Enable this option if you know that this device is known as a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. |
| Network Interfaces | The device interfaces. A RO field. | | Protocols | The protocols used by the device. A RO field. |
-| Firmware | If Backplane information is available, firmware information will not be displayed. |
+| Firmware | If Backplane information is available, firmware information won't be displayed. |
| Address | The device IP address. | | Serial | The device serial number. | | Module Address | The device model and slot number or ID. |
This table lists device types you can manually assign to a device.
### Delete devices
-You may want to delete a device if the information learned is not relevant. For example,
+You may want to delete a device if the information learned isn't relevant. For example,
- A partner contractor at an engineering workstation connects temporarily to perform configuration updates. After the task is completed, the device is removed. - Due to changes in the network, some devices are no longer connected.
-If you do not delete the device, the sensor will continue monitoring it. After 60 days, a notification will appear, recommending that you delete.
+If you don't delete the device, the sensor will continue monitoring it. After 60 days, a notification will appear, recommending that you delete it.
You may receive an alert indicating that the device is unresponsive if another device tries to access it. In this case, your network may be misconfigured.
The event timeline presents the merge event.
:::image type="content" source="media/how-to-work-with-maps/events-time.png" alt-text="Screenshot of an event timeline with merged events.":::
-You cannot undo a device merge. If you mistakenly merged two devices, delete the device and wait for the sensor to rediscover both.
+You can't undo a device merge. If you mistakenly merged two devices, delete the device and wait for the sensor to rediscover both.
**To merge devices:**
You cannot undo a device merge. If you mistakenly merged two devices, delete the
### Authorize and unauthorize devices
-During the Learning period, all the devices discovered in the network are identified as authorized devices. The **Authorized** label does not appear on these devices in the Device map.
+During the Learning period, all the devices discovered in the network are identified as authorized devices. The **Authorized** label doesn't appear on these devices in the Device map.
When a device is discovered after the Learning period, it appears as an unauthorized device. In addition to seeing unauthorized devices in the map, you can also see them in the Device Inventory.
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
You can use the **Elements** list to explore all the elements and active alerts
3D Scenes Studio is extensible to support additional viewing needs. The [viewer component](#viewer) can be embedded into custom applications, and can work in conjunction with 3rd party components.
-Here's an example of what the embedded viewer might look like in an independent application:
--
-The 3D visualization components are available in GitHub, in the [iot-cardboard-js](https://github.com/microsoft/iot-cardboard-js) repository. For instructions on how to use the components to embed 3D experiences into custom applications, see the repository's wiki, [Embedding 3D Scenes](https://github.com/microsoft/iot-cardboard-js/wiki/Embedding-3D-Scenes).
## Recommended limits
digital-twins How To Use 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-3d-scenes-studio.md
You can switch to **View** mode to enable filtering on specific elements and vis
The viewer component can also be embedded into custom applications outside of 3D Scenes Studio, and can work in conjunction with 3rd party components.
-Here's an example of what the embedded viewer might look like in an independent application:
--
-The 3D visualization components are available in GitHub, in the [iot-cardboard-js](https://github.com/microsoft/iot-cardboard-js) repository. For instructions on how to use the components to embed 3D experiences into custom applications, see the repository's wiki, [Embedding 3D Scenes](https://github.com/microsoft/iot-cardboard-js/wiki/Embedding-3D-Scenes).
## Add elements
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 05/11/2022 Last updated : 06/02/2022
Create a second virtual network to simulate an on-premises or other environment.
![second vnet create](./media/dns-resolver-getstarted-portal/vnet-create.png)
+## Link your forwarding ruleset to the second virtual network
+
+To apply your forwarding ruleset to the second virtual network, you must create a virtual link.
+
+1. Search for **DNS forwarding rulesets** in the Azure services list and select your ruleset (ex: **myruleset**).
+2. Select **Virtual Network Links**, select **Add**, choose **myvnet2** and use the default Link Name **myvnet2-link**.
+3. Select **Add** and verify that the link was added successfully. You might need to refresh the page.
+
+ ![Screenshot of ruleset virtual network links.](./media/dns-resolver-getstarted-portal/ruleset-links.png)
+
+## Configure a DNS forwarding ruleset
+
+Add or remove specific rules in your DNS forwarding ruleset as desired, such as:
+- A rule to resolve an Azure Private DNS zone linked to your virtual network: azure.contoso.com.
+- A rule to resolve an on-premises zone: internal.contoso.com.
+- A wildcard rule to forward unmatched DNS queries to a protective DNS service.
+
+### Delete a rule from the forwarding ruleset
+
+Individual rules can be deleted or disabled. In this example, a rule is deleted.
+
+1. Search for **Dns Forwarding Rulesets** in the Azure Services list and select it.
+2. Select the ruleset you previously configured (ex: **myruleset**) and then select **Rules**.
+3. Select the **contosocom** sample rule that you previously configured, select **Delete**, and then select **OK**.
+
+### Add rules to the forwarding ruleset
+
+Add three new conditional forwarding rules to the ruleset.
+
+1. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+ - Rule Name: **AzurePrivate**
+ - Domain Name: **azure.contoso.com.**
+ - Rule State: **Enabled**
+2. Under **Destination IP address** enter 10.0.0.4, and then click **Add**.
+3. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+ - Rule Name: **Internal**
+ - Domain Name: **internal.contoso.com.**
+ - Rule State: **Enabled**
+4. Under **Destination IP address** enter 192.168.1.2, and then click **Add**.
+5. On the **myruleset | Rules** page, click **Add**, and enter the following rule data:
+ - Rule Name: **Wildcard**
+ - Domain Name: **.** (enter only a dot)
+ - Rule State: **Enabled**
+6. Under **Destination IP address** enter 10.5.5.5, and then click **Add**.
+
+ ![Screenshot of a forwarding ruleset example.](./media/dns-resolver-getstarted-portal/ruleset.png)
+
+In this example:
+- 10.0.0.4 is the resolver's inbound endpoint.
+- 192.168.1.2 is an on-premises DNS server.
+- 10.5.5.5 is a protective DNS service.
+ ## Test the private resolver You should now be able to send DNS traffic to your DNS resolver and resolve records based on your forwarding rulesets, including: - Azure DNS private zones linked to the virtual network where the resolver is deployed.-- DNS zones in the public internet DNS namespace. - Private DNS zones that are hosted on-premises.
+- DNS zones in the public internet DNS namespace.
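As a quick sanity check, the sketch below runs `nslookup` from a VM in **myvnet2** (the network linked to the ruleset). The record names are hypothetical; substitute names that actually exist in your zones.
```bash
# Matches the AzurePrivate rule and is forwarded to the inbound endpoint (10.0.0.4).
nslookup db1.azure.contoso.com

# Matches the Internal rule and is forwarded to the on-premises DNS server (192.168.1.2).
nslookup app1.internal.contoso.com

# Matches only the wildcard rule and is forwarded to the protective DNS service (10.5.5.5).
nslookup www.example.com
```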
## Next steps
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 05/10/2022 Last updated : 06/02/2022
This article walks you through the steps to create your first private DNS zone a
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-Azure DNS Private Resolver is a new service currently in public preview. Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-prem environment and vice versa without deploying VM based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+Azure DNS Private Resolver is a new service currently in public preview. Azure DNS Private Resolver enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
## Prerequisites
$virtualNetwork | Set-AzVirtualNetwork
### Create the inbound endpoint
-Create an inbound endpoint to enable name resolution from on-prem or another private location using an IP address that is part of your private virtual network address space.
+Create an inbound endpoint to enable name resolution from on-premises or another private location using an IP address that is part of your private virtual network address space.
```Azure PowerShell $ipconfig = New-AzDnsResolverIPConfigurationObject -PrivateIPAllocationMethod Dynamic -SubnetId /subscriptions/<your sub id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/snet-inbound
$virtualNetworkLink.ToJsonString()
## Create a second virtual network and link it to your DNS forwarding ruleset
-Create a second virtual network to simulate an on-prem or other environment.
+Create a second virtual network to simulate an on-premises or other environment.
```Azure PowerShell $vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "12.0.0.0/8"
$virtualNetworkLink2 = Get-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardi
$virtualNetworkLink2.ToJsonString() ```
-## Create a forwarding rule
+## Create forwarding rules
Create a forwarding rule for a ruleset to one or more target DNS servers. You must specify the fully qualified domain name (FQDN) with a trailing dot. The **New-AzDnsResolverTargetDnsServerObject** cmdlet sets the default port as 53, but you can also specify a unique port. ```Azure PowerShell
-$targetDNS1 = New-AzDnsResolverTargetDnsServerObject -IPAddress 11.0.1.4 -Port 53
-$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 11.0.1.5 -Port 53
-$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "contosocom" -DomainName "contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)
+$targetDNS1 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.2 -Port 53
+$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.3 -Port 53
+$targetDNS3 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.0.0.4 -Port 53
+$targetDNS4 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.5.5.5 -Port 53
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Internal" -DomainName "internal.contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Wildcard" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS4
```
+In this example:
+- 10.0.0.4 is the resolver's inbound endpoint.
+- 192.168.1.2 and 192.168.1.3 are on-premises DNS servers.
+- 10.5.5.5 is a protective DNS service.
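To confirm the rules were created, you can print the last rule object and, assuming the Az.DnsResolver module exposes the matching `Get-` cmdlet, list everything in the ruleset:
```Azure PowerShell
$forwardingrule.ToJsonString()
Get-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset
```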
+ ## Test the private resolver You should now be able to send DNS traffic to your DNS resolver and resolve records based on your forwarding rulesets, including: - Azure DNS private zones linked to the virtual network where the resolver is deployed. - DNS zones in the public internet DNS namespace.-- Private DNS zones that are hosted on-prem.
+- Private DNS zones that are hosted on-premises.
## Delete a DNS resolver
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 05/25/2022 Last updated : 06/02/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Subnets used for DNS resolver have the following limitations:
### Outbound endpoint restrictions Outbound endpoints have the following limitations:-- An outbound endpoint can't be deleted unless the DNS forwarding ruleset and the virtual network links under it are deleted
+- An outbound endpoint can't be deleted unless the DNS forwarding ruleset and the virtual network links under it are deleted.
### Other restrictions -- IPv6 enabled subnets aren't supported in Public Preview-
+- IPv6 enabled subnets aren't supported in Public Preview.
+- Currently, rulesets can't be linked across different subscriptions.
## Next steps
dns Private Dns Getstarted Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-portal.md
Title: Quickstart - Create an Azure private DNS zone using the Azure portal description: In this quickstart, you create and test a private DNS zone and record in Azure DNS. This is a step-by-step guide to create and manage your first private DNS zone and record using the Azure portal. --++ Last updated 05/18/2022
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-filtering.md
Title: Event filtering for Azure Event Grid description: Describes how to filter events when creating an Azure Event Grid subscription. Previously updated : 03/04/2021 Last updated : 06/01/2022 # Understand event filtering for Event Grid subscriptions
Here's an example of using an extension context attribute in a filter.
Advanced filtering has the following limitations:
-* 5 advanced filters and 25 filter values across all the filters per event grid subscription
+* 25 advanced filters and 25 filter values across all the filters per event grid subscription
* 512 characters per string value
-* Five values for **in** and **not in** operators
-* The `StringNotContains` operator is currently not available in the portal.
* Keys with a **`.` (dot)** character in them. For example: `http://schemas.microsoft.com/claims/authnclassreference` or `john.doe@contoso.com`. Currently, there's no support for escape characters in keys. The same key can be used in more than one filter.
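For context, advanced filters are supplied when the event subscription is created. The following is a hedged sketch that assumes the `--advanced-filter` parameter of `az eventgrid event-subscription create`; the resource ID, endpoint, and filter keys are placeholders.
```azurecli
# Each --advanced-filter occurrence counts toward the per-subscription limits above.
az eventgrid event-subscription create \
  --name demo-subscription \
  --source-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --endpoint "https://contoso.example.com/api/events" \
  --advanced-filter data.api StringIn PutBlob PutBlockList \
  --advanced-filter data.contentLength NumberGreaterThan 100000
```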
firewall-manager Manage Web Application Firewall Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/manage-web-application-firewall-policies.md
Previously updated : 06/01/2022 Last updated : 06/02/2022 # Manage Web Application Firewall policies (preview)
You can centrally create and associate Web Application Firewall (WAF) policies f
1. Select **Add** to create a new WAF policy or import settings from an existing WAF policy. :::image type="content" source="media/manage-web-application-firewall-policies/web-application-firewall-policies.png" alt-text="Screenshot of Firewall Manager Web Application Firewall policies.":::
-## Upgrade Application Gateway WAF configuration to WAF policy
-
-For Application Gateway with WAF configuration, you can upgrade the WAF configuration to a WAF policy associated with Application Gateway.
-
-The WAF policy can be shared to multiple application gateways. Also, a WAF policy allows you to take advantage of advanced and new features like bot protection, newer rule sets, and reduced false positives. New features are only released on WAF policies.
-
-To upgrade a WAF configuration to a WAF policy, select **Upgrade from WAF configuration** from the desired application gateway.
-- ## Next steps -- [Configure an Azure DDoS Protection Plan using Azure Firewall Manager (preview)](configure-ddos.md)-
+- [Configure WAF policies using Azure Firewall Manager (preview)](../web-application-firewall/shared/manage-policies.md)
firewall Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/forced-tunneling.md
Previously updated : 01/13/2022 Last updated : 06/02/2022
When you configure a new Azure Firewall, you can route all Internet-bound traffi
Azure Firewall provides automatic SNAT for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is a private IP address range per IANA RFC 1918. This logic works perfectly when you egress directly to the Internet. However, with forced tunneling enabled, Internet-bound traffic is SNATed to one of the firewall private IP addresses in the AzureFirewallSubnet. This hides the source address from your on-premises firewall. You can configure Azure Firewall to not SNAT regardless of the destination IP address by adding *0.0.0.0/0* as your private IP address range. With this configuration, Azure Firewall can never egress directly to the Internet. For more information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md).
+> [!IMPORTANT]
+> If you deploy a Secured Virtual Hub in forced tunnel mode, advertising the default route over ExpressRoute or VPN Gateway is not currently supported. A fix is being investigated.
+ ## Forced tunneling configuration You can configure Forced Tunneling during Firewall creation by enabling Forced Tunnel mode as shown below. To support forced tunneling, Service Management traffic is separated from customer traffic. An additional dedicated subnet named **AzureFirewallManagementSubnet** (minimum subnet size /26) is required with its own associated public IP address. This public IP address is for management traffic. It is used exclusively by the Azure platform and can't be used for any other purpose.
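As a hedged illustration of the prerequisite resources only (forced tunnel mode itself is still enabled when you create the firewall, and the names and address ranges here are examples), the dedicated management subnet and its public IP can be created like this:
```azurecli
# The subnet name must be exactly AzureFirewallManagementSubnet and at least /26.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name AzureFirewallManagementSubnet \
  --address-prefixes 10.0.64.0/26

# Standard, static public IP reserved for the firewall's management traffic.
az network public-ip create \
  --resource-group myResourceGroup \
  --name fw-mgmt-pip \
  --sku Standard \
  --allocation-method Static
```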
hdinsight Hdinsight Use Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-sqoop.md
In this article, you use these two datasets to test Sqoop import and export.
## <a name="create-cluster-and-sql-database"></a>Set up test environment
-The cluster, SQL database, and other objects are created through the Azure portal using an Azure Resource Manager template. The template can be found in [Azure quickstart templates](https://azure.microsoft.com/resources/templates/hdinsight-linux-with-sql-database/). The Resource Manager template calls a bacpac package to deploy the table schemas to a SQL database. The bacpac package is located in a public blob container, https://hditutorialdata.blob.core.windows.net/usesqoop/SqoopTutorial-2016-2-23-11-2.bacpac. If you want to use a private container for the bacpac files, use the following values in the template:
+The cluster, SQL database, and other objects are created through the Azure portal using an Azure Resource Manager template. The template can be found in [Azure quickstart templates](https://azure.microsoft.com/resources/templates/hdinsight-linux-with-sql-database/). The Resource Manager template calls a bacpac package to deploy the table schemas to a SQL database. If you want to use a private container for the bacpac files, use the following values in the template:
```json "storageKeyType": "Primary",
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md
To help you set up the environments, we have created some [Azure Resource Manage
### Set up two virtual networks in two different regions
-To use a template that creates two virtual networks in two different regions and the VPN connection between the VNets, select the following **Deploy to Azure** button. The template definition is stored in a [public blob storage](https://hditutorialdata.blob.core.windows.net/hbaseha/azuredeploy.json).
+To use a template that creates two virtual networks in two different regions and the VPN connection between the VNets, select the following **Deploy to Azure** button.
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fhditutorialdata.blob.core.windows.net%2Fhbaseha%2Fazuredeploy.json" target="_blank"><img src="./media/apache-hbase-replication/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a>
hdinsight Hdinsight Apache Spark With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apache-spark-with-kafka.md
While you can create an Azure virtual network, Kafka, and Spark clusters manuall
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fhditutorialdata.blob.core.windows.net%2Farmtemplates%2Fcreate-linux-based-kafka-spark-cluster-in-vnet-v4.1.json" target="_blank"><img src="./media/hdinsight-apache-spark-with-kafka/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a>
- The Azure Resource Manager template is located at **https://hditutorialdata.blob.core.windows.net/armtemplates/create-linux-based-kafka-spark-cluster-in-vnet-v4.1.json**.
- > [!WARNING] > To guarantee availability of Kafka on HDInsight, your cluster must contain at least three worker nodes. This template creates a Kafka cluster that contains three worker nodes.
hdinsight Hdinsight Apps Install Custom Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-custom-applications.md
You can see the installation status from the tile pinned to the portal dashboard
<a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fhditutorialdata.blob.core.windows.net%2Fhdinsightapps%2Fcreate-linux-based-hadoop-cluster-in-hdinsight.json" target="_blank"><img src="./media/hdinsight-apps-install-custom-applications/hdi-deploy-to-azure1.png" alt="Deploy to Azure button for new cluster"></a>
- The Resource Manager template is located at [https://hditutorialdata.blob.core.windows.net/hdinsightapps/create-linux-based-hadoop-cluster-in-hdinsight.json](https://hditutorialdata.blob.core.windows.net/hdinsightapps/create-linux-based-hadoop-cluster-in-hdinsight.json). To learn how to write this Resource Manager template, see [MSDN: Install an HDInsight application](/rest/api/hdinsight/hdinsight-application).
+ To learn how to write this Resource Manager template, see [MSDN: Install an HDInsight application](/rest/api/hdinsight/hdinsight-application).
2. Follow the instruction to create cluster and install Hue. For more information on creating HDInsight clusters, see [Create Linux-based Hadoop clusters in HDInsight](hdinsight-hadoop-provision-linux-clusters.md).
hdinsight Hdinsight Hadoop Create Linux Clusters Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-adf.md
If you don't have an Azure subscription, [create a free account](https://azure.m
## Prerequisites
-* The PowerShell [Az Module](/powershell/azure/) installed.
+* The PowerShell [Az Module](/powershell/azure/install-az-ps) installed.
* An Azure Active Directory service principal. Once you've created the service principal, be sure to retrieve the **application ID** and **authentication key** using the instructions in the linked article. You need these values later in this tutorial. Also, make sure the service principal is a member of the *Contributor* role of the subscription or the resource group in which the cluster is created. For instructions to retrieve the required values and assign the right roles, see [Create an Azure Active Directory service principal](../active-directory/develop/howto-create-service-principal-portal.md).
This section uses an Azure PowerShell script to create the storage account and c
2. Creates an Azure resource group. 3. Creates an Azure Storage account. 4. Creates a Blob container in the storage account
-5. Copies the sample HiveQL script (**partitionweblogs.hql**) the Blob container. The script is available at [https://hditutorialdata.blob.core.windows.net/adfhiveactivity/script/partitionweblogs.hql](https://hditutorialdata.blob.core.windows.net/adfhiveactivity/script/partitionweblogs.hql). The sample script is already available in another public Blob container. The PowerShell script below makes a copy of these files into the Azure Storage account it creates.
+5. Copies the sample HiveQL script (**partitionweblogs.hql**) to the Blob container. The sample script is already available in another public Blob container. The PowerShell script below makes a copy of these files into the Azure Storage account it creates.
### Create storage account and copy files
hdinsight Apache Kafka Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-scalability.md
To control the number of disks used by the worker nodes in a Kafka cluster, use
], ```
-You can find a complete template that demonstrates how to configure managed disks at [https://hditutorialdata.blob.core.windows.net/armtemplates/create-linux-based-kafka-mirror-cluster-in-vnet-v2.1.json](https://hditutorialdata.blob.core.windows.net/armtemplates/create-linux-based-kafka-mirror-cluster-in-vnet-v2.1.json).
- ## Next steps For more information on working with Apache Kafka on HDInsight, see the following documents:
hdinsight Kafka Mirrormaker 2 0 Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-mirrormaker-2-0-guide.md
The implementation needs to be added to the Kafka classpath for the class refere
[Apache Kafka 2.4 Documentation](https://kafka.apache.org/24/documentation.html)
-[Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking.md)
+[Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking)
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
## Overview
-Automated key rotation in Key Vault allows users to configure Key Vault to automatically generate a new key version at a specified frequency. You can use rotation policy to configure rotation for each individual
-key. Our recommendation is to rotate encryption keys at least every two years to meet cryptographic best practices.
+Automated key rotation in [Key Vault](../general/overview.md) allows users to configure Key Vault to automatically generate a new key version at a specified frequency. For more information about how keys are versioned, see [Key Vault objects, identifiers, and versioning](../general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning).
-This feature enables end-to-end zero-touch rotation for encryption at rest for Azure services with customer-managed key (CMK) stored in Azure Key Vault. Please refer to specific Azure service documentation to see if the service covers end-to-end rotation.
+You can use rotation policy to configure rotation for each individual key. Our recommendation is to rotate encryption keys at least every two years to meet cryptographic best practices.
+
+This feature enables end-to-end zero-touch rotation for encryption at rest for Azure services with customer-managed key (CMK) stored in Azure Key Vault. Please refer to specific Azure service documentation to see if the service covers end-to-end rotation.
+
+For more information about data encryption in Azure, see:
+- [Azure Encryption at Rest](../../security/fundamentals/encryption-atrest.md#azure-encryption-at-rest-components)
+- [Azure services data encryption support table](../../security/fundamentals/encryption-models.md#supporting-services)
## Pricing
Key rotation policy settings:
:::image type="content" source="../media/keys/key-rotation/key-rotation-1.png" alt-text="Rotation policy configuration":::
+> [!IMPORTANT]
+> Key rotation generates a new key version of an existing key with new key material. Ensure that your data encryption solution uses versioned key uri to point to the same key material for encrypt/decrypt, wrap/unwrap operations to avoid disruption to your services. All Azure services are currently following that pattern for data encryption.
+ ## Configure key rotation policy Configure key rotation policy during key creation.
Save key rotation policy to a file. Key rotation policy example:
} } ```
-Set rotation policy on a key passing previously saved file.
+
+Set the rotation policy on a key by passing the previously saved file to the Azure CLI [az keyvault key rotation-policy update](/cli/azure/keyvault/key/rotation-policy) command.
```azurecli az keyvault key rotation-policy update --vault-name <vault-name> --name <key-name> --value </path/to/policy.json>
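For illustration, a minimal policy file in the shape this command accepts might look like the sketch below; the ISO 8601 durations are example values, not recommendations.
```azurecli
# Write an example policy: rotate 18 months after creation, notify 30 days before expiry, expire after 2 years.
cat > rotation-policy.json <<'EOF'
{
  "lifetimeActions": [
    { "trigger": { "timeAfterCreate": "P18M", "timeBeforeExpiry": null }, "action": { "type": "Rotate" } },
    { "trigger": { "timeBeforeExpiry": "P30D" }, "action": { "type": "Notify" } }
  ],
  "attributes": { "expiryTime": "P2Y" }
}
EOF

az keyvault key rotation-policy update --vault-name <vault-name> --name <key-name> --value ./rotation-policy.json
```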
Click 'Rotate Now' to invoke rotation.
:::image type="content" source="../media/keys/key-rotation/key-rotation-4.png" alt-text="Rotation on-demand"::: ### Azure CLI+
+Use the Azure CLI [az keyvault key rotate](/cli/azure/keyvault/key#az-keyvault-key-rotate) command to rotate a key.
+ ```azurecli az keyvault key rotate --vault-name <vault-name> --name <key-name> ```
lab-services Class Type Networking Gns3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-networking-gns3.md
Title: Set up a networking lab with Azure Lab Services and GNS3 | Microsoft Docs
+ Title: Set up a networking lab with GNS3
description: Learn how to set up a lab using Azure Lab Services to teach networking with GNS3. Previously updated : 01/19/2021 Last updated : 04/19/2022 # Set up a lab to teach a networking class
lab-services Class Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-types.md
Title: Example class types on Azure Lab Services | Microsoft Docs description: Provides some types of classes for which you can set up labs using Azure Lab Services. Previously updated : 01/04/2020 Last updated : 04/04/2022 # Class types overview - Azure Lab Services
lab-services Connect Virtual Machine Mac Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-mac-remote-desktop.md
Title: How to connect to an Azure Lab Services VM from Mac | Microsoft Docs
+ Title: Connect to Azure Lab Services VMs from Mac
description: Learn how to connect from a Mac to a virtual machine in Azure Lab Services. Previously updated : 01/04/2020 Last updated : 02/04/2022 # Connect to a VM using Remote Desktop Protocol on a Mac
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
This article shows you how to complete these tasks:
* If your template uses [Liquid filters](https://shopify.github.io/liquid/basics/introduction/#filters), make sure that you follow the [DotLiquid and C# naming conventions](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Designers#filter-and-output-casing), which use *sentence casing*. For all Liquid transforms, make sure that filter names in your template also use sentence casing. Otherwise, the filters won't work.
- For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](http://dotliquidmarkup.org/try-online). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](http://dotliquidmarkup.org/try-online).
+ For example, when you use the `replace` filter, use `Replace`, not `replace`. The same rule applies if you try out examples at [DotLiquid online](http://dotliquidmarkup.org/TryOnline). For more information, see [Shopify Liquid filters](https://shopify.dev/docs/themes/liquid/reference/filters) and [DotLiquid Liquid filters](https://github.com/dotliquid/dotliquid/wiki/DotLiquid-for-Developers#create-your-own-filters). The Shopify specification includes examples for each filter, so for comparison, you can try these examples at [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline).
* The `json` filter from the Shopify extension filters is currently [not implemented in DotLiquid](https://github.com/dotliquid/dotliquid/issues/384). Typically, you can use this filter to prepare text output for JSON string parsing, but you need to use the `Replace` filter instead.
Here are the sample inputs and outputs:
* [Shopify Liquid language and examples](https://shopify.github.io/liquid/basics/introduction/) * [DotLiquid](http://dotliquidmarkup.org/)
-* [DotLiquid - Try online](http://dotliquidmarkup.org/try-online)
+* [DotLiquid - Try online](http://dotliquidmarkup.org/TryOnline)
* [DotLiquid GitHub](https://github.com/dotliquid/dotliquid) * [DotLiquid GitHub issues](https://github.com/dotliquid/dotliquid/issues/) * Learn more about [maps](../logic-apps/logic-apps-enterprise-integration-maps.md)
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
Title: Autoscale managed online endpoints
+ Title: Autoscale online endpoints
-description: Learn to scale up managed endpoints. Get more CPU, memory, disk space, and extra features.
+description: Learn to scale up online endpoints. Get more CPU, memory, disk space, and extra features.
Last updated 04/27/2022
-# Autoscale a managed online endpoint
+# Autoscale an online endpoint
-Autoscale automatically runs the right amount of resources to handle the load on your application. [Managed endpoints](concept-endpoints.md) supports autoscaling through integration with the Azure Monitor autoscale feature.
+Autoscale automatically runs the right amount of resources to handle the load on your application. [Online endpoints](concept-endpoints.md) support autoscaling through integration with the Azure Monitor autoscale feature.
Azure Monitor autoscaling supports a rich set of rules. You can configure metrics-based scaling (for instance, CPU utilization >70%), schedule-based scaling (for example, scaling rules for peak business hours), or a combination. For more information, see [Overview of autoscale in Microsoft Azure](../azure-monitor/autoscale/autoscale-overview.md).
Today, you can manage autoscaling using either the Azure CLI, REST, ARM, or the
## Prerequisites
-* A deployed endpoint. [Deploy and score a machine learning model by using a managed online endpoint](how-to-deploy-managed-online-endpoints.md).
+* A deployed endpoint. [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-managed-online-endpoints.md).
## Define an autoscale profile
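A hedged sketch of a starting point, not the article's own example: this assumes the Azure Monitor autoscale CLI and the online-deployment resource ID format shown in the placeholder; adjust both to your deployment.
```azurecli
# Create an autoscale setting that keeps between 2 and 5 instances, starting at 2.
az monitor autoscale create \
  --resource-group myResourceGroup \
  --name myEndpointAutoscale \
  --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/onlineEndpoints/<endpoint>/deployments/<deployment>" \
  --min-count 2 --max-count 5 --count 2
```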
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
Title: Monitor managed online endpoints
+ Title: Monitor online endpoints
-description: Monitor managed online endpoints and create alerts with Application Insights.
+description: Monitor online endpoints and create alerts with Application Insights.
Previously updated : 10/21/2021 Last updated : 06/01/2022
-# Monitor managed online endpoints
+# Monitor online endpoints
-In this article, you learn how to monitor [Azure Machine Learning managed online endpoints](concept-endpoints.md). Use Application Insights to view metrics and create alerts to stay up to date with your managed online endpoints.
+In this article, you learn how to monitor [Azure Machine Learning online endpoints](concept-endpoints.md). Use Application Insights to view metrics and create alerts to stay up to date with your online endpoints.
In this article you learn how to: > [!div class="checklist"]
-> * View metrics for your managed online endpoint
+> * View metrics for your online endpoint
> * Create a dashboard for your metrics > * Create a metric alert ## Prerequisites -- Deploy an Azure Machine Learning managed online endpoint.
+- Deploy an Azure Machine Learning online endpoint.
- You must have at least [Reader access](../role-based-access-control/role-assignments-portal.md) on the endpoint. ## View metrics Use the following steps to view metrics for a managed endpoint or deployment: 1. Go to the [Azure portal](https://portal.azure.com).
-1. Navigate to the managed online endpoint or deployment resource.
+1. Navigate to the online endpoint or deployment resource.
- Managed online endpoints and deployments are Azure Resource Manager (ARM) resources that can be found by going to their owning resource group. Look for the resource types **Machine Learning online endpoint** and **Machine Learning online deployment**.
+ Online endpoints and deployments are Azure Resource Manager (ARM) resources that can be found by going to their owning resource group. Look for the resource types **Machine Learning online endpoint** and **Machine Learning online deployment**.
1. In the left-hand column, select **Metrics**. ## Available metrics
-Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for managed online endpoints and managed online deployments.
+Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for online endpoints and online deployments.
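If you prefer the CLI to the portal steps above, a minimal sketch for discovering which metrics a given endpoint or deployment exposes might look like the following; the ARM resource ID path is an assumption, so copy the real ID from the resource in the portal.

```bash
# Hedged sketch: enumerate the metric definitions exposed by an online endpoint
# (use the deployment resource ID instead to see deployment-scoped metrics).
# The resource ID below is a placeholder/assumption.
az monitor metrics list-definitions \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<ws>/onlineEndpoints/<endpoint>" \
  --output table
```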
### Metrics at endpoint scope
Split on the following dimensions:
#### Bandwidth throttling
-Bandwidth will be throttled if the limits are exceeded (see managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled:
+Bandwidth will be throttled if the limits are exceeded for _managed_ online endpoints (see the managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled:
- Monitor the "Network bytes" metric - The response trailers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling.
Split on the following dimension:
## Create a dashboard
-You can create custom dashboards to visualize data from multiple sources in the Azure portal, including the metrics for your managed online endpoint. For more information, see [Create custom KPI dashboards using Application Insights](../azure-monitor/app/tutorial-app-dashboards.md#add-custom-metric-chart).
+You can create custom dashboards to visualize data from multiple sources in the Azure portal, including the metrics for your online endpoint. For more information, see [Create custom KPI dashboards using Application Insights](../azure-monitor/app/tutorial-app-dashboards.md#add-custom-metric-chart).
## Create an alert
-You can also create custom alerts to notify you of important status updates to your managed online endpoint:
+You can also create custom alerts to notify you of important status updates to your online endpoint:
1. At the top right of the metrics page, select **New alert rule**.
machine-learning How To Train Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-sdk.md
To run your script on `cpu-cluster`, you need an environment, which has the requ
* A base docker image with a conda YAML to customize further * A docker build context
- Check this [example](https://github.com/Azure/azureml-examples/sdk/assets/environment/environment.ipynb) on how to create custom environments.
+ Check this [example](https://github.com/Azure/azureml-examples/blob/main/sdk/assets/environment/environment.ipynb) on how to create custom environments.
You'll use a curated environment provided by Azure ML for `lightgbm` called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`.
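As a hedged sketch of the custom-environment options mentioned above, shown with the CLI v2 rather than the SDK used in this article: the YAML fields, schema URL, image tag, and package list are assumptions for illustration only.

```bash
# Hedged CLI v2 sketch (the article itself uses the Python SDK): declare a custom
# environment from a base Docker image plus a conda specification. Names, the
# image tag, and the YAML fields are assumptions, not values from the article.
cat > custom-env.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
name: my-custom-lightgbm-env
image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
conda_file: conda.yml
EOF

cat > conda.yml <<'EOF'
name: lightgbm-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - lightgbm
EOF

az ml environment create --file custom-env.yml \
  --resource-group <rg> --workspace-name <ws>
```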
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
There are many ways to create a training job with Azure Machine Learning. You ca
* Or, you may enter the job creation from the left pane. Click **+New** and select **Job**. [![Azure Machine Learning studio left navigation](media/how-to-train-with-ui/left-nav-entry.png)](media/how-to-train-with-ui/left-nav-entry.png)
-* Or, if you're in the Experiment page, you may go to the **All runs** tab and click **Create job**.
-[![Experiment page entry for job creation UI](media/how-to-train-with-ui/experiment-entry.png)](media/how-to-train-with-ui/experiment-entry.png)
- These options will all take you to the job creation panel, which has a wizard for configuring and creating a training job. ## Select compute resources
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-overview.md
In Studio (classic), **datasets** were saved in your workspace and could only be
In Azure Machine Learning, **datasets** are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](./v1/concept-data.md).
-![automobile-price-aml-dataset](./media/migrate-overview/aml-dataset.png)
### Pipeline
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
Title: Managed online endpoints VM SKU list (preview)
+ Title: Managed online endpoints VM SKU list
-description: Lists the VM SKUs that can be used for managed online endpoints (preview) in Azure Machine Learning.
+description: Lists the VM SKUs that can be used for managed online endpoints in Azure Machine Learning.
--++ Previously updated : 04/11/2022 Last updated : 06/02/2022
-# Managed online endpoints SKU list (preview)
+# Managed online endpoints SKU list
-This table shows the VM SKUs that are supported for Azure Machine Learning managed online endpoints (preview).
+This table shows the VM SKUs that are supported for Azure Machine Learning managed online endpoints.
* The `instance_type` attribute used for deployment must be specified in the form "Standard_F4s_v2". The table below lists instance names, for example, F2s v2. These names should be put in the specified form (`Standard_{name}`) for Azure CLI or Azure Resource Manager templates (ARM templates) requests to create and update deployments.
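A minimal sketch of how the `Standard_{name}` form typically appears in a CLI v2 deployment YAML follows; the file layout, schema URL, and placeholder names are assumptions, not values taken from this article.

```bash
# Hedged sketch: the table name "F4s v2" becomes instance_type Standard_F4s_v2.
# Endpoint, model, and file names are placeholders/assumptions.
cat > blue-deployment.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: <endpoint-name>
model: azureml:<model-name>:<version>
instance_type: Standard_F4s_v2
instance_count: 1
EOF

az ml online-deployment create --file blue-deployment.yml \
  --resource-group <rg> --workspace-name <ws>
```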
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-deploy.md
To deploy your pipeline, you must first convert the training pipeline into a rea
> [!NOTE] > By default, the **Web Service Input** will expect the same data schema as the component output data which connects to the same downstream port as it. In this sample, **Web Service Input** and **Automobile price data (Raw)** connect to the same downstream component, hence **Web Service Input** expects the same data schema as **Automobile price data (Raw)**, and the target variable column `price` is included in the schema.
- > However, usually When you score the data, you won't know the target variable values. For such case, you can remove the target variable column in the inference pipeline using **Select Columns in Dataset** component. Make sure that the output of **Select Columns in Dataset** removing target variable column is connected to the same port as the output of the **Web Service Intput** component.
+ > However, usually when you score the data, you won't know the target variable values. In that case, you can remove the target variable column in the inference pipeline by using the **Select Columns in Dataset** component. Make sure that the output of **Select Columns in Dataset**, with the target variable column removed, is connected to the same port as the output of the **Web Service Input** component.
1. Select **Submit**, and use the same compute target and experiment that you used in part one.
machine-learning Tutorial Power Bi Designer Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-power-bi-designer-model.md
There are three ways to create and deploy the model you'll use in Power BI. Thi
But you could instead use one of the other options:
-* [Option A: Train and deploy models by using Jupyter Notebooks](tutorial-power-bi-custom-model.md). This code-first authoring experience uses Jupyter Notebooks that are hosted in Azure Machine Learning Studio.
+* [Option A: Train and deploy models by using Jupyter Notebooks](tutorial-power-bi-custom-model.md). This code-first authoring experience uses Jupyter Notebooks that are hosted in Azure Machine Learning studio.
* [Option C: Train and deploy models by using automated machine learning](tutorial-power-bi-automated-model.md). This no-code authoring experience fully automates data preparation and model training. ## Prerequisites
But you could instead use one of the other options:
In this section, you create a *compute instance*. Compute instances are used to train machine learning models. You also create an *inference cluster* to host the deployed model for real-time scoring.
-Sign in to [Azure Machine Learning Studio](https://ml.azure.com). In the menu on the left, select **Compute** and then **New**:
+Sign in to [Azure Machine Learning studio](https://ml.azure.com). In the menu on the left, select **Compute** and then **New**:
:::image type="content" source="media/tutorial-power-bi/create-new-compute.png" alt-text="Screenshot showing how to create a compute instance.":::
Your inference cluster **Status** is now **Creating**. Your single node cluster
In this tutorial, you use the [Diabetes dataset](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). This dataset is available in [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
-To create the dataset, in the menu on the left, select **Datasets**. Then select **Create dataset**. You see the following options:
+To create the dataset, in the menu on the left, select **Data**. Then select **Create**. You see the following options:
:::image type="content" source="media/tutorial-power-bi/create-dataset.png" alt-text="Screenshot showing how to create a new dataset.":::
The data has 10 baseline input variables, such as age, sex, body mass index, ave
## Create a machine learning model by using the designer
-After you create the compute and datasets, you can use the designer to create the machine learning model. In Azure Machine Learning Studio, select **Designer** and then **New pipeline**:
+After you create the compute and datasets, you can use the designer to create the machine learning model. In Azure Machine Learning studio, select **Designer** and then **New pipeline**:
+ You see a blank *canvas* and a **Settings** menu:
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-managed-app.md
Previously updated : 05/25/2022 Last updated : 06/03/2022 # Plan an Azure managed application for an Azure application offer
Use an Azure Application: Managed application plan when the following conditions
### Rules and known issues for AKS and containers in managed applications - AKS Node Resource Group does not inherit the Deny Assignments as a part of the Azure Managed Application. This means the customer will have full access to the AKS Node Resource Group that is created by the AKS resource when it is included in the managed application, while the Managed Resource Group will have the proper Deny Assignments.
-
+ - The publisher can include Helm charts and other scripts as part of the Azure Managed Application. However, the offer will be treated like a regular managed application deployment and there will be no automatic container-specific processing or Helm chart installation at deployment time. It is the publisher's responsibility to execute the relevant scripts, either at deployment time, using the usual techniques such as VM custom script extension or Azure Deployment Scripts, or after deployment.
-
+ - Same as with the regular Azure Managed Application, it is the publisher's responsibility to ensure that the solution deploys successfully and that all components are properly configured, secured, and operational. For example, publishers can use their own container registry as the source of the images but are fully responsible for the container security and ongoing vulnerability scanning. > [!NOTE]
You must indicate who can manage a managed application in each of the selected c
For each principal ID, you will associate one of the Azure AD built-in roles (Owner or Contributor). The role you select describes the permissions the principal will have on the resources in the customer subscription. For more information, see [Azure built-in roles](../role-based-access-control/built-in-roles.md). For more information about role-based access control (RBAC), see [Get started with RBAC in the Azure portal](../role-based-access-control/overview.md). > [!NOTE]
-> Although you may add up to 100 authorizations per Azure region, it's generally easier to create an Active Directory user group and specify its ID in the "Principal ID." This lets you add more users to the management group after the plan is deployed and reduce the need to update the plan just to add more authorizations.
+> Although you may add up to 100 authorizations per Azure region, we recommend you create an Active Directory user group and specify its ID in the "Principal ID." This lets you add more users to the management group after the plan is deployed and reduces the need to update the plan just to add more authorizations.
## Policy settings
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-single-to-flexible.md
# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (Preview) >[!NOTE]
-> Single Server to Flexible Server migration feature is in public preview.
+> Single Server to Flexible Server migration tool is in public preview.
-Azure Database for PostgreSQL Flexible Server provides zone redundant high availability, control over price, and control over maintenance window. Single to Flexible Server Migration feature enables customers to migrate their databases from Single server to Flexible. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration feature. This feature automates most of the steps needed to do the migration and thus making the migration journey across Azure platforms as seamless as possible. The feature is provided free of cost for customers.
+Azure Database for PostgreSQL Flexible Server provides zone redundant high availability, control over price, and control over the maintenance window. The Single to Flexible Server migration tool enables customers to migrate their databases from Single Server to Flexible Server. See this [documentation](../flexible-server/concepts-compare-single-server-flexible-server.md) to understand the differences between Single and Flexible servers. Customers can initiate migrations for multiple servers and databases in a repeatable fashion using this migration tool. This tool automates most of the steps needed to do the migration, making the migration journey across Azure platforms as seamless as possible. The tool is provided free of cost for customers.
Single to Flexible server migration is enabled in **Preview** in Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US. ## Overview
-Single to Flexible server migration feature provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
+Single to Flexible server migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
-You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration feature automates the following steps:
+You choose the source server and can select up to **8** databases from it. This limitation is per migration task. The migration tool automates the following steps:
1. Creates the migration infrastructure in the region of the target flexible server 2. Creates public IP address and attaches it to the migration infrastructure
You choose the source server and can select up to **8** databases from it. This
7. Creates databases with the same name on the target Flexible server 8. Migrates data from source to target
-Following is the flow diagram for Single to Flexible migration feature.
+The following is the flow diagram for the Single to Flexible migration tool.
:::image type="content" source="./media/concepts-single-to-flexible/concepts-flow-diagram.png" alt-text="Diagram that shows Single to Flexible Server migration." lightbox="./media/concepts-single-to-flexible/concepts-flow-diagram.png"::: **Steps:**
Following is the flow diagram for Single to Flexible migration feature.
4. Initiates migration - (4a) Initial dump/restore (online & offline) (4b) streaming the changes (online only) 5. Cutover to the target
-The migration feature is exposed through **Azure Portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations
+The migration tool is exposed through the **Azure portal** and via easy-to-use **Azure CLI** commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations.
## Migration modes comparison
Based on the above differences, pick the mode that best works for your workloads
### Pre-requisites
-Follow the steps provided in this section before you get started with the single to flexible server migration feature.
+Follow the steps provided in this section before you get started with the single to flexible server migration tool.
-- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration feature. Use the creation [QuickStart guide](../flexible-server/quickstart-create-server-portal.md) to create one.
+- **Target Server Creation** - You need to create the target PostgreSQL flexible server before using the migration tool. Use the creation [QuickStart guide](../flexible-server/quickstart-create-server-portal.md) to create one.
- **Source Server pre-requisites** - You must [enable logical replication](./concepts-logical.md) on the source server.
Follow the steps provided in this section before you get started with the single
>[!NOTE] > Enabling logical replication will require a server reboot for the change to take effect. -- **Azure Active Directory App set up** - It is a critical component of the migration feature. Azure AD App helps with role-based access control as the migration feature needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-azure-ad-app-portal.md) for step-by-step process.
+- **Azure Active Directory App set up** - It is a critical component of the migration tool. Azure AD App helps with role-based access control as the migration tool needs access to both the source and target servers. See [How to setup and configure Azure AD App](./how-to-setup-azure-ad-app-portal.md) for a step-by-step process.
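A minimal CLI sketch of the logical-replication prerequisite above for a Single Server source might look like the following; the parameter name and value are assumptions, so confirm them in the linked logical replication article before running.

```bash
# Hedged sketch: enable logical replication on the Single Server source.
# Parameter name/value are assumptions; server and resource group names are placeholders.
az postgres server configuration set \
  --resource-group <single-server-rg> \
  --server-name <single-server-name> \
  --name azure.replication_support \
  --value logical

# The change takes effect only after a restart (a short downtime).
az postgres server restart \
  --resource-group <single-server-rg> \
  --name <single-server-name>
```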
### Data and schema migration
Once all these pre-requisites are taken care of, you can do the migration. This
### Size limitations -- Databases of sizes up to 1TB can be migrated using this feature. To migrate larger databases or heavy write workloads, reach out to your account team or reach us @ AskAzureDBforPGS2F@microsoft.com.
+- Databases of sizes up to 1 TB can be migrated using this tool. To migrate larger databases or heavy write workloads, reach out to your account team or contact us at AskAzureDBforPGS2F@microsoft.com.
- In one migration attempt, you can migrate up to eight user databases from a single server to flexible server. In case you have more databases to migrate, you can create multiple migrations between the same single and flexible servers.
Once all these pre-requisites are taken care of, you can do the migration. This
### Replication limitations -- Single to Flexible Server migration feature uses logical decoding feature of PostgreSQL to perform the online migration and it comes with the following limitations. See PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
+- Single to Flexible Server migration tool uses the logical decoding feature of PostgreSQL to perform the online migration, and it comes with the following limitations. See PostgreSQL documentation for [logical replication limitations](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
- **DDL commands** are not replicated. - **Sequence** data is not replicated. - **Truncate** commands are not replicated. (**Workaround**: use DELETE instead of TRUNCATE. To avoid accidental TRUNCATE invocations, you can revoke the TRUNCATE privilege from tables.)
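A minimal sketch of the TRUNCATE workaround noted above, assuming placeholder server, database, table, and role names:

```bash
# Hedged sketch: revoke the TRUNCATE privilege so the application role can't
# truncate tables while the online migration is streaming changes.
psql "host=<single-server-name>.postgres.database.azure.com port=5432 dbname=<database> user=<admin-user>@<single-server-name> sslmode=require" \
  -c "REVOKE TRUNCATE ON TABLE public.<table-name> FROM <app-role>;"
```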
Once all these pre-requisites are taken care of, you can do the migration. This
### Other limitations -- The migration feature migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
+- The migration tool migrates only data and schema of the single server databases to flexible server. It does not migrate other features such as server parameters, connection security details, firewall rules, users, roles and permissions. In other words, everything except data and schema must be manually configured in the target flexible server.
- It does not validate the data in flexible server post migration. The customers must manually do this. - The migration tool only migrates user databases including Postgres database and not system/maintenance databases. -- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name can to be created.
+- For failed migrations, there is no option to retry the same migration task. A new migration task with a unique name has to be created.
-- The migration feature does not include assessment of your single server.
+- The migration tool does not include assessment of your single server.
## Best practices
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-single-to-flexible-cli.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using Azure CLI >[!NOTE]
-> Single Server to Flexible Server migration feature is in public preview.
+> Single Server to Flexible Server migration tool is in public preview.
-This quick start article shows you how to use Single to Flexible Server migration feature to migrate databases from Azure database for PostgreSQL Single server to Flexible server.
+This quickstart article shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
## Before you begin
This quick start article shows you how to use Single to Flexible Server migratio
```bash az login ```
-4. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites) which are necessary to get started with the Single to Flexible migration feature.
+4. Take care of the pre-requisites listed in this [**document**](./concepts-single-to-flexible.md#pre-requisites) which are necessary to get started with the Single to Flexible migration tool.
## Migration CLI commands
-Single to Flexible Server migration feature comes with a list of easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **help** parameter to help you with understanding the various options associated with a command and in framing the right syntax for the same.
+Single to Flexible Server migration tool comes with a list of easy-to-use CLI commands for migration-related tasks. All the CLI commands start with **az postgres flexible-server migration**. You can use the **help** parameter to understand the various options associated with a command and to frame the right syntax for it.
```azurecli-interactive az postgres flexible-server migration --help
az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b
The **migration-name** argument used in **create migration** command will be used in other CLI commands such as **update, delete, show** to uniquely identify the migration attempt and to perform the corresponding actions.
-The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md)
+The migration tool offers online and offline modes of migration. To know more about the migration modes and their differences, visit this [link](./concepts-single-to-flexible.md).
Create a migration between a source and target server with a migration mode of your choice. The **create** command needs a JSON file to be passed as part of its **properties** argument.
Create migration parameters:
| **SourceDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution for a virtual network. The FQDN of the single server as per the custom DNS server should be provided for this property. | | **TargetDBServerFullyQualifiedDomainName** | optional | Used when a custom DNS server is used for name resolution inside a virtual network. The FQDN of the flexible server as per the custom DNS server should be provided for this property. <br> **_SourceDBServerFullyQualifiedDomainName_**, **_TargetDBServerFullyQualifiedDomainName_** should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure provided DNS. Otherwise, these parameters should not be included as a part of the JSON file. | | **SecretParameters** | Required | Passwords for admin user for both single server and flexible server along with the Azure AD app credentials. They help to authenticate against the source and target servers and help in checking proper authorization access to the resources.
-| **MigrationResourceGroup** | optional | This section consists of two properties. <br> **ResourceID (optional)** : The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to target. By default, all the components created by this feature are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, then you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)** : In case if your source has public access turned OFF or if your target server is deployed inside a VNet, then specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
+| **MigrationResourceGroup** | optional | This section consists of two properties. <br> **ResourceID (optional)** : The migration infrastructure and other network infrastructure components are created to migrate data and schema from the source to target. By default, all the components created by this tool are provisioned under the resource group of the target server. If you wish to deploy them under a different resource group, then you can assign the resource ID of that resource group to this property. <br> **SubnetResourceID (optional)** : If your source has public access turned OFF or if your target server is deployed inside a VNet, then specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
| **DBsToMigrate** | Required | Specify the list of databases you want to migrate to the flexible server. You can include a maximum of 8 database names at a time. | | **SetupLogicalReplicationOnSourceDBIfNeeded** | Optional | Logical replication can be enabled on the source server automatically by setting this property to **true**. This change in the server settings requires a server restart with a downtime of few minutes (~ 2-3 mins). |
-| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration feature, permission to automatically overwrite databases by setting the value of this property to **true** |
+| **OverwriteDBsinTarget** | Optional | If the target server happens to have an existing database with the same name as the one you are trying to migrate, the migration will pause until you acknowledge that overwrites in the target DBs are allowed. This pause can be avoided by giving the migration tool permission to automatically overwrite databases by setting the value of this property to **true** |
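A hedged sketch of the create command shape with placeholder values follows; the JSON file passed to `--properties` carries the keys listed in the table above (for example, DBsToMigrate, SecretParameters, and the optional MigrationResourceGroup and FQDN settings), and its exact schema should be confirmed with the command's help output.

```bash
# Hedged sketch -- all values are placeholders, and the JSON schema (including
# the nested SecretParameters shape) is not reproduced here; see
# `az postgres flexible-server migration create --help` for the details.
az postgres flexible-server migration create \
  --subscription <subscription-id> \
  --resource-group <target-flexible-server-rg> \
  --name <target-flexible-server-name> \
  --migration-name <unique-migration-name> \
  --properties "migrationbody.json"
```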
### Mode of migrations
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-migrate-single-to-flexible-portal.md
Last updated 05/09/2022
# Migrate Single Server to Flexible Server PostgreSQL using the Azure portal
-This guide shows you how to use Single to Flexible server migration feature to migrate databases from Azure database for PostgreSQL Single server to Flexible server.
+This guide shows you how to use the Single to Flexible Server migration tool to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
## Before you begin
In your subscription, navigate to **Resource Providers** from the left navigatio
## Pre-requisites
-Take care of the pre-requisites listed [here](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration feature.
+Take care of the [pre-requisites](./concepts-single-to-flexible.md#pre-requisites) to get started with the migration tool.
## Configure migration task
-Single to Flexible server migration feature comes with a simple, wizard-based portal experience. Let us get started to know the steps needed to consume the tool from portal.
+Single to Flexible Server migration tool comes with a simple, wizard-based portal experience. The following steps show you how to use the tool from the portal.
1. **Sign into the Azure portal -** Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
Single to Flexible server migration feature comes with a simple, wizard-based po
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration Preview Tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
-4. Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration feature, you will see an empty grid with a prompt to begin your first migration.
+4. Click the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you are using the migration tool, you will see an empty grid with a prompt to begin your first migration.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
The first is the setup tab which has basic information about the migration and t
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-setup.png" alt-text="Screenshot of Setup Tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-setup.png"::: - The **Migration name** is the unique identifier for each migration to this flexible server. This field accepts only alphanumeric characters and does not accept any special characters except **&#39;-&#39;**. The name cannot start with a **&#39;-&#39;** and should be unique for a target server. No two migrations to the same flexible server can have the same name.-- The **Migration resource group** is where all the migration-related components will be created by this migration feature.
+- The **Migration resource group** is where all the migration-related components will be created by this migration tool.
By default, it is the resource group of the target flexible server, and all the components will be cleaned up automatically once the migration completes. If you want to create a temporary resource group for migration-related purposes, create a resource group and select the same from the dropdown.
The source tab prompts you to give details related to the source single server f
Choose the single server from which you want to migrate databases from, in the drop down.
-Once the single server is chosen, the fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration feature to login into the single server to initiate the dump and migration.
+Once the single server is chosen, fields such as **Location, PostgreSQL version, Server admin login name** are automatically pre-populated. The server admin login name is the admin username that was used to create the single server. Enter the password for the **server admin login name**. This is required for the migration tool to sign in to the single server to initiate the dump and migration.
You should also see the list of user databases inside the single server that you can pick for migration. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations using the same experience between the source and target servers.
-The final property in the source tab is migration mode. The migration feature offers online and offline mode of migration. To know more about the migration modes and their differences, please visit this [link](./concepts-single-to-flexible.md).
+The final property in the source tab is migration mode. The migration tool offers online and offline modes of migration. The concepts page talks more about the [migration modes and their differences](./concepts-single-to-flexible.md).
Once you pick the migration mode, the restrictions associated with the mode are displayed.
After filling out all the fields, please click the **Next** button.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-target.png" alt-text="Screenshot of target database server details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-target.png":::
-This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays **server admin login name** which is the admin username that was used during the creation of the flexible server.Enter the corresponding password for the admin user. This is required for the migration feature to login into the flexible server to perform restore operations.
+This tab displays metadata of the flexible server like the **Subscription**, **Resource Group**, **Server name**, **Location**, and **PostgreSQL version**. It displays the **server admin login name**, which is the admin username that was used during the creation of the flexible server. Enter the corresponding password for the admin user. This is required for the migration tool to sign in to the flexible server to perform restore operations.
Choose an option **yes/no** for **Authorize DB overwrite**.
If either source or target is configured in private access, then the networking
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-private.png" alt-text="Screenshot of Networking Private Access configuration." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-private.png":::
-All the fields will be automatically populated with subnet details. This is the subnet in which the migration feature will deploy Azure DMS to move data between the source and target.
+All the fields will be automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure DMS to move data between the source and target.
You can go ahead with the suggested subnet or choose a different subnet. But make sure that the selected subnet can connect to both the source and target servers.
postgresql Overview Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview-single-server.md
Last updated 11/30/2021
[Azure Database for PostgreSQL](./overview.md) powered by the PostgreSQL community edition is available in three deployment modes: -- Single Server-- Flexible Server-- Hyperscale (Citus)
+- [Single Server](./overview-single-server.md)
+- [Flexible Server](../flexible-server/overview.md)
+- [Hyperscale (Citus)](../hyperscale/overview.md)
In this article, we provide an overview and introduction to the core concepts of the Single Server deployment model. To learn about the Flexible Server and Hyperscale (Citus) deployment modes, see the [flexible server overview](../flexible-server/overview.md) and the Hyperscale (Citus) overview, respectively.
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
+
+ Title: Configure an application security group with a private endpoint
+
+description: Learn how to create a private endpoint with an Application Security Group or apply an ASG to an existing private endpoint.
++++ Last updated : 06/02/2022+++
+# Configure an application security group (ASG) with a private endpoint
+
+Azure private endpoints support application security groups for network security. Private endpoints can be associated with an existing ASG in your current infrastructure alongside virtual machines and other network resources.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure web app with a *PremiumV2-tier* or higher app service plan, deployed in your Azure subscription.
+
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+
+ - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
+
+- An existing Application Security Group in your subscription. For more information about ASGs, see [Application security groups](../virtual-network/application-security-groups.md).
+
+ - The example ASG used in this article is named **myASG**. Replace the example with your application security group.
+
+- An existing Azure Virtual Network and subnet in your subscription. For more information about creating a virtual network, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+ - The example virtual network used in this article is named **myVNet**. Replace the example with your virtual network.
+
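If you'd rather script the prerequisite ASG and virtual network than create them in the portal, a minimal sketch might look like the following; the resource group name and address prefixes are assumptions chosen to match the values used later in this article.

```bash
# Hedged sketch: create the prerequisite application security group and virtual
# network named in this article (myASG, myVNet/myBackendSubnet). Address prefixes
# are assumptions that match the 10.0.0.0/24 subnet shown in the steps below.
az network asg create \
  --resource-group myResourceGroup \
  --name myASG

az network vnet create \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefix 10.0.0.0/24
```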
+## Create private endpoint with an ASG
+
+An ASG can be associated with a private endpoint when it's created. The following procedures demonstrate how to associate an ASG with a private endpoint when it's created.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
+
+3. Select **+ Create** in **Private endpoints**.
+
+4. In the **Basics** tab of **Create a private endpoint**, enter or select the following information.
+
+ | Value | Setting |
+ | -- | - |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select your resource group. </br> In this example, it's **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myPrivateEndpoint**. |
+ | Region | Select **East US**. |
+
+5. Select **Next: Resource** at the bottom of the page.
+
+6. In the **Resource** tab, enter or select the following information.
+
+ | Value | Setting |
+ | -- | - |
+ | Connection method | Select **Connect to an Azure resource in my directory.** |
+ | Subscription | Select your subscription |
+ | Resource type | Select **Microsoft.Web/sites**. |
+ | Resource | Select **mywebapp1979**. |
+ | Target subresource | Select **sites**. |
+
+7. Select **Next: Virtual Network** at the bottom of the page.
+
+8. In the **Virtual Network** tab, enter or select the following information.
+
+ | Value | Setting |
+ | -- | - |
+ | **Networking** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select your subnet. </br> In this example, it's **myVNet/myBackendSubnet(10.0.0.0/24)**. |
+ | Enable network policies for all private endpoints in this subnet. | Leave the default of checked. |
+ | **Application security group** | |
+ | Application security group | Select **myASG**. |
+
+ :::image type="content" source="./media/configure-asg-private-endpoint/asg-new-endpoint.png" alt-text="Screenshot of ASG selection when creating a new private endpoint.":::
+
+9. Select **Next: DNS** at the bottom of the page.
+
+10. Select **Next: Tags** at the bottom of the page.
+
+11. Select **Next: Review + create**.
+
+12. Select **Create**.
+
+## Associate an ASG with an existing private endpoint
+
+An ASG can be associated with an existing private endpoint. The following procedures demonstrate how to associate an ASG with an existing private endpoint.
+
+> [!IMPORTANT]
+> You must have a previously deployed private endpoint to proceed with the steps in this section. The example endpoint used in this section is named **myPrivateEndpoint**. Replace the example with your private endpoint.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
+
+3. In **Private endpoints**, select **myPrivateEndpoint**.
+
+4. In **myPrivateEndpoint**, in **Settings**, select **Application security groups**.
+
+5. In **Application security groups**, select **myASG** in the pull-down box.
+
+ :::image type="content" source="./media/configure-asg-private-endpoint/asg-existing-endpoint.png" alt-text="Screenshot of ASG selection when associating with an existing private endpoint.":::
+
+6. Select **Save**.
+
+## Next steps
+
+For more information about Azure Private Link, see:
+
+- [What is Azure Private Link?](private-link-overview.md)
++
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
The scoring of data assets that determine the order search results are returned.
## Self-hosted integration runtime An integration runtime installed on an on-premises machine or virtual machine inside a private network that is used to connect to data on-premises or in a private network. ## Sensitivity label
-Annotations that classify and protect an organization's data. Microsoft Purview integrates with Microsoft Information Protection for creation of sensitivity labels.
+Annotations that classify and protect an organization's data. Microsoft Purview integrates with Microsoft Purview Information Protection for creation of sensitivity labels.
## Sensitivity label report A summary of which sensitivity labels are applied across the data estate. ## Service
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
When setting up scan, you can further scope the scan after providing the databas
* Microsoft Purview doesn't support over 300 columns in the Schema tab and it will show "Additional-Columns-Truncated" if there are more than 300 columns. * Column level lineage is currently not supported in the lineage tab. However, the columnMapping attribute in properties tab of Azure SQL Stored Procedure Run captures column lineage in plain text.
-* Stored procedures with dynamic SQL, running from remote data integration tools like Azure Data Factory is currently not supported
+* Stored procedures running remotely from data integration tools like Azure Data Factory are currently not supported
* Data lineage extraction is currently not supported for Functions or Triggers. * Lineage extraction scan is scheduled and defaulted to run every six hours. Frequency can't be changed. * If SQL views are referenced in stored procedures, they're currently captured as SQL tables.
Now that you've registered your source, follow the below guides to learn more ab
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)-- [Search Data Catalog](how-to-search-catalog.md)
+- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
This article outlines how to register a Power BI tenant in a cross-tenant scenar
- For the cross-tenant scenario, delegated authentication is the only supported option for scanning. - You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account. - If Power BI dataset schema isn't shown after a scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
+- Empty workspaces are skipped.
## Prerequisites
Use any of the following deployment checklists during the setup or for troublesh
1. Make sure Power BI and Microsoft Purview accounts are in cross-tenant.
-2. Make sure Power BI tenant ID is entered correctly during the registration. By default, Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
+1. Make sure Power BI tenant ID is entered correctly during the registration. By default, the Power BI tenant ID that exists in the same Azure Active Directory tenant as Microsoft Purview will be populated.
-3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
+1. Make sure your [PowerBI Metadata model is up to date by enabling metadata scanning.](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning)
-4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
+1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
-5. Check your Azure Key Vault to make sure:
+1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
+
+1. Check your Azure Key Vault to make sure:
1. There are no typos in the password. 2. Microsoft Purview Managed Identity has get/list access to secrets.
-6. Review your credential to validate:
+1. Review your credential to validate:
1. Client ID matches _Application (Client) ID_ of the app registration. 2. Username includes the user principal name such as `johndoe@contoso.com`.
-7. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure:
+1. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure:
1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user. 3. If user is recently created, sign in with the user at least once to make sure password is reset successfully and user can successfully initiate the session. 4. There's no MFA or Conditional Access Policies are enforced on the user.
-8. In Power BI Azure AD tenant, validate App registration settings to make sure:
+1. In Power BI Azure AD tenant, validate App registration settings to make sure:
1. App registration exists in your Azure Active Directory tenant where Power BI tenant is located. 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs: 1. Power BI Service Tenant.Read.All
Use any of the following deployment checklists during the setup or for troublesh
1. Make sure Power BI and Microsoft Purview accounts are in cross-tenant.
-2. Make sure Power BI tenant ID is entered correctly during the registration.By default, Power BI tenant ID that exists in the same Azure Active Directory as Microsoft Purview will be populated.
+1. Make sure Power BI tenant ID is entered correctly during the registration. By default, the Power BI tenant ID that exists in the same Azure Active Directory tenant as Microsoft Purview will be populated.
+
+1. Make sure your [PowerBI Metadata model is up to date by enabling metadata scanning.](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning)
-3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
+1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
-4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
+1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
-5. Check your Azure Key Vault to make sure:
+1. Check your Azure Key Vault to make sure:
1. There are no typos in the password. 2. Microsoft Purview Managed Identity has get/list access to secrets.
-6. Review your credential to validate:
+1. Review your credential to validate:
1. Client ID matches _Application (Client) ID_ of the app registration. 2. Username includes the user principal name such as `johndoe@contoso.com`.
-8. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure:
+1. In Power BI Azure AD tenant, validate Power BI admin user settings to make sure:
1. User is assigned to Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user. 3. If user is recently created, sign in with the user at least once to make sure password is reset successfully and user can successfully initiate the session. 4. There's no MFA or Conditional Access Policies are enforced on the user.
-9. In Power BI Azure AD tenant, validate App registration settings to make sure:
+1. In Power BI Azure AD tenant, validate App registration settings to make sure:
5. App registration exists in your Azure Active Directory tenant where Power BI tenant is located. 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs: 1. Power BI Service Tenant.Read.All
Use any of the following deployment checklists during the setup or for troublesh
2. **Implicit grant and hybrid flows**, **ID tokens (used for implicit and hybrid flows)** is selected. 3. **Allow public client flows** is enabled.
-10. Validate Self-hosted runtime settings:
+1. Validate Self-hosted runtime settings:
8. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM. 9. Network connectivity from Self-hosted runtime to Power BI tenant is enabled. 10. Network connectivity from Self-hosted runtime to Microsoft services is enabled.
purview Register Scan Power Bi Tenant Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-troubleshoot.md
If delegated auth is used:
- Validate if user is assigned to Power BI Administrator role. - If user is recently created, make sure password is reset successfully and user can successfully initiate the session.
+## My schema is not showing up after scanning
+
+It can take some time for the schema to finish the scanning and ingestion process, depending on the size of your Power BI tenant. Currently, if you have a large Power BI tenant, this process could take a few hours.
+ ## Error code: Test connection failed - AASDST50079 - **Message**: `Failed to get access token with given credential to access Power BI tenant. Authentication type PowerBIDelegated Message: AASDST50079 Due to a configuration change made by your administrator or because you moved to a new location, you must enroll in multi-factor authentication.` - **Cause**: Authentication is interrupted, due multi-factor authentication requirement for the Power BI admin user. -- **Recommendation**: Disable multi-factor authentication requirement and exclude user from conditional access policies. Login with the user to Power BI dashboard to validate if user can successfully login to the application.
+- **Recommendation**: Disable multi-factor authentication requirement and exclude user from conditional access policies. Login with the user to Power BI dashboard to validate if user can successfully login to the application.
## Error code: Test connection failed - AASTS70002
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
This article outlines how to register a Power BI tenant in a **same-tenant scena
- Delegated authentication is the only supported authentication option if self-hosted integration runtime is used during the scan. - You can create only one scan for a Power BI data source that is registered in your Microsoft Purview account. - If Power BI dataset schema isn't shown after scan, it's due to one of the current limitations with [Power BI Metadata scanner](/power-bi/admin/service-admin-metadata-scanning).
+- Empty workspaces are skipped.
## Prerequisites
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using Azure IR and Managed Identity in public network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant ID is entered correctly during the registration.
-3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
-4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
-5. In Azure Active Directory tenant, create a security group.
-6. From Azure Active Directory tenant, make sure [Microsoft Purview account MSI is member of the new security group](#authenticate-to-power-bi-tenant-managed-identity-only).
-7. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
+1. Make sure Power BI tenant ID is entered correctly during the registration.
+1. Make sure your [Power BI metadata model is up to date by enabling metadata scanning](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning).
+1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
+1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
+1. In Azure Active Directory tenant, create a security group.
+1. From Azure Active Directory tenant, make sure [Microsoft Purview account MSI is member of the new security group](#authenticate-to-power-bi-tenant-managed-identity-only).
+1. On the Power BI Tenant Admin portal, validate if [Allow service principals to use read-only Power BI admin APIs](#associate-the-security-group-with-power-bi-tenant) is enabled for the new security group.
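Steps 5 and 6 in this checklist (create the security group and add the Microsoft Purview account MSI to it) can also be scripted with Az PowerShell. The sketch below is illustrative only: the group name is a placeholder, and it assumes the Purview managed identity's service principal shares the display name of the Purview account.

```azurepowershell-interactive
# Create a security group and add the Microsoft Purview managed identity to it.
# "purview-powerbi-scan" and "<purview-account-name>" are placeholder values.
$group = New-AzADGroup -DisplayName "purview-powerbi-scan" -MailNickname "purviewpowerbiscan"

# The Purview account's system-assigned managed identity appears as a service
# principal whose display name matches the Purview account name (assumption).
$purviewMsi = Get-AzADServicePrincipal -DisplayName "<purview-account-name>"

Add-AzADGroupMember -TargetGroupObjectId $group.Id -MemberObjectId $purviewMsi.Id
```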
# [Public access with Self-hosted IR](#tab/Scenario2) ### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in public network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant ID is entered correctly during the registration.
-3. From Azure portal, validate if Microsoft Purview account Network is set to public access.
-4. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
-5. Check your Azure Key Vault to make sure:
+1. Make sure Power BI tenant ID is entered correctly during the registration.
+1. Make sure your [Power BI metadata model is up to date by enabling metadata scanning](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning).
+1. From Azure portal, validate if Microsoft Purview account Network is set to public access.
+1. From Power BI tenant Admin Portal, make sure Power BI tenant is configured to allow public network.
+1. Check your Azure Key Vault to make sure:
1. There are no typos in the password. 2. Microsoft Purview Managed Identity has get/list access to secrets.
-6. Review your credential to validate:
+1. Review your credential to validate:
1. Client ID matches _Application (Client) ID_ of the app registration. 2. Username includes the user principal name such as `johndoe@contoso.com`.
-8. Validate Power BI admin user settings to make sure:
+1. Validate Power BI admin user settings to make sure:
1. User is assigned to the Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user. 3. If the user is recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate the session. 4. No MFA or Conditional Access policies are enforced on the user.
-9. Validate App registration settings to make sure:
+1. Validate App registration settings to make sure:
5. App registration exists in your Azure Active Directory tenant. 6. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** are set up with read for the following APIs: 1. Power BI Service Tenant.Read.All 2. Microsoft Graph openid 3. Microsoft Graph User.Read
-10. Validate Self-hosted runtime settings:
+1. Validate Self-hosted runtime settings:
1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM. 2. Network connectivity from Self-hosted runtime to Power BI tenant is enabled. 3. Network connectivity from Self-hosted runtime to Microsoft services is enabled.
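The Azure Key Vault item in this checklist can be verified or corrected with Az PowerShell. This is a minimal sketch that assumes the vault uses access policies (not Azure RBAC); the vault, account, and secret names are placeholders.

```azurepowershell-interactive
# Grant the Microsoft Purview managed identity get/list access to secrets
# (applies to vaults using access policies; placeholder names throughout).
$purviewMsi = Get-AzADServicePrincipal -DisplayName "<purview-account-name>"

Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" `
    -ObjectId $purviewMsi.Id `
    -PermissionsToSecrets Get,List

# Optionally confirm the secret exists and spot-check its name for typos.
Get-AzKeyVaultSecret -VaultName "<key-vault-name>" -Name "<secret-name>"
```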
Use any of the following deployment checklists during the setup or for troublesh
### Scan same-tenant Power BI using self-hosted IR and Delegated Authentication in a private network 1. Make sure Power BI and Microsoft Purview accounts are in the same tenant.
-2. Make sure Power BI tenant ID is entered correctly during the registration.
-3. Check your Azure Key Vault to make sure:
+1. Make sure Power BI tenant ID is entered correctly during the registration.
+1. Make sure your [Power BI metadata model is up to date by enabling metadata scanning](/power-bi/admin/service-admin-metadata-scanning-setup#enable-tenant-settings-for-metadata-scanning).
+1. Check your Azure Key Vault to make sure:
1. There are no typos in the password. 2. Microsoft Purview Managed Identity has get/list access to secrets.
-4. Review your credential to validate:
+1. Review your credential to validate:
1. Client ID matches _Application (Client) ID_ of the app registration. 2. Username includes the user principal name such as `johndoe@contoso.com`.
-5. Validate Power BI admin user to make sure:
+1. Validate Power BI admin user to make sure:
1. User is assigned to the Power BI Administrator role. 2. At least one [Power BI license](/power-bi/admin/service-admin-licensing-organization#subscription-license-types) is assigned to the user. 3. If the user is recently created, sign in with the user at least once to make sure the password is reset successfully and the user can successfully initiate the session. 4. No MFA or Conditional Access policies are enforced on the user.
-6. Validate Self-hosted runtime settings:
+1. Validate Self-hosted runtime settings:
1. Latest version of [Self-hosted runtime](https://www.microsoft.com/download/details.aspx?id=39717) is installed on the VM. 2. [JDK 8 or later](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed.
-7. Validate App registration settings to make sure:
+1. Validate App registration settings to make sure:
1. App registration exists in your Azure Active Directory tenant. 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** are set up with read for the following APIs: 1. Power BI Service Tenant.Read.All 2. Microsoft Graph openid 3. Microsoft Graph User.Read
-8. Review network configuration and validate if:
+1. Review network configuration and validate if:
1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed. 2. All required [private endpoints for Microsoft Purview](catalog-private-link-end-to-end.md) are deployed. 3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled through the private network.
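To sanity-check the private endpoint and connectivity items above from the self-hosted runtime VM, confirm that the relevant FQDNs resolve to private IP addresses and that port 443 is reachable. This is a minimal sketch; the host names are placeholders and depend on your Purview account and Power BI private link configuration.

```powershell
# Run on the self-hosted integration runtime VM (placeholder host names).
$hosts = @(
    '<purview-account-name>.purview.azure.com',   # Microsoft Purview endpoint
    'api.powerbi.com'                             # Power BI endpoint
)

foreach ($h in $hosts) {
    # With private endpoints and DNS configured, these should resolve to private IPs.
    Resolve-DnsName -Name $h

    # And port 443 should be reachable over the private network.
    (Test-NetConnection -ComputerName $h -Port 443).TcpTestSucceeded
}
```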
pytorch-enterprise Pte Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/pytorch-enterprise/pte-overview.md
Title: What is PyTorch Enterprise on Azure? description: This article describes the PyTorch Enterprise program. --++ Last updated 07/06/2021
pytorch-enterprise Support Boundaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/pytorch-enterprise/support-boundaries.md
Title: 'Support boundaries for PyTorch Enterprise on Azure' description: This article defines the support boundaries for PyTorch Enterprise. --++ Last updated 07/06/2021
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) b
## Next steps
-+ [Quickstart: Create a text translation and entity skillset](cognitive-search-quickstart-blob.md)
-+ [Quickstart: Create an OCR image skillset](cognitive-search-quickstart-ocr.md)
++ [Quickstart: Create a skillset for AI enrichment](cognitive-search-quickstart-blob.md) + [Tutorial: Learn about the AI enrichment REST APIs](cognitive-search-tutorial-blob.md) + [Skillset concepts](cognitive-search-working-with-skillsets.md) + [Knowledge store concepts](knowledge-store-concept-intro.md)
search Cognitive Search Concept Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-troubleshooting.md
Last updated 09/16/2021
This article contains a list of tips and tricks to keep you moving as you get started with AI enrichment capabilities in Azure Cognitive Search.
-If you haven't already, step through either the [Create a text translation and entity skillset](cognitive-search-quickstart-blob.md) or [Create an OCR image skillset](cognitive-search-quickstart-ocr.md) quickstarts for an introduction to enrichment of blob data.
+If you haven't already, step through [Quickstart: Create a skillset for AI enrichment](cognitive-search-quickstart-blob.md) for an introduction to enrichment of blob data.
## Tip 1: Start with a small dataset The best way to find issues quickly is to increase the speed at which you can fix issues. The best way to reduce the indexing time is by reducing the number of documents to be indexed.
search Cognitive Search Quickstart Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md
Title: "Quickstart: Text translation and entity recognition"
+ Title: "Quickstart: Create a skillset in the Azure portal"
-description: Use the Import Data wizard and AI cognitive skills to detect language, translate text, and recognize entities. The new fields created through AI become searchable text in an Azure Cognitive Search index.
+description: In this portal quickstart, use the Import Data wizard to add cognitive skills to an indexing pipeline to generate searchable text from images and unstructured documents. Skills in this quickstart include OCR, image analysis, and natural language processing.
+ Last updated 05/31/2022-
-# Quickstart: Translate text and recognize entities using the Import data wizard
+# Quickstart: Create an Azure Cognitive Search skillset in the Azure portal
-Learn how AI enrichment in Azure Cognitive Search adds language detection, text translation, and entity recognition to create searchable content in a search index.
+Learn how AI enrichment in Azure Cognitive Search adds Optical Character Recognition (OCR), image analysis, language detection, text translation, and entity recognition to create searchable content in a search index.
-In this quickstart, you'll run the **Import data** wizard to analyze French and Spanish descriptions of several national museums located in Spain. Output is a searchable index containing translated text and entities, queryable in the portal using [Search explorer](search-explorer.md).
+In this quickstart, you'll run the **Import data** wizard to apply skills that transform and enrich content during indexing. Output is a searchable index containing image text, translated text, and entities. Enriched content is queryable in the portal using [Search explorer](search-explorer.md).
To prepare, you'll create a few resources and upload sample files before running the wizard.
Before you begin, have the following prerequisites in place:
+ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
-+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-
- + Choose the same subscription if you want the wizard to find your storage account and set up the connection.
- + Choose the same region as Azure Cognitive Search to avoid bandwidth charges.
- + Choose the StorageV2 (general purpose V2).
++ Azure Storage account with Blob Storage. > [!NOTE] > This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
Before you begin, have the following prerequisites in place:
In the following steps, set up a blob container in Azure Storage to store heterogeneous content files.
-1. [Download sample data](https://github.com/Azure-Samples/azure-search-sample-data) from GitHub. There are multiple data sets. Use the files in the **spanish-museums** folder for this quickstart.
+1. [Download sample data](https://1drv.ms/f/s!As7Oy81M_gVPa-LCb5lC_3hbS-4) consisting of a small file set of different types. Unzip the files.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+
+1. [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-1. Upload the sample data to a blob container.
+ + Choose the same region as Azure Cognitive Search to avoid bandwidth charges.
- 1. Sign in to the [Azure portal](https://portal.azure.com/) and find your storage account.
- 1. In the left navigation pane, select **Containers**.
- 1. [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) named "spanish-museums". Use the default public access level.
- 1. In the "spanish-museums" container, select **Upload** to upload the files from your local **spanish-museums** folder.
+ + Choose the StorageV2 (general purpose V2).
-You should have 10 files containing French and Spanish descriptions of national museums located in Spain.
+1. In the Azure portal, open your Azure Storage account page and create a container. You can use the default public access level.
- :::image type="content" source="media/cognitive-search-quickstart-blob/museums-container.png" alt-text="List of docx files in a blob container" border="true":::
+1. In the container, select **Upload** to upload the sample files you downloaded in the first step. Notice that you have a wide range of content types, including images and application files that aren't full-text searchable in their native formats.
+
+ :::image type="content" source="media/cognitive-search-quickstart-blob/sample-data.png" alt-text="Screenshot of source files in Azure Blob Storage." border="false":::
You are now ready to move on to the Import data wizard.
You are now ready to move on to the Import data wizard.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, click **Import data** on the command bar to set up cognitive enrichment in four steps.
+1. [Find your search service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) and on the Overview page, select **Import data** on the command bar to set up cognitive enrichment in four steps.
- :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
+ :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command." border="true":::
### Step 1 - Create a data source
-1. In **Connect to your data**, choose **Azure Blob Storage**. Choose an existing connection to the storage account and container you created. Give the data source a name, and use default values for the rest.
+1. In **Connect to your data**, choose **Azure Blob Storage**.
+
+1. Choose an existing connection to the storage account and select the container you created. Give the data source a name, and use default values for the rest.
- :::image type="content" source="media/cognitive-search-quickstart-blob/connect-to-spanish-museums.png" alt-text="Azure blob configuration" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/blob-datasource.png" alt-text="Screenshot of the data source definition page." border="true":::
+
+ Continue to the next page.
### Step 2 - Add cognitive skills
-Next, configure AI enrichment to invoke language detection, text translation, and entity recognition.
+Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing.
-1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 10 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
+1. For this quickstart, we are using the **Free** Cognitive Services resource. The sample data consists of 14 files, so the free allotment of 20 transactions on Cognitive Services is sufficient for this quickstart.
- :::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/cog-search-attach.png" alt-text="Screenshot of the Attach Cognitive Services tab." border="true":::
-1. In the same page, expand **Add enrichments** and make five selections:
+1. Expand **Add enrichments** and make six selections.
- Choose entity recognition (people, organizations, locations)
+ Enable OCR to add image analysis skills to the wizard page.
- Choose language detection and text translation
+ Choose entity recognition (people, organizations, locations) and image analysis skills (tags, captions).
- :::image type="content" source="media/cognitive-search-quickstart-blob/select-entity-lang-enrichments.png" alt-text="Attach Cognitive Services select services for skillset" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/skillset.png" alt-text="Screenshot of the skillset definition page." border="true":::
- In blobs, the "Content" field contains the content of the file. In the sample data, the content is multiple paragraphs about a given museum, in either French or Spanish. The "Granularity" is the field itself. Some skills work better on smaller chunks of text, but for the skills in this quickstart, field granularity is sufficient.
+ Continue to the next page.
### Step 3 - Configure the index
-An index contains your searchable content and the **Import data** wizard can usually infer the schema for you by sampling the data. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo data set.
+An index contains your searchable content and the **Import data** wizard can usually create the schema for you by sampling the data source. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo Blob data set.
For this quickstart, the wizard does a good job setting reasonable defaults:
-+ Default fields are based on properties for existing blobs plus new fields to contain enrichment output (for example, `people`, `organizations`, `locations`). Data types are inferred from metadata and by data sampling.
++ Default fields are based on metadata properties for existing blobs, plus the new fields for the enrichment output (for example, `people`, `organizations`, `locations`). Data types are inferred from metadata and by data sampling. + Default document key is *metadata_storage_path* (selected because the field contains unique values).
-+ Default attributes are **Retrievable** and **Searchable**. **Searchable** allows full text search a field. **Retrievable** means field values can be returned in results. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset.
-
-+ Select the filterable checkbox for "Language". The wizard won't set the folder for you, but the ability to filter by language is useful in this demo given that there are multiple languages.
++ Default attributes are **Retrievable** and **Searchable**. **Searchable** allows full text search on a field. **Retrievable** means field values can be returned in results. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset. Select **Filterable** if you want to use fields in a filter expression.
- :::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-lang-entities.png" alt-text="Index fields" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/index-fields.png" alt-text="Screenshot of the index definition page." border="true":::
-Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
+Marking a field as **Retrievable** does not mean that the field *must* be present in the search results. You can control search results composition by using the **$select** query parameter to specify which fields to include.
+
+Continue to the next page.
### Step 4 - Configure the indexer
-The indexer is a high-level resource that drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, and of them is always an indexer that you can run repeatedly.
+The indexer drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, including an indexer that you can reset and run repeatedly.
-1. In the **Indexer** page, you can accept the default name and click the **Once** schedule option to run it immediately.
+1. In the **Indexer** page, you can accept the default name and select **Once** to run it immediately.
- :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-spanish-museum.png" alt-text="Indexer definition" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-def.png" alt-text="Screenshot of the indexer definition page." border="true":::
1. Click **Submit** to create and simultaneously run the indexer. ## Monitor status
-Cognitive skills indexing takes longer to complete than typical text-based indexing. To monitor progress, go to the Overview page and select the **Indexers** tab in the middle of page.
+Cognitive skills indexing takes longer to complete than typical text-based indexing, especially OCR and image analysis. To monitor progress, go to the Overview page and select **Indexers** in the middle of the page.
- :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-status-spanish-museums.png" alt-text="Indexer status" border="true":::
+ :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-notification.png" alt-text="Screenshot of the indexer status page." border="true":::
-To check details about execution status, select an indexer from the list.
+To check details about execution status, select an indexer from the list, and then select **Success** (or **Failed**) to view execution details.
+
+In this demo, there is one warning: a PNG file in the data source didn't provide a text input to Entity Recognition. The warning occurs because the upstream OCR skill didn't recognize any text in the image, so it couldn't pass a text input to the downstream Entity Recognition skill.
+
+Warnings are common in skillset execution. As you become familiar with how skills iterate over your data, you'll begin to notice patterns and learn which warnings are safe to ignore.
## Query in Search explorer
-After an index is created, you can run queries to return results. In the portal, use **Search explorer** for this task.
+After an index is created, run queries in **Search explorer** to return results.
-1. On the search service dashboard page, click **Search explorer** on the command bar.
+1. On the search service dashboard page, select **Search explorer** on the command bar.
1. Select **Change Index** at the top to select the index you created.
-1. In Query string, enter a search string to query the index, such as `search="picasso museum" &$select=people,organizations,locations,language,translated_text &$count=true &$filter=language eq 'fr'`, and then select **Search**.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer-query-string-spanish-museums.png" alt-text="Query string in search explorer" border="true":::
+1. Enter a search string to query the index, such as `search=Satya Nadella&$select=people,organizations,locations&$count=true`.
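Search explorer appends this query string to your index's docs endpoint for you. If you later want to issue the same query outside the portal, it maps to a REST call along the lines of the following sketch; the service, index, and key values are placeholders, and the `api-version` value is an assumption to confirm against the REST reference.

```powershell
# Rough equivalent of the Search explorer query as a REST call
# (placeholder names; api-version is an assumption).
$uri = 'https://<service-name>.search.windows.net/indexes/<index-name>/docs' +
       '?search=Satya%20Nadella&$select=people,organizations,locations&$count=true' +
       '&api-version=2020-06-30'

Invoke-RestMethod -Uri $uri -Headers @{ 'api-key' = '<query-api-key>' }
```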
-Results are returned as JSON, which can be verbose and hard to read, especially in large documents originating from Azure blobs. Some tips for searching in this tool include the following techniques:
+Results are returned as verbose JSON, which can be hard to read, especially in large documents. Some tips for searching in this tool include the following techniques:
-+ Append `$select` to specify which fields to include in results.
++ Append `$select` to limit the fields returned in results. + Use CTRL-F to search within the JSON for specific properties or terms.
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer-results-spanish-museums.png" alt-text="Search explorer example" border="true":::
+Query strings are case-sensitive, so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify the name and case.
+
+ :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer.png" alt-text="Screenshot of the Search explorer page." border="true":::
+
+## Takeaways
+
+You've now created your first skillset and learned important concepts useful for prototyping an enriched search solution using your own data.
+
+Some key concepts that we hope you picked up include the dependency on Azure data sources. A skillset is bound to an indexer, and indexers are Azure and source-specific. Although this quickstart uses Azure Blob Storage, other Azure data sources are possible. For more information, see [Indexers in Azure Cognitive Search](search-indexer-overview.md).
+
+Another important concept is that skills operate over content types, and when working with heterogeneous content, some inputs will be skipped. Also, large files or fields might exceed the indexer limits of your service tier. It's normal to see warnings when these events occur.
+
+Output is directed to a search index, and there is a mapping between name-value pairs created during indexing and individual fields in your index. Internally, the portal sets up [annotations](cognitive-search-concept-annotations-syntax.md) and defines a [skillset](cognitive-search-defining-skillset.md), establishing the order of operations and general flow. These steps are hidden in the portal, but when you start writing code, these concepts become important.
-Query strings are case-sensitive so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
+Finally, you learned that you can verify content by querying the index. In the end, what Azure Cognitive Search provides is a searchable index, which you can query using either the [simple](/rest/api/searchservice/simple-query-syntax-in-azure-search) or [fully extended query syntax](/rest/api/searchservice/lucene-query-syntax-in-azure-search). An index containing enriched fields is like any other. If you want to incorporate standard or [custom analyzers](search-analyzers.md), [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [synonyms](search-synonyms.md), [faceted navigation](search-faceted-navigation.md), geo-search, or any other Azure Cognitive Search feature, you can certainly do so.
## Clean up resources
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
-Cognitive Search has other built-in skills that can be exercised in the Import data wizard. As a next step, try the OCR and image analysis skills to create text-searchable content from image files.
+You can create skillsets using the portal, .NET SDK, or REST API. To further your knowledge, try the REST API using Postman and more sample data.
> [!div class="nextstepaction"]
-> [Quickstart: Use OCR and image analysis to create searchable content](cognitive-search-quickstart-ocr.md)
+> [Tutorial: Extract text and structure from JSON blobs using REST APIs ](cognitive-search-tutorial-blob.md)
search Cognitive Search Quickstart Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-ocr.md
- Title: "Quickstart: OCR and image analysis"-
-description: Use the Import Data wizard and AI cognitive skills to apply OCR and image analysis to identify and extract searchable text from image files.
----- Previously updated : 05/31/2022---
-# Quickstart: Apply OCR and image analysis using the Import data wizard
-
-Learn how AI enrichment in Azure Cognitive Search adds Optical Character Recognition (OCR) and image analysis to create searchable content from image files.
-
-In this quickstart, you'll run the **Import data** wizard to analyze visual content in JPG files. The content consists of photographs of signs. Output is a searchable index containing captions, tags, and text identified through OCR, all of which is queryable in the portal using [Search explorer](search-explorer.md).
-
-To prepare, you'll create a few resources and upload sample files before running the wizard.
-
-Prefer to start with code? Try the [.NET tutorial](cognitive-search-tutorial-blob-dotnet.md), [Python tutorial](cognitive-search-tutorial-blob-python.md), or [REST tutorial](cognitive-search-tutorial-blob-dotnet.md) instead.
-
-## Prerequisites
-
-Before you begin, have the following prerequisites in place:
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-
-+ Azure Cognitive Search . [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
-
-+ Azure Storage account with Blob Storage. [Create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) or [find an existing account](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-
- + Choose the same subscription if you want the wizard to find your storage account and set up the connection.
- + Choose the same region as Azure Cognitive Search to avoid bandwidth charges.
- + Choose the StorageV2 (general purpose V2).
-
-> [!NOTE]
-> This quickstart uses [Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) for the AI. Because the workload is so small, Cognitive Services is tapped behind the scenes for free processing for up to 20 transactions. You can complete this exercise without having to create a Cognitive Services resource.
-
-## Set up your data
-
-In the following steps, set up a blob container in Azure Storage to store heterogeneous content files.
-
-1. [Download sample data](https://github.com/Azure-Samples/azure-search-sample-data) from GitHub. There are multiple data sets. Use the files in the **unsplash-images\jpg-signs** folder for this quickstart.
-
-1. Upload the sample data to a blob container.
-
- 1. Sign in to the [Azure portal](https://portal.azure.com/) and find your storage account.
- 1. In the left navigation pane, select **Containers**.
- 1. [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) named "signs". Use the default public access level.
- 1. In the "signs" container, select **Upload** to upload the files from your local **unsplash-images\jpg-signs** folder.
-
-You should have 10 files containing photographs of signs.
-
-There is a second subfolder that includes landmark buildings. If you want to [attach a Cognitive Services key](cognitive-search-attach-cognitive-services.md), you can include these files as well to see how image analysis works over image files that don't include embedded text. The key is necessary for jobs that exceed the free allotment.
-
-You are now ready to move on the Import data wizard.
-
-## Run the Import data wizard
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, click **Import data** on the command bar to set up cognitive enrichment in four steps.
-
- :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-
-### Step 1 - Create a data source
-
-1. In **Connect to your data**, choose **Azure Blob Storage**. Choose an existing connection to the storage account and container you created. Give the data source a name, and use default values for the rest.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/connect-to-signs.png" alt-text="Azure blob configuration" border="true":::
-
-### Step 2 - Add cognitive skills
-
-Next, configure AI enrichment to invoke OCR and image analysis.
-
-1. For this quickstart, you can use the **Free** Cognitive Services resource. The sample data consists of 19 files, so the daily, per-indexer allotment of 20 free transactions on Cognitive Services is sufficient for this quickstart.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/free-enrichments.png" alt-text="Attach free Cognitive Services processing" border="true":::
-
-1. In the same page, expand **Add enrichments** and make tree selections:
-
- Enable OCR and merge all text into merged_content field.
-
- Choose "Generate tags from images" and "Generate captions from images".
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/select-ocr-image-enrichments.png" alt-text="Attach Cognitive Services select services for skillset" border="true":::
-
- For image analysis, images are split from text during document cracking. The "merged_content" field re-associates text and images in the AI enrichment pipeline.
-
-### Step 3 - Configure the index
-
-An index contains your searchable content and the **Import data** wizard can usually infer the schema for you by sampling the data. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo data set.
-
-For this quickstart, the wizard does a good job setting reasonable defaults:
-
-+ Default fields are based on properties for existing blobs plus new fields to contain enrichment output (for example, `text`, `layoutText`, `imageCaption`). Data types are inferred from metadata and by data sampling.
-
-+ Default document key is *metadata_storage_path* (selected because the field contains unique values).
-
-+ Default attributes are **Retrievable** and **Searchable**. **Searchable** allows full text search a field. **Retrievable** means field values can be returned in results. The wizard assumes you want these fields to be retrievable and searchable because you created them via a skillset.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/index-fields-ocr-images.png" alt-text="Index fields" border="true":::
-
-Marking a field as **Retrievable** doesn't mean that the field *must* be present in the search results. You can precisely control search results composition by using the **$select** query parameter to specify which fields to include. For text-heavy fields like `content`, the **$select** parameter is your solution for shaping manageable search results to the human users of your application, while ensuring client code has access to all the information it needs via the **Retrievable** attribute.
-
-### Step 4 - Configure the indexer
-
-The indexer is a high-level resource that drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, and of them is always an indexer that you can run repeatedly.
-
-1. In the **Indexer** page, you can accept the default name and click the **Once** schedule option to run it immediately.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-signs.png" alt-text="Indexer definition" border="true":::
-
-1. Click **Submit** to create and simultaneously run the indexer.
-
-## Monitor status
-
-Cognitive skills indexing takes longer to complete than typical text-based indexing. To monitor progress, go to the Overview page and select the **Indexers** tab in the middle of page.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/indexer-status-signs.png" alt-text="Indexer status" border="true":::
-
-To check details about execution status, select an indexer from the list.
-
-## Query in Search explorer
-
-After an index is created, you can run queries to return results. In the portal, use **Search explorer** for this task.
-
-1. On the search service dashboard page, click **Search explorer** on the command bar.
-
-1. Select **Change Index** at the top to select the index you created.
-
-1. In Query string, enter a search string to query the index, such as `search=sign&searchFields=imageTags&$select=text,imageCaption,imageTags&$count=true`, and then select **Search**.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer-query-string-signs.png" alt-text="Query string in search explorer" border="true":::
-
-Results are returned as JSON, which can be verbose and hard to read, especially in large documents originating from Azure blobs. Some tips for searching in this tool include the following techniques:
-
-+ Append `$select` to specify which fields to include in results.
-+ Append `searchField` to scope full text search to specific fields.
-+ Use CTRL-F to search within the JSON for specific properties or terms.
-
- :::image type="content" source="media/cognitive-search-quickstart-blob/search-explorer-results-signs.png" alt-text="Search explorer example" border="true":::
-
-Query strings are case-sensitive so if you get an "unknown field" message, check **Fields** or **Index Definition (JSON)** to verify name and case.
-
-## Clean up resources
-
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-
-You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-
-If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
-
-## Next steps
-
-Cognitive Search has other built-in skills that can be exercised in the Import data wizard. The next quickstart uses entity recognition, language detection, and text translation.
-
-> [!div class="nextstepaction"]
-> [Quickstart: Translate text and recognize entities using the Import data wizard](cognitive-search-quickstart-blob.md)
search Search Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md
Create your first search index using the **Import data** wizard and a built-in sample data source consisting of fictitious hotel data. The wizard guides you through the creation of a search index (hotels-sample-index) so that you can write interesting queries within minutes.
-Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see [text translation and entity skillset](cognitive-search-quickstart-blob.md) or [OCR image skillset](cognitive-search-quickstart-ocr.md) quickstarts.
+Although you won't use the options in this quickstart, the wizard includes a page for AI enrichment so that you can extract text and structure from image files and unstructured text. For a similar walkthrough that includes AI enrichment, see + [Quickstart: Create a skillset](cognitive-search-quickstart-blob.md).
## Prerequisites
search Search Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-powershell.md
New-AzSearchService -ResourceGroupName <resource-group-name> `
-IdentityType SystemAssigned ```
+### Create an S3HD service
+
+To create an [S3HD](./search-sku-tier.md#tier-descriptions) service, use a combination of `-Sku` and `-HostingMode`. Set `-Sku` to `Standard3` and `-HostingMode` to `HighDensity`.
+
+```azurepowershell-interactive
+New-AzSearchService -ResourceGroupName <resource-group-name> `
+ -Name <search-service-name> `
+ -Sku Standard3 `
+ -Location "West US" `
+ -PartitionCount 1 -ReplicaCount 3 `
+ -HostingMode HighDensity
+```
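After the command completes, you can optionally confirm that the tier and hosting mode were applied. A brief check, assuming the `Az.Search` module is installed and that the returned object exposes `Sku` and `HostingMode` properties:

```azurepowershell-interactive
# Confirm the SKU and hosting mode of the new service.
Get-AzSearchService -ResourceGroupName <resource-group-name> -Name <search-service-name> |
    Select-Object Name, Sku, HostingMode
```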
+ ## Create a service with a private endpoint [Private Endpoints](../private-link/private-endpoint-overview.md) for Azure Cognitive Search allow a client on a virtual network to securely access data in a search index over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the [virtual network address space](../virtual-network/ip-services/private-ip-addresses.md) for your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. For more details, see
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
In addition to the recommended actions listed above, we recommend that you consi
## Next steps -- **Get help from inside Microsoft products**, including the Microsoft 365 Defender portal, Microsoft 365 compliance center, and Office 365 Security & Compliance Center by selecting the **Help** (**?**) button in the top navigation bar.
+- **Get help from inside Microsoft products**, including the Microsoft 365 Defender portal, Microsoft Purview compliance portal, and Office 365 Security & Compliance Center by selecting the **Help** (**?**) button in the top navigation bar.
- **For deployment assistance**, contact us at [FastTrack](https://fasttrack.microsoft.com)
service-fabric Service Fabric Local Linux Cluster Windows Wsl2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-local-linux-cluster-windows-wsl2.md
For manual installation of the Service Fabric runtime and common SDK, follow the
5. Inside genie namespace, SF SDK can also be installed as mentioned under Script Installation or Manual Installation steps in [Set up a linux local cluster](service-fabric-get-started-linux.md)
-6. Provide sudo privileges to current user by making an entry (e.g. <USERNAME> ALL = (ALL) NOPASSWD:ALL) in /etc/sudoers
+6. Provide sudo privileges to the current user by adding an entry such as `<USERNAME> ALL = (ALL) NOPASSWD:ALL` in /etc/sudoers
## Set up a local cluster It's recommended to manage Service Fabric inside the WSL2 VM from the Windows host
sudo ./SetupServiceFabric.sh --servicefabricruntime=/mnt/c/Users/testuser/Downlo
If using Ubuntu-18.04 distribution, SF data is located at \\wsl$\Ubuntu-18.04\home\sfuser\sfdevcluster from Windows host. ## Next steps
-* Learn about [Service Fabric support options](service-fabric-support.md)
+* Learn about [Service Fabric support options](service-fabric-support.md)
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Previously updated : 05/12/2022 Last updated : 05/31/2022
Microsoft Defender for Storage is now enabled for all storage accounts in this s
### [Portal](#tab/azure-portal) 1. Launch the [Azure portal](https://portal.azure.com/).
-1. Navigate to your storage account. Under **Settings**, select **Advanced security**.
+1. Navigate to your storage account. Under **Security + networking**, select **Security**.
1. Select **Enable Microsoft Defender for Storage**.
- :::image type="content" source="media/azure-defender-storage-configure/enable-azure-defender-portal.png" alt-text="Screenshot showing how to enable an account for Microsoft Defender for Storage.":::
+ :::image type="content" source="media/azure-defender-storage-configure/enable-azure-defender-portal.png" alt-text="Screenshot showing how to enable a storage account for Microsoft Defender for Storage.":::
Microsoft Defender for Storage is now enabled for this storage account.
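If you'd rather script the per-account setting than use the portal, the `Az.Security` module exposes advanced threat protection cmdlets. The following is a minimal sketch with a placeholder resource ID; treat the cmdlet choice as an assumption to confirm against the PowerShell guidance for Defender for Storage.

```azurepowershell-interactive
# Enable Microsoft Defender for Storage on a single storage account
# (placeholder resource ID; requires the Az.Security module).
$storageId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"

Enable-AzSecurityAdvancedThreatProtection -ResourceId $storageId

# Confirm the setting.
Get-AzSecurityAdvancedThreatProtection -ResourceId $storageId
```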
storage File Sync Cloud Tiering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-cloud-tiering-overview.md
Enabling proactive recalling may also result in increased bandwidth usage on the
:::image type="content" source="media/storage-sync-files-deployment-guide/proactive-download.png" alt-text="An image showing the Azure file share download behavior for a server endpoint currently in effect and a button to open a menu that allows to change it.":::
-For more information on proactive recall, see [Deploy Azure File Sync](file-sync-deployment-guide.md#proactively-recall-new-and-changed-files-from-an-azure-file-share).
+For more information on proactive recall, see [Deploy Azure File Sync](file-sync-deployment-guide.md#optional-proactively-recall-new-and-changed-files-from-an-azure-file-share).
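For reference, the same proactive download behavior can be set outside the portal with the `Az.StorageSync` module. This is a minimal sketch with placeholder resource names; the `-LocalCacheMode` parameter and its `DownloadNewAndModifiedFiles` value are assumptions to confirm against the deployment guide linked above.

```azurepowershell-interactive
# Turn on proactive download of new and changed files for a server endpoint
# (placeholder resource names; requires the Az.StorageSync module).
Set-AzStorageSyncServerEndpoint `
    -ResourceGroupName "<resource-group-name>" `
    -StorageSyncServiceName "<storage-sync-service-name>" `
    -SyncGroupName "<sync-group-name>" `
    -Name "<server-endpoint-name>" `
    -LocalCacheMode DownloadNewAndModifiedFiles
```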
## Tiered vs. locally cached file behavior
storage File Sync Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-deployment-guide.md
We strongly recommend that you read [Planning for an Azure Files deployment](../
- [Create a file share](../files/storage-how-to-create-file-share.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) for a step-by-step description of how to create a file share. 2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). 3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
-4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
> [!NOTE] > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
We strongly recommend that you read [Planning for an Azure Files deployment](../
2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). 3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
-4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
> [!NOTE] > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
We strongly recommend that you read [Planning for an Azure Files deployment](../
2. **SMB security settings** on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). 3. At least one supported instance of **Windows Server** to sync with Azure File Sync. For more information about supported versions of Windows Server and recommended system resources, see [Windows file server considerations](file-sync-planning.md#windows-file-server-considerations).
-4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+4. **Optional**: If you intend to use Azure File Sync with a Windows Server Failover Cluster, the **File Server for general use** role must be configured prior to installing the Azure File Sync agent on each node in the cluster. For more information on how to configure the **File Server for general use** role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
> [!NOTE] > The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks. See [Failover Clustering](file-sync-planning.md#failover-clustering) for Azure File Sync.
A server endpoint represents a specific location on a registered server, such as
[!INCLUDE [storage-files-sync-create-server-endpoint](../../../includes/storage-files-sync-create-server-endpoint.md)]
-## Configure firewall and virtual network settings
+## Optional: Configure firewall and virtual network settings
### Portal
-If you'd like to configure your Azure File sync to work with firewall and virtual network settings, do the following:
+If you'd like to configure Azure File Sync to work with firewall and virtual network settings, do the following:
1. From the Azure portal, navigate to the storage account you want to secure. 1. Select **Networking** on the left menu.
If you'd like to configure your Azure File sync to work with firewall and virtua
![Configuring firewall and virtual network settings to work with Azure File sync](media/storage-sync-files-deployment-guide/firewall-and-vnet.png)
-## SMB over QUIC on a server endpoint
-Although the Azure file share (cloud endpoint) is a full SMB endpoint capable of direct access from the cloud or on-premises, customers that desire accessing the file share data cloud-side often deploy an Azure File Sync server endpoint on a Windows Server instance hosted on an Azure VM. The most common reason to have an additional server endpoint rather than accessing the Azure file share directly is that changes made directly on the Azure file share may take up to 24 hours or longer to be discovered by Azure File Sync, while changes made on a server endpoint are discovered nearly immediately and synced to all other server and cloud-endpoints.
-
-This configuration is extremely common in environments where a substantial portion of users are not on-premises, such as when users are working from home or from the road. Traditionally, accessing any file share with SMB over the public internet, including both file shares hosted on Windows File Server or on Azure Files directly, is very difficult since most organizations and ISPs block port 445. You can work around this limitation with [private endpoints and VPNs](file-sync-networking-overview.md#private-endpoints), however Windows Server 2022 Azure Edition provides an additional access strategy: SMB over the QUIC transport protocol.
-
-SMB over QUIC communicates over port 443, which most organizations and ISPs have open to support HTTPS traffic. Using SMB over QUIC greatly simplifies the networking required to access a file share hosted on an Azure File Sync server endpoint for clients using Windows 11 or greater. To learn more about how to setup and configure SMB over QUIC on Windows Server Azure Edition, see [SMB over QUIC for Windows File Server](/windows-server/storage/file-server/smb-over-quic).
-
-## Onboarding with Azure File Sync
-
-The recommended steps to onboard on Azure File Sync for the first time with zero downtime while preserving full file fidelity and access control list (ACL) are as follows:
-
-1. Deploy a Storage Sync Service.
-1. Create a sync group.
-1. Install Azure File Sync agent on the server with the full data set.
-1. Register that server and create a server endpoint on the share.
-1. Let sync do the full upload to the Azure file share (cloud endpoint).
-1. After the initial upload is complete, install Azure File Sync agent on each of the remaining servers.
-1. Create new file shares on each of the remaining servers.
-1. Create server endpoints on new file shares with cloud tiering policy, if desired. (This step requires additional storage to be available for the initial setup.)
-1. Let Azure File Sync agent do a rapid restore of the full namespace without the actual data transfer. After the full namespace sync, sync engine will fill the local disk space based on the cloud tiering policy for the server endpoint.
-1. Ensure sync completes and test your topology as desired.
-1. Redirect users and applications to this new share.
-1. You can optionally delete any duplicate shares on the servers.
-
-If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure files shares. This approach is suggested, if and only if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
-
-1. Ensure that data on any of the servers can't change during the onboarding process.
-1. Pre-seed Azure file shares with the server data using any data transfer tool over the SMB. Robocopy, for example. You can also use AzCopy over REST. Be sure to use AzCopy with the appropriate switches to preserve ACLs timestamps and attributes.
-1. Create Azure File Sync topology with the desired server endpoints pointing to the existing shares.
-1. Let sync finish reconciliation process on all endpoints.
-1. Once reconciliation is complete, you can open shares for changes.
-
-Currently, pre-seeding approach has a few limitations -
-- Data changes on the server before the sync topology is fully up and running can cause conflicts on the server endpoints.
-- After the cloud endpoint is created, Azure File Sync runs a process to detect the files in the cloud before starting the initial sync. The time taken to complete this process varies depending on the various factors like network speed, available bandwidth, and number of files and folders. For the rough estimation in the preview release, detection process runs approximately at 10 files/sec. Hence, even if pre-seeding runs fast, the overall time to get a fully running system may be significantly longer when data is pre-seeded in the cloud.-
-## Self-service restore through Previous Versions and VSS (Volume Shadow Copy Service)
+## Optional: Self-service restore through Previous Versions and VSS (Volume Shadow Copy Service)
Previous Versions is a Windows feature that allows you to use server-side VSS snapshots of a volume to present restorable versions of a file to an SMB client. This enables a powerful scenario, commonly referred to as self-service restore, where information workers restore files directly instead of depending on restores performed by an IT admin.
Enable-StorageSyncSelfServiceRestore [-DriveLetter] <string> [[-Force]]
VSS snapshots are taken of an entire volume. By default, up to 64 snapshots can exist for a given volume, provided there is enough space to store the snapshots. VSS handles this automatically. The default snapshot schedule takes two snapshots per day, Monday through Friday. That schedule is configurable via a Windows Scheduled Task. The above PowerShell cmdlet does two things:
-1. It configures Azure File Syncs cloud tiering on the specified volume to be compatible with previous versions and guarantees that a file can be restored from a previous version, even if it was tiered to the cloud on the server.
+1. It configures Azure File Sync's cloud tiering on the specified volume to be compatible with previous versions and guarantees that a file can be restored from a previous version, even if it was tiered to the cloud on the server.
1. It enables the default VSS schedule. You can then decide to modify it later.
> [!NOTE]
The default maximum number of VSS snapshots per volume (64) as well as the defau
If a maximum of 64 VSS snapshots per volume is not the correct setting for you, [change that value via a registry key](/windows/win32/backup/registry-keys-for-backup-and-restore#maxshadowcopies). For the new limit to take effect, re-run the cmdlet to enable previous version compatibility on every volume where it was previously enabled, with the -Force flag so that the new maximum number of VSS snapshots per volume is taken into account. This results in a newly calculated number of compatible days. Note that this change only takes effect on newly tiered files and overwrites any customizations you might have made to the VSS schedule.
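Based on the cmdlet syntax shown earlier, a minimal sketch of enabling compatibility and later re-running the cmdlet with -Force might look like the following; the drive letter is a placeholder for the volume that hosts your server endpoint.

```powershell
# Requires the StorageSync server cmdlets that ship with the Azure File Sync agent.
# Enable previous version compatibility on the volume hosting the server endpoint
# (the drive letter is an example placeholder).
Enable-StorageSyncSelfServiceRestore -DriveLetter "D:"

# After changing the MaxShadowCopies registry value, re-run the cmdlet with -Force
# so the new maximum number of VSS snapshots per volume is taken into account.
Enable-StorageSyncSelfServiceRestore -DriveLetter "D:" -Force
```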
-VSS snapshots by default can consume up to 10% of the volume space. To adjust the amount of storage that can be used for VSS snapshots, use the [vssadmin resize shadowstorage](https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc788050(v=ws.11)) command.
+VSS snapshots by default can consume up to 10% of the volume space. To adjust the amount of storage that can be used for VSS snapshots, use the [vssadmin resize shadowstorage](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc788050(v=ws.11)) command.
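As an illustration, the following command caps VSS snapshot storage for a volume; the drive letter and percentage are placeholders to adjust for your environment.

```powershell
# Limit VSS snapshot storage for volume D: to 20% of the volume.
# Run from an elevated prompt; the values shown are examples only.
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=20%
```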
<a id="proactive-recall"></a>
-## Proactively recall new and changed files from an Azure file share
+## Optional: Proactively recall new and changed files from an Azure file share
-With agent version 11, a new mode becomes available on a server endpoint. This mode allows globally distributed companies to have the server cache in a remote region pre-populated even before local users are accessing any files. When enabled on a server endpoint, this mode will cause this server to recall files that have been created or changed in the Azure file share.
+Azure File Sync has a mode that allows globally distributed companies to have the server cache in a remote region pre-populated even before local users access any files. When enabled on a server endpoint, this mode causes the server to recall files that have been created or changed in the Azure file share.
### Scenario
Set-AzStorageSyncServerEndpoint -InputObject <PSServerEndpoint> -LocalCacheMode
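Building on the cmdlet signature above, a hedged sketch of turning on proactive recall for an existing server endpoint could look like the following; the resource names are placeholders, and the `DownloadNewAndModifiedFiles` mode value and `Get-AzStorageSyncServerEndpoint` parameters are assumptions to verify against the Az.StorageSync reference for your module version.

```powershell
# Retrieve the server endpoint, then switch its local cache mode so new and
# changed files in the Azure file share are proactively recalled.
# Resource names are placeholders; verify parameter names and values for your deployment.
$serverEndpoint = Get-AzStorageSyncServerEndpoint `
    -ResourceGroupName "myResourceGroup" `
    -StorageSyncServiceName "myStorageSyncService" `
    -SyncGroupName "mySyncGroup" `
    -Name "myServerEndpoint"

Set-AzStorageSyncServerEndpoint -InputObject $serverEndpoint `
    -LocalCacheMode DownloadNewAndModifiedFiles
```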
+## Optional: SMB over QUIC on a server endpoint
+Although the Azure file share (cloud endpoint) is a full SMB endpoint capable of direct access from the cloud or on-premises, customers that want to access the file share data cloud-side often deploy an Azure File Sync server endpoint on a Windows Server instance hosted on an Azure VM. The most common reason to have an additional server endpoint rather than accessing the Azure file share directly is that changes made directly on the Azure file share may take up to 24 hours or longer to be discovered by Azure File Sync, while changes made on a server endpoint are discovered nearly immediately and synced to all other server endpoints and cloud endpoints.
+
+This configuration is extremely common in environments where a substantial portion of users are not on-premises, such as when users are working from home or from the road. Traditionally, accessing any file share with SMB over the public internet, whether the share is hosted on a Windows file server or directly on Azure Files, is very difficult because most organizations and ISPs block port 445. You can work around this limitation with [private endpoints and VPNs](file-sync-networking-overview.md#private-endpoints); however, Windows Server 2022 Azure Edition provides an additional access strategy: SMB over the QUIC transport protocol.
+
+SMB over QUIC communicates over port 443, which most organizations and ISPs leave open to support HTTPS traffic. Using SMB over QUIC greatly simplifies the networking required to access a file share hosted on an Azure File Sync server endpoint for clients running Windows 11 or later. To learn more about how to set up and configure SMB over QUIC on Windows Server Azure Edition, see [SMB over QUIC for Windows File Server](/windows-server/storage/file-server/smb-over-quic).
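For instance, on a Windows 11 client, mapping a drive over QUIC is typically a one-liner; the server and share names below are placeholders, and you should confirm the exact switch syntax in the SMB over QUIC article linked above before relying on it.

```powershell
# Map a drive letter over SMB over QUIC (which runs on TCP port 443) from a Windows 11 client.
# Server and share names are placeholders; confirm the switch syntax in the linked documentation.
NET USE Z: \\fs1.contoso.com\data /TRANSPORT:QUIC
```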
+
+## Onboarding with Azure File Sync
+
+The recommended steps to onboard to Azure File Sync for the first time with zero downtime, while preserving full file fidelity and access control lists (ACLs), are as follows (a PowerShell sketch of the first few steps appears after this list):
+
+1. Deploy a Storage Sync Service.
+1. Create a sync group.
+1. Install the Azure File Sync agent on the server with the full data set.
+1. Register that server and create a server endpoint on the share.
+1. Let sync do the full upload to the Azure file share (cloud endpoint).
+1. After the initial upload is complete, install the Azure File Sync agent on each of the remaining servers.
+1. Create new file shares on each of the remaining servers.
+1. Create server endpoints on new file shares with cloud tiering policy, if desired. (This step requires additional storage to be available for the initial setup.)
+1. Let the Azure File Sync agent do a rapid restore of the full namespace without the actual data transfer. After the full namespace sync, the sync engine fills the local disk space based on the cloud tiering policy for the server endpoint.
+1. Ensure sync completes and test your topology as desired.
+1. Redirect users and applications to this new share.
+1. You can optionally delete any duplicate shares on the servers.
+
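The first few steps above can also be scripted. The sketch below uses the Az.StorageSync PowerShell module with placeholder resource names; it's illustrative only, and parameter names should be checked against the module reference for your agent and module versions.

```powershell
# Illustrative only: deploy a Storage Sync Service, create a sync group and
# cloud endpoint, then register this server and add a server endpoint.
# All names, paths, and IDs are placeholders.
$rg = "myResourceGroup"

$syncService = New-AzStorageSyncService -ResourceGroupName $rg `
    -Name "myStorageSyncService" -Location "westus2"

$syncGroup = New-AzStorageSyncGroup -ParentObject $syncService -Name "mySyncGroup"

New-AzStorageSyncCloudEndpoint -ParentObject $syncGroup -Name "myCloudEndpoint" `
    -StorageAccountResourceId "<storage-account-resource-id>" `
    -AzureFileShareName "myfileshare"

# Run on the server after installing the Azure File Sync agent.
$registeredServer = Register-AzStorageSyncServer -ParentObject $syncService

New-AzStorageSyncServerEndpoint -ParentObject $syncGroup -Name "myServerEndpoint" `
    -ServerResourceId $registeredServer.ResourceId -ServerLocalPath "D:\Data"
```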
+If you don't have extra storage for initial onboarding and would like to attach to the existing shares, you can pre-seed the data in the Azure file shares. This approach is suggested only if you can accept downtime and absolutely guarantee no data changes on the server shares during the initial onboarding process.
+
+1. Ensure that data on any of the servers can't change during the onboarding process.
+1. Pre-seed the Azure file shares with the server data using any data transfer tool that works over SMB, such as Robocopy (see the copy sketch after this list). You can also use AzCopy over REST. Be sure to use AzCopy with the appropriate switches to preserve ACLs, timestamps, and attributes.
+1. Create Azure File Sync topology with the desired server endpoints pointing to the existing shares.
+1. Let sync finish the reconciliation process on all endpoints.
+1. Once reconciliation is complete, you can open shares for changes.
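For the copy in step 2, a hedged Robocopy sketch is shown below; the paths are placeholders, the command assumes the Azure file share is reachable over SMB (port 445) and you've already authenticated to it, and the switch selection is one common choice rather than the only valid one.

```powershell
# Illustrative pre-seed copy: mirror the server share into the Azure file share
# while preserving data, attributes, timestamps, security (ACLs), and owner info.
# Paths are placeholders; review switch choices for your environment.
robocopy "D:\ServerShare" "\\mystorageaccount.file.core.windows.net\myfileshare" `
    /MIR /COPY:DATSO /DCOPY:DAT /MT:16 /R:2 /W:1
```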
+
+Currently, the pre-seeding approach has a few limitations:
+- Data changes on the server before the sync topology is fully up and running can cause conflicts on the server endpoints.
+- After the cloud endpoint is created, Azure File Sync runs a process to detect the files in the cloud before starting the initial sync. The time taken to complete this process varies depending on factors like network speed, available bandwidth, and the number of files and folders. As a rough estimate for the preview release, the detection process runs at approximately 10 files per second. Hence, even if pre-seeding runs fast, the overall time to get a fully running system may be significantly longer when data is pre-seeded in the cloud.
## Migrate a DFS Replication (DFS-R) deployment to Azure File Sync
To migrate a DFS-R deployment to Azure File Sync:
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
If you enable cloud tiering, don't implement an on-premises backup solution. Wit
If you decide to use an on-premises backup solution, backups should be performed on a server in the sync group with cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will sync to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the cloud endpoint or other server endpoints.
-[Volume Shadow Copy Service (VSS) snapshots](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service) (including the **Previous Versions** tab) are supported on volumes with cloud tiering enabled. This allows you to perform self-service restores instead of relying on an admin to perform restores for you. However, you must enable previous version compatibility through PowerShell, which will increase your snapshot storage costs. VSS snapshots don't protect against disasters on the server endpoint itself, so they should only be used alongside cloud-side backups. For details, see [Self Service restore through Previous Versions and VSS](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+[Volume Shadow Copy Service (VSS) snapshots](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service) (including the **Previous Versions** tab) are supported on volumes with cloud tiering enabled. This allows you to perform self-service restores instead of relying on an admin to perform restores for you. However, you must enable previous version compatibility through PowerShell, which will increase your snapshot storage costs. VSS snapshots don't protect against disasters on the server endpoint itself, so they should only be used alongside cloud-side backups. For details, see [Self Service restore through Previous Versions and VSS](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## Data redundancy
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 05/27/2022 Last updated : 06/01/2022
We'll use an example to illustrate how to estimate the amount of free space woul
In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of space for this namespace. Add this amount to any additional free space that is desired in order to figure out how much free space is required for this disk.
### Failover Clustering
-1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option. For more information on how to configure the "File Server for general use" role on a Failover Cluster, see [Deploying a two-node clustered file server](https://docs.microsoft.com/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
+1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option. For more information on how to configure the "File Server for general use" role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server).
2. The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks.
3. Failover Clustering is not supported on "Scale-Out File Server for application data" (SOFS) or on Clustered Shared Volumes (CSVs) or local disks.
When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup
Note the volume savings only apply to the server; your data in the Azure file share will not be deduped.
> [!Note]
-> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062 - October 2019](https://support.microsoft.com/help/4520062) or a later monthly rollup update must be installed and Azure File Sync agent version 12.0.0.0 or newer is required.
+> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062 - October 2019](https://support.microsoft.com/help/4520062) or a later monthly rollup update must be installed.
**Windows Server 2012 R2**
Azure File Sync does not support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.
If you have an existing Windows file server 2012R2 or newer, Azure File Sync can
Check out the [Azure File Sync and Azure file share migration overview](../files/storage-files-migration-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json) article where you can find detailed guidance for your scenario.
## Antivirus
-Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. In versions 4.0 and above of the Azure File Sync agent, tiered files have the secure Windows attribute FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS set. We recommend consulting with your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
+Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. Tiered files have the secure Windows attribute FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS set, so we recommend consulting with your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
Microsoft's in-house antivirus solutions, Windows Defender and System Center Endpoint Protection (SCEP), both automatically skip reading files that have this attribute set. We have tested them and identified one minor issue: when you add a server to an existing sync group, files smaller than 800 bytes are recalled (downloaded) on the new server. These files will remain on the new server and will not be tiered because they do not meet the tiering size requirement (>64 KiB).
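If you want to confirm which files an exclusion should apply to, the attribute can be inspected from PowerShell. The sketch below is illustrative: it checks for the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS flag (0x00400000) on files under a placeholder path.

```powershell
# List files under a path that carry FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS
# (0x00400000), i.e. tiered files that antivirus should skip reading.
# The path is a placeholder; -Force includes hidden and system files.
$RecallOnDataAccess = 0x00400000

Get-ChildItem -Path "D:\ServerEndpoint" -Recurse -File -Force |
    Where-Object { ([int]$_.Attributes -band $RecallOnDataAccess) -ne 0 } |
    Select-Object FullName, Attributes
```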
If you prefer to use an on-premises backup solution, backups should be performed
> Bare-metal (BMR) restore can cause unexpected results and is not currently supported.
> [!Note]
-> VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+> VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
## Data Classification
If you have data classification software installed, enabling cloud tiering may result in increased cost for two reasons:
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (loc
| 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of supported solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. |
| 0x80c80283 | -2134375805 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | Azure File Sync does not support server endpoints on DFS-R read-only replication folders. See the [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. |
| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. The file will be deleted once all open file handles are closed. |
-| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file cannot be synced because the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file cannot be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. |
| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file cannot be synced due to an unexpected error. | If the error persists for several days, please open a support case. |
| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file cannot be synced because it's in use. The file will be synced when it's no longer in use. | No action required. |
This error occurs because the Azure File Sync agent cannot access the Azure file
1. [Verify the storage account exists.](#troubleshoot-storage-account)
2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
3. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac)
-4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
<a id="-2134351804"></a>**Sync failed because the request is not authorized to perform this operation.**
This error occurs because the Azure File Sync agent is not authorized to access
1. [Verify the storage account exists.](#troubleshoot-storage-account)
2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share)
-3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
4. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac) <a id="-2134364064"></a><a id="cannot-resolve-storage"></a>**The storage account name used could not be resolved.**
This error occurs because the Azure File Sync agent is not authorized to access
Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 443
```
2. [Verify the storage account exists.](#troubleshoot-storage-account)
-3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
> [!Note]
> Once network connectivity to the Azure File Sync service is restored, sync may not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location.
This error occurs because the Azure File Sync agent is not authorized to access
| **Remediation required** | Yes |

1. [Verify the storage account exists.](#troubleshoot-storage-account)
-2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
<a id="-2134364014"></a>**Sync failed due to storage account locked.**
This error occurs when the Azure subscription is suspended. Sync will be reenabl
| **Error string** | ECS_E_SERVER_BLOCKED_BY_NETWORK_ACL |
| **Remediation required** | Yes |
-This error occurs when the Azure file share is inaccessible because of a storage account firewall or because the storage account belongs to a virtual network. Verify the firewall and virtual network settings on the storage account are configured properly. For more information, see [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings).
+This error occurs when the Azure file share is inaccessible because of a storage account firewall or because the storage account belongs to a virtual network. Verify the firewall and virtual network settings on the storage account are configured properly. For more information, see [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings).
<a id="-2134375911"></a>**Sync failed due to a problem with the sync database.**
Verify you have the latest Azure File Sync agent version installed and give the
| **Error string** | ECS_E_MGMT_STORAGEACLSBYPASSNOTSET |
| **Remediation required** | Yes |
-This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings) section in the deployment guide.
+This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception is not checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide.
<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.**
This error occurs if the firewall and virtual network settings are enabled on th
This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account does not have permissions to the System Volume Information folder on the volume where the server endpoint is located. Note, if individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section.

1. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).
-2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings)
+2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings)
3. Verify the **NT AUTHORITY\SYSTEM** account has permissions to the System Volume Information folder on the volume where the server endpoint is located by performing the following steps:
    a. Download the [Psexec](/sysinternals/downloads/psexec) tool.
If files fail to be recalled:
| 0x80070079 | -2147024775 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. |
| 0x80070036 | -2147024842 | ERROR_NETWORK_BUSY | The file failed to recall due to a network issue. | If the error persists, check network connectivity to the Azure file share. |
| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). |
-| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#configure-firewall-and-virtual-network-settings) section in the deployment guide. |
+| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. |
| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). |
| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](?tabs=portal1%252cazure-portal#troubleshoot-rbac). |
| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share is not accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. |
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
There are two main types of storage accounts for Azure Files:
| Maximum size of a file share | <ul><li>100 TiB, with large file share feature enabled<sup>2</sup></li><li>5 TiB, default</li></ul> | 100 TiB |
| Maximum number of files in a file share | No limit | No limit |
| Maximum request rate (Max IOPS) | <ul><li>20,000, with large file share feature enabled<sup>2</sup></li><li>1,000 or 100 requests per 100 ms, default</li></ul> | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 100,000</li><li>IOPS bursting: Max (10000, 3x IOPS per GiB), up to 100,000</li></ul> |
-| Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB) |
+| Throughput (ingress + egress) for a single file share (MiB/sec) | <ul><li>Up to 300 MiB/sec, with large file share feature enabled<sup>2</sup></li><li>Up to 60 MiB/sec, default</li></ul> | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) |
| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
| Maximum object name length (total pathname including all directories and filename) | 2,048 characters | 2,048 characters |
| Maximum individual pathname component length (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
description: Learn how to interpret the provisioned and pay-as-you-go billing mo
Previously updated : 4/19/2022 Last updated : 06/02/2022
Azure Files provides two distinct billing models: provisioned and pay-as-you-go.
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/m5_-GsKv4-o" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
:::column-end:::
:::column:::
- This video is an interview that discusses the basics of the Azure Files billing model, including how to optimize Azure file shares to achieve the lowest costs possible and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
+ This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize Azure file shares to achieve the lowest costs possible and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
:::column-end:::
:::row-end:::
The following table shows how common operating systems measure and label storage
| Linux distributions | Commonly base-2, some software may use base-10 | Inconsistent labeling, alignment between measurement and labeling depends on the software package. |
| macOS, iOS, and iPad OS | Base-10 | [Consistently labels as base-10](https://support.apple.com/HT201402). |
-Check with your operating system vendor if your operating system is not listed.
+Check with your operating system vendor if your operating system isn't listed.
## File share total cost of ownership checklist
-If you are migrating to Azure Files from on-premises or comparing Azure Files to other cloud storage solutions, you should consider the following factors to ensure a fair, apples-to-apples comparison:
+If you're migrating to Azure Files from on-premises or comparing Azure Files to other cloud storage solutions, you should consider the following factors to ensure a fair, apples-to-apples comparison:
-- **How do you pay for storage, IOPS, and bandwidth?** With Azure Files, the billing model you use depends on whether you are deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage, such as price determinism and simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning.
+- **How do you pay for storage, IOPS, and bandwidth?** With Azure Files, the billing model you use depends on whether you're deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage, such as price determinism and simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning.
-- **Are there any methods to optimize storage costs?** With Azure Files, you can use [capacity reservations](#reserve-capacity) to achieve an up to 36% discount on storage. Other solutions may employ storage efficiency strategies like deduplication or compression to optionally optimize storage, but remember, these storage optimization strategies often have non-monetary costs, such as reducing performance. Azure Files capacity reservations have no side effects on performance.
+- **Are there any methods to optimize storage costs?** With Azure Files, you can use [capacity reservations](#reserve-capacity) to achieve an up to 36% discount on storage. Other solutions may employ strategies like deduplication or compression to optionally optimize storage efficiency. However, these storage optimization strategies often have non-monetary costs, such as reducing performance. Azure Files capacity reservations have no side effects on performance.
-- **How do you achieve storage resiliency and redundancy?** With Azure Files, storage resiliency and redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built-in or something you must assemble yourself.
+- **How do you achieve storage resiliency and redundancy?** With Azure Files, storage resiliency and redundancy are baked into the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built in or something you must assemble yourself.
- **What do you need to manage?** With Azure Files, the basic unit of management is a storage account. Other solutions may require additional management, such as operating system updates or virtual resource management (VMs, disks, network IP addresses, etc.).
If you are migrating to Azure Files from on-premises or comparing Azure Files to
Azure Files supports storage capacity reservations, which enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase reserved capacity, your reservation must specify the following dimensions:
- **Capacity size**: Capacity reservations can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity reservation. You can purchase multiple reservations, including reservations of different capacity sizes to meet your workload requirements. For example, if your production deployment has 120 TiB of file shares, you could purchase one 100 TiB reservation and two 10 TiB reservations to meet the total capacity requirements.
-- **Term**: Reservations can be purchased for either a one-year or three-year term, with more significant discounts for purchasing a longer reservation term.
+- **Term**: Reservations can be purchased for either a one-year or three-year term, with more significant discounts for purchasing a longer reservation term.
- **Tier**: The tier of Azure Files for the capacity reservation. Reservations for Azure Files currently are available for the premium, hot, and cool tiers.
- **Location**: The Azure region for the capacity reservation. Capacity reservations are available in a subset of Azure regions.
- **Redundancy**: The storage redundancy for the capacity reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
-Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you will pay list price for the balance not covered by the capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges are not included in the reservation.
+Once you purchase a capacity reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the capacity reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the reservation.
For more information on how to purchase storage reservations, see [Optimize costs for Azure Files with reserved capacity](files-reserve-capacity.md).
Azure Files uses a provisioned model for premium file shares. In a provisioned b
The provisioned size of the file share can be increased at any time but can be decreased only after 24 hours since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as many times as you like, until you increase it again. IOPS/throughput scale changes will be effective within a few minutes after the provisioned size change.
-It is possible to decrease the size of your provisioned share below your used GiB. If you do this, you will not lose data, but you will still be billed for the size used and receive the performance of the provisioned share, not the size used.
+It's possible to decrease the size of your provisioned share below your used GiB. If you do, you won't lose data, but you'll still be billed for the size used and receive the performance of the provisioned share, not the size used.
### Provisioning method
-When you provision a premium file share, you specify how many GiBs your workload requires. Each GiB that you provision entitles you to additional IOPS and throughput on a fixed ratio. In addition to the baseline IOPS for which you are guaranteed, each premium file share supports bursting on a best effort basis. The formulas for IOPS and throughput are as follows:
+When you provision a premium file share, you specify how many GiBs your workload requires. Each GiB that you provision entitles you to more IOPS and throughput at a fixed ratio. In addition to the baseline IOPS you are guaranteed, each premium file share supports bursting on a best effort basis. The formulas for IOPS and throughput are as follows:
| Item | Value |
|-|-|
| Minimum size of a file share | 100 GiB |
| Provisioning unit | 1 GiB |
-| Baseline IOPS formula | `MIN(3000 + 1 * ProvisionedGiB, 100000)` |
-| Burst limit | `MIN(MAX(10000, 3 * ProvisionedGiB), 100000)` |
+| Baseline IOPS formula | `MIN(3000 + 1 * ProvisionedStorageGiB, 100000)` |
+| Burst limit | `MIN(MAX(10000, 3 * ProvisionedStorageGiB), 100000)` |
| Burst credits | `(BurstLimit - BaselineIOPS) * 3600` |
-| Throughput rate (ingress + egress) (MiB/sec) | `100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)` |
+| Throughput rate (ingress + egress) (MiB/sec) | `100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB)` |
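To make the formulas concrete, here's a minimal sketch that evaluates them for a hypothetical 2,048 GiB provisioned share; the share size is an example only.

```powershell
# Evaluate the published formulas for an example provisioned size of 2048 GiB.
$provisionedStorageGiB = 2048

$baselineIops = [Math]::Min(3000 + 1 * $provisionedStorageGiB, 100000)                # 5048 IOPS
$burstLimit   = [Math]::Min([Math]::Max(10000, 3 * $provisionedStorageGiB), 100000)   # 10000 IOPS
$throughput   = 100 + [Math]::Ceiling(0.04 * $provisionedStorageGiB) +
                [Math]::Ceiling(0.06 * $provisionedStorageGiB)                        # 305 MiB/sec
```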
The following table illustrates a few examples of these formulae for the provisioned share sizes:
The following table illustrates a few examples of these formulae for the provisi
Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and parallelism, among many other factors. For example, based on internal testing with 8 KiB read/write IO sizes, a single Windows virtual machine without SMB Multichannel enabled, *Standard F16s_v2*, connected to premium file share over SMB could achieve 20K read IOPS and 15K write IOPS. With 512 MiB read/write IO sizes, the same VM could achieve 1.1 GiB/s egress and 370 MiB/s ingress throughput. The same client can achieve up to \~3x performance if SMB Multichannel is enabled on the premium shares. To achieve maximum performance scale, [enable SMB Multichannel](files-smb-protocol.md#smb-multichannel) and spread the load across multiple VMs. Refer to [SMB Multichannel performance](storage-files-smb-multichannel-performance.md) and [troubleshooting guide](storage-troubleshooting-files-performance.md) for some common performance issues and workarounds.
### Bursting
-If your workload needs the extra performance to meet peak demand, your share can use burst credits to go above the share's baseline IOPS limit to give the share the performance it needs to meet the demand. Premium file shares can burst their IOPS up to 4,000 or up to a factor of three, whichever is a higher value. Bursting is automated and operates based on a credit system. Bursting works on a best effort basis, and the burst limit is not a guarantee.
+If your workload needs extra performance to meet peak demand, your share can use burst credits to go above its baseline IOPS limit and deliver the performance needed. Bursting is automated and operates based on a credit system. Bursting works on a best effort basis, and the burst limit isn't a guarantee.
-Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. For example, a 100 GiB share has 500 baseline IOPS. If actual traffic on the share was 100 IOPS for a specific 1-second interval, then the 400 unused IOPS are credited to a burst bucket. Similarly, an idle 1 TiB share accrues burst credit at 1,424 IOPS. These credits will then be used later when operations would exceed the baseline IOPS.
+Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. Earned credits are used later to enable bursting when operations would exceed the baseline IOPS.
Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it will burst up to the maximum allowed peak burst rate. Shares can continue to burst as long as credits remain, but this is based on the number of burst credits accrued. Each IO beyond baseline IOPS consumes one credit, and once all credits are consumed, the share returns to the baseline IOPS.
Share credits have three states:
- Declining, when the file share is using more than the baseline IOPS and is in bursting mode.
- Constant, when the file share is using exactly the baseline IOPS; no credits are accrued or used.
-New file shares start with the full number of credits in its burst bucket. Burst credits will not be accrued if the share IOPS fall below baseline IOPS due to throttling by the server.
+New file shares start with the full number of credits in their burst bucket. Burst credits won't be accrued if the share IOPS fall below baseline IOPS due to throttling by the server.
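Because the burst credit formula is (burst limit − baseline IOPS) × 3,600, and each IO above baseline consumes one credit, a share that starts with a full bucket and drives IO at the burst limit exhausts its credits in about an hour. A small sketch using an example 1,024 GiB share:

```powershell
# Example: how long a full burst bucket lasts at the maximum burst rate
# for a hypothetical 1024 GiB share.
$provisionedStorageGiB = 1024

$baselineIops = [Math]::Min(3000 + 1 * $provisionedStorageGiB, 100000)                # 4024
$burstLimit   = [Math]::Min([Math]::Max(10000, 3 * $provisionedStorageGiB), 100000)   # 10000
$burstCredits = ($burstLimit - $baselineIops) * 3600                                  # 21,513,600

# At the burst limit, the bucket drains at ($burstLimit - $baselineIops) credits per second.
$secondsAtFullBurst = $burstCredits / ($burstLimit - $baselineIops)                   # 3600 seconds
```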
## Pay-as-you-go model
-Azure Files uses a pay-as-you-go business model for standard file shares. In a pay-as-you-go business model, the amount you pay is determined by how much you actually use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to overprovision to account for future growth or performance requirements, or deprovision if your workload and data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
+Azure Files uses a pay-as-you-go business model for standard file shares. In a pay-as-you-go business model, the amount you pay is determined by how much you actually use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and then an additional set of transactions based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to overprovision to account for future growth or performance requirements. You also don't need to deprovision if your workload and data footprint vary over time. On the other hand, a pay-as-you-go model can also be difficult to plan as part of a budgeting process, because the pay-as-you-go billing model is driven by end-user consumption.
### Differences in standard tiers
When you create a standard file share, you pick between the following tiers: transaction optimized, hot, and cool. All three tiers are stored on the exact same standard storage hardware. The main difference for these three tiers is their data at-rest storage prices, which are lower in cooler tiers, and the transaction prices, which are higher in the cooler tiers. This means:
- Transaction optimized, as the name implies, optimizes the price for high transaction workloads. Transaction optimized has the highest data at-rest storage price, but the lowest transaction prices.
-- Hot is for active workloads that do not involve a large number of transactions, and has a slightly lower data at-rest storage price, but slightly higher transaction prices as compared to transaction optimized. Think of it as the middle ground between the transaction optimized and cool tiers.
-- Cool optimizes the price for workloads that do not have much activity, offering the lowest data at-rest storage price, but the highest transaction prices.
+- Hot is for active workloads that don't involve a large number of transactions, and has a slightly lower data at-rest storage price, but slightly higher transaction prices as compared to transaction optimized. Think of it as the middle ground between the transaction optimized and cool tiers.
+- Cool optimizes the price for workloads that don't have much activity, offering the lowest data at-rest storage price, but the highest transaction prices.
-If you put an infrequently accessed workload in the transaction optimized tier, you will pay almost nothing for the few times in a month that you make transactions against your share, but you will pay a high amount for the data storage costs. If you were to move this same share to the cool tier, you would still pay almost nothing for the transaction costs, simply because you are infrequently making transactions for this workload, but the cool tier has a much cheaper data storage price. Selecting the appropriate tier for your use case allows you to considerably reduce your costs.
+If you put an infrequently accessed workload in the transaction optimized tier, you'll pay almost nothing for the few times in a month that you make transactions against your share. However, you'll pay a high amount for the data storage costs. If you moved this same share to the cool tier, you'd still pay almost nothing for the transaction costs, simply because you're infrequently making transactions for this workload. However, the cool tier has a much cheaper data storage price. Selecting the appropriate tier for your use case allows you to considerably reduce your costs.
-Similarly, if you put a highly accessed workload in the cool tier, you will pay a lot more in transaction costs, but less for data storage costs. This can lead to a situation where the increased costs from the transaction prices increase outweigh the savings from the decreased data storage price, leading you to pay more money on cool than you would have on transaction optimized. For some usage levels, it's possible that the hot tier will be the most cost efficient, and the cool tier will be more expensive than transaction optimized.
+Similarly, if you put a highly accessed workload in the cool tier, you'll pay a lot more in transaction costs, but less for data storage costs. This can lead to a situation where the increased costs from the transaction prices increase outweigh the savings from the decreased data storage price, leading you to pay more money on cool than you would have on transaction optimized. For some usage levels, it's possible that the hot tier will be the most cost efficient, and the cool tier will be more expensive than transaction optimized.
Your workload and activity level will determine the most cost efficient tier for your standard file share. In practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.).
### Choosing a tier
-Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in transaction optimized tier due to the large number of transactions incurred during migration. After your migration is complete and you've operated for a few days/weeks with regular usage, you can plug your transaction counts into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which tier is best suited for your workload.
+Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in transaction optimized tier due to the large number of transactions incurred during migration. After your migration is complete and you've operated for a few days or weeks with regular usage, you can plug your transaction counts into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which tier is best suited for your workload.
Because standard file shares only show transaction information at the storage account level, using the storage metrics to estimate which tier is cheaper at the file share level is an imperfect science. If possible, we recommend deploying only one file share in each storage account to ensure full visibility into billing.
To see previous transactions:
### What are transactions?
Transactions are operations or requests against Azure Files to upload, download, or otherwise manipulate the contents of the file share. Every action taken on a file share translates to one or more transactions, and on standard shares that use the pay-as-you-go billing model, that translates to transaction costs.
-There are five basic transaction categories: write, list, read, other, and delete. All operations done via the REST API or SMB are bucketed into one of these 4 categories as follows:
+There are five basic transaction categories: write, list, read, other, and delete. All operations done via the REST API or SMB are bucketed into one of these categories:
| Transaction bucket | Management operations | Data operations |
|-|-|-|
There are five basic transaction categories: write, list, read, other, and delet
| Delete transactions | <ul><li>`DeleteShare`</li></ul> | <ul><li>`ClearRange`</li><li>`DeleteDirectory`</li><li>`DeleteFile`</li></ul> |
> [!Note]
-> NFS 4.1 is only available for premium file shares, which use the provisioned billing model. Transactions do not affect billing for premium file shares.
+> NFS 4.1 is only available for premium file shares, which use the provisioned billing model. Transactions don't affect billing for premium file shares.
## Provisioned/quota, logical size, and physical size
Azure Files tracks three distinct quantities with respect to share capacity:
-- **Provisioned size or quota**: With both premium and standard file shares, you specify the maximum size that the file share is allowed to grow to. In premium file shares, this value is called the provisioned size, and whatever amount you provision is what you pay for, regardless of how much you actually use. In standard file shares, this value is called quota and does not directly affect your bill. Provisioned size is a required field for premium file shares, while standard file shares will default if not directly specified to the maximum value supported by the storage account, either 5 TiB or 100 TiB, depending on the storage account type and settings.
+- **Provisioned size or quota**: With both premium and standard file shares, you specify the maximum size that the file share is allowed to grow to. In premium file shares, this value is called the provisioned size, and whatever amount you provision is what you pay for, regardless of how much you actually use. In standard file shares, this value is called quota and does not directly affect your bill. Provisioned size is a required field for premium file shares. For standard file shares, if provisioned size isn't directly specified, the share will default to the maximum value supported by the storage account. This is either 5 TiB or 100 TiB, depending on the storage account type and settings.
-- **Logical size**: The logical size of a file share or file relates to how big it is without considering how it is actually stored, where additional optimizations may be applied. One way to think about this is that the logical size of the file is how many KiB/MiB/GiB will be transferred over the wire if you copy it to a different location. In both premium and standard file shares, the total logical size of the file share is what is used for enforcement against provisioned size/quota. In standard file shares, the logical size is the quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows properties dialog for a file/folder and as "content length" by Azure Files metrics.
+- **Logical size**: The logical size of a file share or file relates to how big it is without considering how it's actually stored, where additional optimizations may be applied. One way to think about this is that the logical size of the file is how many KiB/MiB/GiB will be transferred over the wire if you copy it to a different location. In both premium and standard file shares, the total logical size of the file share is what is used for enforcement against provisioned size/quota. In standard file shares, the logical size is the quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows properties dialog for a file/folder and as "content length" by Azure Files metrics.
-- **Physical size**: The physical size of the file relates to the size of the file as encoded on disk. This may align with the file's logical size, or it may be smaller, depending on how the file has been written to by the operating system. A common reason for the logical size and physical size to be different is through the use of [sparse files](/windows/win32/fileio/sparse-files). The physical size of the files in the share is used for snapshot billing, although allocated ranges are shared between snapshots if they are unchanged (differential storage). To learn more about how snapshots are billed in Azure Files, see [Snapshots](#snapshots).
+- **Physical size**: The physical size of the file relates to the size of the file as encoded on disk. This may align with the file's logical size, or it may be smaller, depending on how the file has been written to by the operating system. A common reason for the logical size and physical size to be different is by using [sparse files](/windows/win32/fileio/sparse-files). The physical size of the files in the share is used for snapshot billing, although allocated ranges are shared between snapshots if they are unchanged (differential storage). To learn more about how snapshots are billed in Azure Files, see [Snapshots](#snapshots).
## Snapshots
-Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. Snapshots are always differential from the live share and from each other, meaning that you are always paying only for what's different in each snapshot. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
+Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. Snapshots are always differential from the live share and from each other, meaning that you're always paying only for what's different in each snapshot. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
-Snapshots do not count against file share size limits, although you are limited to a specific number of snapshots. To see the current snapshot limits, see [Azure file share scale targets](storage-files-scale-targets.md#azure-file-share-scale-targets).
+Snapshots do not count against file share size limits, although you're limited to a specific number of snapshots. To see the current snapshot limits, see [Azure file share scale targets](storage-files-scale-targets.md#azure-file-share-scale-targets).
Snapshots are always billed based on the differential storage utilization of each snapshot; however, this differs slightly between premium file shares and standard file shares: -- In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price over the provisioned storage price. This means that you will see a separate line item on your bill representing snapshots for premium file shares for each FileStorage storage account on your bill.
+- In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price over the provisioned storage price. This means that you'll see a separate line item on your bill representing snapshots for premium file shares for each FileStorage storage account on your bill.
-- In standard file shares, snapshots are billed as part of the normal used storage meter, although you are still only billed for the differential cost of the snapshot. This means that you will not see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against capacity reservations that are purchased for standard file shares.
+- In standard file shares, snapshots are billed as part of the normal used storage meter, although you're still only billed for the differential cost of the snapshot. This means that you won't see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against capacity reservations that are purchased for standard file shares.
Value-added services for Azure Files may use snapshots as part of their value proposition. See [value-added services for Azure Files](#value-added-services) for more information on how snapshots are used. ## Value-added services
-Like on-premises storage solutions which offer first- and third-party features/product integrations to bring additional value to the hosted file shares, Azure Files provides integration points for first- and third-party products to integrate with customer-owned file shares. Although these solutions may provide considerable extra value to Azure Files, you should consider the additional costs that these services add to the total cost of an Azure Files solution.
+Like on-premises storage solutions that offer first- and third-party features and product integrations to add value to the hosted file shares, Azure Files provides integration points for first- and third-party products to integrate with customer-owned file shares. Although these solutions may provide considerable extra value to Azure Files, you should consider the extra costs that these services add to the total cost of an Azure Files solution.
-Costs are generally broken down into three buckets:
+Costs are broken down into three buckets:
-- **Licensing costs for the value-added service.** These may come in the form of a fixed cost per customer, end user (sometimes referred to as a "head cost"), Azure file share or storage account, or in units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share.
+- **Licensing costs for the value-added service.** These may come in the form of a fixed cost per customer, end user (sometimes called a "head cost"), Azure file share or storage account. They may also be based on units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share.
- **Transaction costs for the value-added service.** Some value-added services have their own concept of transactions distinct from what Azure Files views as a transaction. These transactions will show up on your bill under the value-added service's charges; however, they relate directly to how you use the value-added service with your file share. -- **Azure Files costs for using a value-added service.** Azure Files does not directly charge customers costs for adding value-added services, but as part of adding value to the Azure file share, the value-added service might increase the costs that you see on your Azure file share. This is easy to see with standard file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the value-added service does transactions against the file share on your behalf, they will show up in your Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to premium file shares as well, although it may be less noticeable. Additional transactions against premium file shares from value-added services count against your provisioned IOPS numbers, meaning that value-added services may require provisioning additional storage to have enough IOPS or throughput available for your workload.
+- **Azure Files costs for using a value-added service.** Azure Files doesn't directly charge customers for using value-added services, but as part of adding value to the Azure file share, the value-added service might increase the costs that you see on your Azure file share. This is easy to see with standard file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the value-added service does transactions against the file share on your behalf, they will show up in your Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to premium file shares as well, although it may be less noticeable. Additional transactions against premium file shares from value-added services count against your provisioned IOPS numbers, meaning that value-added services may require provisioning more storage to have enough IOPS or throughput available for your workload.
When computing the total cost of ownership for your file share, you should consider the costs of Azure Files and of all value-added services that you would like to use with Azure Files.
When considering the total cost of ownership for a solution deployed using Azure
To optimize costs for Azure Files with Azure File Sync, you should consider the tier of your file share. For more information on how to pick the tier for each file share, see [choosing a file share tier](#choosing-a-tier).
-If you are migrating to Azure File Sync from StorSimple, see [Comparing the costs of StorSimple to Azure File Sync](../file-sync/file-sync-storsimple-cost-comparison.md).
+If you're migrating to Azure File Sync from StorSimple, see [Comparing the costs of StorSimple to Azure File Sync](../file-sync/file-sync-storsimple-cost-comparison.md).
### Azure Backup
-Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file shares, as well as other value-added services such as Azure File Sync. Azure Backup for Azure Files is a snapshot-based backup solution, meaning that Azure Backup provides a scheduling mechanism for automatically taking snapshots on an administrator-defined schedule and a user-friendly interface for restoring deleted files/folders or the entire share to a particular point in time. To learn more about Azure Backup for Azure Files, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json).
+Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file shares, and with other value-added services such as Azure File Sync. Azure Backup for Azure Files is a snapshot-based backup solution that provides a scheduling mechanism for automatically taking snapshots on an administrator-defined schedule. It also provides a user-friendly interface for restoring deleted files/folders or the entire share to a particular point in time. To learn more about Azure Backup for Azure Files, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json).
-When considering the costs of using Azure Backup to back up your Azure file shares, you should consider the following:
+When considering the costs of using Azure Backup to back up your Azure file shares, consider the following:
-- **Protected instance licensing cost for Azure file share data.** Azure Backup charges a protected instance licensing cost per storage account containing backed up Azure file shares. A protected instance is defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB of Azure file share storage are subject to a fractional protected instance cost. See [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/) for more information (note that you must select *Azure Files* from the list of services Azure Backup can protect).
+- **Protected instance licensing cost for Azure file share data.** Azure Backup charges a protected instance licensing cost per storage account containing backed up Azure file shares. A protected instance is defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB of Azure file share storage are subject to a fractional protected instance cost. For more information, see [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/). Note that you must select *Azure Files* from the list of services Azure Backup can protect.
- **Azure Files costs.** Azure Backup increases the costs of Azure Files in the following ways:
- - **Differential costs from Azure file share snapshots.** Azure Backup automates taking Azure file share snapshots on an administrator-defined schedule. Snapshots are always differential; however, the additional cost added to the total bill depends on the length of time snapshots are kept and the amount of churn on the file share during that time, because that dictates how different the snapshot is from the live file share and therefore how much additional data is stored by Azure Files.
+ - **Differential costs from Azure file share snapshots.** Azure Backup automates taking Azure file share snapshots on an administrator-defined schedule. Snapshots are always differential; however, the additional cost added to the total bill depends on the length of time snapshots are kept and the amount of churn on the file share during that time. This dictates how different the snapshot is from the live file share and therefore how much additional data is stored by Azure Files.
- **Transaction costs from restore operations.** Restore operations from the snapshot to the live share will cause transactions. For standard file shares, this means that reads from snapshots/writes from restores will be billed as normal file share transactions. For premium file shares, these operations are counted against the provisioned IOPS for the file share.
Microsoft Defender provides support for Azure Files as part of its Microsoft Def
Microsoft Defender for Storage does not support antivirus capabilities for Azure file shares.
-The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product levies on top of the transactions that are done against the Azure file share. Although these costs are based on the transactions incurred in Azure Files, they are not part of the billing for Azure Files, but rather are part of the Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be found on [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) under the *Microsoft Defender for Storage* table row.
+The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product levies on top of the transactions that are done against the Azure file share. Although these costs are based on the transactions incurred in Azure Files, they aren't part of the billing for Azure Files, but rather are part of the Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be found on the [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) under the *Microsoft Defender for Storage* table row.
Transaction-heavy file shares will incur significant costs when using Microsoft Defender for Storage. Based on these costs, you may wish to opt out of Microsoft Defender for Storage for specific storage accounts. For more information, see [Exclude a storage account from Microsoft Defender for Storage protections](../../defender-for-cloud/defender-for-storage-exclude.md).
stream-analytics Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-bicep.md
+
+ Title: Quickstart - Create an Azure Stream Analytics job using Bicep
+description: This quickstart shows how to use Bicep to create an Azure Stream Analytics job.
++++++ Last updated : 05/17/2022++
+# Quickstart: Create an Azure Stream Analytics job using Bicep
+
+In this quickstart, you use Bicep to create an Azure Stream Analytics job. Once the job is created, you validate the deployment.
++
+## Prerequisites
+
+To complete this article, you need to have an Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/streamanalytics-create/).
++
+The Azure resource defined in the Bicep file is [Microsoft.StreamAnalytics/StreamingJobs](/azure/templates/microsoft.streamanalytics/streamingjobs), which creates an Azure Stream Analytics job.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters streamAnalyticsJobName=<job-name> numberOfStreamingUnits=<int>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -streamAnalyticsJobName "<job-name>" -numberOfStreamingUnits <int>
+ ```
+
+
+
+ You need to provide values for the following parameters:
+
+ - **streamAnalyticsJobName**: Replace **\<job-name\>** with the Stream Analytics job name. The name can contain alphanumeric characters and hyphens, and it must be between 3 and 63 characters long.
+ - **numberOfStreamingUnits**: Replace **\<int\>** with the number of Streaming Units. Allowed values include: 1, 3, 6, 12, 18, 24, 30, 36, 42, and 48.
+
+ > [!NOTE]
+ > When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+You can use the Azure portal to check the Azure Stream Analytics job, or use the following Azure CLI or Azure PowerShell commands to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+If you plan to continue on to subsequent tutorials, you may wish to leave these resources in place. When no longer needed, delete the resource group, which deletes the Azure Stream Analytics job. To delete the resource group by using Azure CLI or Azure PowerShell:
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created an Azure Stream Analytics job using Bicep and validated the deployment. To learn how to create your own Bicep files using Visual Studio Code, continue on to the following article:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
+
+ Title: "Synapse implementation success methodology: Assess environment"
+description: "Learn how to assess your environment to help evaluate the solution design and make informed technology decisions to implement Azure Synapse Analytics."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Assess environment
++
+The first step when implementing Azure Synapse Analytics is to assess your environment. An assessment provides you with the opportunity to gather all the available information about your existing environment, environmental requirements, project requirements, constraints, timelines, and pain points. This information will form the basis of later evaluations and checkpoint activities. It will prove invaluable when it comes time to validate and compare against the project solution as it's planned, designed, and developed. We recommend that you dedicate sufficient time to gathering all the information, and be sure to have the necessary discussions with relevant groups. Relevant groups can include project stakeholders, business users, solution designers, and subject matter experts (SMEs) of the existing solution and environment.
+
+The assessment will become a guide to help you evaluate the solution design and make informed technology recommendations to implement Azure Synapse.
+
+## Workload assessment
+
+The workload assessment is concerned with the environment, analytical workload roles, ETL/ELT, networking and security, the Azure environment, and data consumption.
+
+### Environment
+
+For the environment, evaluate the following points.
+
+- Describe your existing analytical workload:
+ - What are the workloads (like data warehouse or big data)?
+ - How is this workload helping the business? What are the use case scenarios?
+ - What is the business driver for this analytical platform and for potential migration?
+ - Gather details about the existing architecture, design, and implementation choices.
+ - Gather details about all existing upstream and downstream dependent components and consumers.
+- Are you migrating an existing data warehouse (like Microsoft SQL Server, Microsoft Analytics Platform System (APS), Netezza, Snowflake, or Teradata)?
+- Are you migrating a big data platform (like Cloudera or Hortonworks)?
+- Gather the architecture and dataflow diagrams for the current analytical environment.
+- Where are the data sources for your planned analytical workloads located (Azure, other cloud providers, or on-premises)?
+- What is the total size of existing datasets (historical and incremental)? What is the current rate of growth of your dataset(s)? What is the projected rate of growth of your datasets for the next 2-5 years?
+- Do you have an existing data lake? Gather as much detail as possible about file types (like Parquet or CSV), file sizes, and security configuration.
+- Do you have semi-structured or unstructured data to process and analyze?
+- Describe the nature of the data processing (batch or real-time processing).
+- Do you need interactive data exploration from relational data, data lake, or other sources?
+- Do you need real-time data analysis and exploration from operational data sources?
+- What are the pain points and limitations in the current environment?
+- What source control and DevOps tools are you using today?
+- Do you have a use case to build a hybrid (cloud and on-premises), cloud-only, or multi-cloud analytical solution?
+- Gather information on the existing cloud environment. Is it a single-cloud provider or a multi-cloud provider?
+- Gather plans about the future cloud environment. Will it be a single-cloud provider or a multi-cloud provider?
+- What are the RPO/RTO/HA/SLA requirements in the existing environment?
+- What are the RPO/RTO/HA/SLA requirements in the planned environment?
+
+### Analytical workload roles
+
+For the analytical workload roles, evaluate the following points.
+
+- Describe the different roles (data scientist, data engineer, data analyst, and others).
+- Describe the analytical platform access control requirement for these roles.
+- Identify the platform owner who's responsible for provisioning compute resources and granting access.
+- Describe how different data roles currently collaborate.
+- Are there multiple teams collaborating on the same analytical platform? If so, what are the access control and isolation requirements for each of these teams?
+- What are the client tools that end users use to interact with the analytical platform?
+
+### ETL/ELT, transformation, and orchestration
+
+For ETL/ELT, transformation, and orchestration, evaluate the following points.
+
+- What tools are you using today for data ingestion (ETL or ELT)?
+- Where do these tools exist in the existing environment (on-premises or the cloud)?
+- What are your current data load and update requirements (real-time, micro batch, hourly, daily, weekly, or monthly)?
+- Describe the transformation requirements for each layer (big data, data lake, data warehouse).
+- What is the current programming approach for transforming the data (no-code, low-code, programming like SQL, Python, Scala, C#, or other)?
+- What is the preferred planned programming approach to transform the data (no-code, low-code, programming like SQL, Python, Scala, C#, or other)?
+- What tools are currently in use for data orchestration to automate the data-driven process?
+- Where are the data sources for your existing ETL located (Azure, other cloud provider, or on-premises)?
+- What are the existing data consumption tools (reporting, BI tools, open-source tools) that require integration with the analytical platform?
+- What are the planned data consumption tools (reporting, BI tools, open-source tools) that will require integration with the analytical platform?
+
+### Networking and security
+
+For networking and security, evaluate the following points.
+
+- What regulatory requirements do you have for your data?
+- If your data contains customer content, payment card industry (PCI), or Health Insurance Portability and Accountability Act of 1996 (HIPAA) data, has your security group certified Azure for this data? If so, for which Azure services?
+- Describe your user authorization and authentication requirements.
+- Are there security issues that could limit access to data during implementation?
+- Is there test data available to use during development and testing?
+- Describe the organizational network security requirements on the analytical compute and storage (private network, public network, or firewall restrictions).
+- Describe the network security requirements for client tools to access analytical compute and storage (peered network, private endpoint, or other).
+- Describe the current network setup between on-premises and Azure (Azure ExpressRoute, site-to-site, or other).
+
+Use the following checklists of possible requirements to guide your assessment.
+
+- Data protection:
+ - In-transit encryption
+ - Encryption at rest (default keys or customer-managed keys)
+ - Data discovery and classification
+- Access control:
+ - Object-level security
+ - Row-level security
+ - Column-level security
+ - Dynamic data masking
+- Authentication:
+ - SQL login
+ - Azure Active Directory (Azure AD)
+ - Multi-factor authentication (MFA)
+- Network security:
+ - Virtual networks
+ - Firewall
+ - Azure ExpressRoute
+- Threat protection:
+ - Threat detection
+ - Auditing
+ - Vulnerability assessment
+
+For more information, see the [Azure Synapse Analytics security white paper](security-white-paper-introduction.md).
+
+### Azure environment
+
+For the Azure environment, evaluate the following points.
+
+- Are you currently using Azure? Is it used for production workloads?
+- If you're using Azure, which services are you using? Which regions are you using?
+- Do you use Azure ExpressRoute? What's its bandwidth?
+- Do you have budget approval to provision the required Azure services?
+- How do you currently provision and manage resources (Azure Resource Manager (ARM) or Terraform)?
+- Is your key team familiar with Synapse Analytics? Is any training required?
+
+### Data consumption
+
+For data consumption, evaluate the following points.
+
+- Describe the tools you currently use, and how you use them, to perform activities like data ingestion, exploration, preparation, and visualization.
+- Identify the tools you plan to use to perform activities like data ingestion, exploration, preparation, and visualization.
+- What applications are planned to interact with the analytical platform (Microsoft Power BI, Microsoft Excel, Microsoft SQL Server Reporting Services, Tableau, or others)?
+- Identify all data consumers.
+- Identify data export and data sharing requirements.
+
+## Azure Synapse services assessment
+
+The Azure Synapse services assessment is concerned with the services within Azure Synapse. Azure Synapse has the following components for compute and data movement:
+
+- **Synapse SQL:** A distributed query system for Transact-SQL (T-SQL) that enables data warehousing and data virtualization scenarios. It also extends T-SQL to address streaming and machine learning (ML) scenarios. Synapse SQL offers both *serverless* and *dedicated* resource models.
+- **Serverless SQL pool:** A distributed data processing system, built for large-scale data and computational functions. There's no infrastructure to set up or clusters to maintain. This service is suited for unplanned or burst workloads. Recommended scenarios include quick data exploration on files directly on the data lake, logical data warehouse, and data transformation of raw data.
+- **Dedicated SQL pool:** Represents a collection of analytic resources that are provisioned when using Synapse SQL. The size of a dedicated SQL pool (formerly SQL DW) is determined by Data Warehousing Units (DWU). This service is suited for a data warehouse with predictable, high-performance, continuous workloads over data stored in SQL tables.
+- **Apache Spark pool:** Deeply and seamlessly integrates Apache Spark, which is the most popular open source big data engine used for data preparation, data engineering, ETL, and ML.
+- **Data integration pipelines:** Azure Synapse contains the same data integration engine and experiences as Azure Data Factory (ADF). They allow you to create rich at-scale ETL pipelines without leaving Azure Synapse.
+
+To help determine the best SQL pool type (dedicated or serverless), evaluate the following points.
+
+- Do you want to build a traditional relational data warehouse by reserving processing power for data stored in SQL tables?
+- Do your use cases demand predictable performance?
+- Do you want to build a logical data warehouse on top of a data lake?
+- Do you want to query data directly from a data lake?
+- Do you want to explore data from a data lake?
+
+The following table compares the two Synapse SQL pool types.
+
+| **Comparison** | **Dedicated SQL pool** | **Serverless SQL pool** |
+|:-|:-|:-|
+| Value propositions | Fully managed capabilities of a data warehouse. Predictable and high performance for continuous workloads. Optimized for managed (loaded) data. | Easy to get started and explore data lake data. Better total cost of ownership (TCO) for ad hoc and intermittent workloads. Optimized for querying data in a data lake. |
+| Workloads | *Ideal for continuous workloads.* Loading boosts performance, with more complexity. Charging per DWU (when sized well) will be cost-beneficial. | *Ideal for ad hoc or intermittent workloads.* There's no need to load data, so it's easier to start and run. Charging per usage will be cost-beneficial. |
+| Query performance | *Delivers high concurrency and low latency.* Supports rich caching options, including materialized views. There's the ability to choose trade-offs with workload management (WLM). | *Not suited for dashboarding queries.* Millisecond response times aren't expected. It works only on external data. |
+
+### Dedicated SQL pool assessment
+
+For the dedicated SQL pool assessment, evaluate the following platform points.
+
+- What is the current data warehouse platform (Microsoft SQL Server, Netezza, Teradata, Greenplum, or other)?
+- For a migration workload, determine the make and model of your appliance for each environment. Include details of CPUs, GPUs, and memory.
+- For an appliance migration, when was the hardware purchased? Has the appliance been fully depreciated? If not, when will depreciation end? And, how much capital expenditure is left?
+- Are there any hardware and network architecture diagrams?
+- Where are the data sources for your planned data warehouse located (Azure, other cloud provider, or on-premises)?
+- What are the data hosting platforms of the data sources for your data warehouse (Microsoft SQL Server, Azure SQL Database, DB2, Oracle, Azure Blob Storage, AWS, Hadoop, or other)?
+- Are any of the data sources data warehouses? If so, which ones?
+- Identify all ETL, ELT, and data loading scenarios (batch windows, streaming, near real-time). Identify existing service level agreements (SLAs) for each scenario and document the expected SLAs in the new environment.
+- What is the current data warehouse size?
+- What rate of dataset growth is being targeted for the dedicated SQL pool?
+- Describe the environments you're using today (development, test, or production).
+- Which tools are currently in place for data movement (ADF, Microsoft SQL Server Integration Services (SSIS), robocopy, Informatica, SFTP, or others)?
+- Are you planning to load real-time or near real-time data?
+
+Evaluate the following database points.
+
+- What is the number of objects in each data warehouse (schemas, tables, views, stored procedures, functions)?
+- Is it a star schema, snowflake schema, or another design?
+- What are the largest tables in terms of size and number of records?
+- What are the widest tables in terms of the number of columns?
+- Is there already a data model designed for your data warehouse? Is it a Kimball, Inmon, or star schema design?
+- Are Slowly Changing Dimensions (SCDs) in use? If so, which types?
+- Will a semantic layer be implemented by using relational data marts or Analysis Services (tabular or multidimensional), or another product?
+- What are the HA/RPO/RTO/data archiving requirements?
+- What are the region replication requirements?
+
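+To help answer the object inventory questions above from a SQL Server-based source, a catalog query similar to the following sketch can be used. It's a minimal example; extend it with schema or size details (for example, from `sp_spaceused`) as your assessment requires.
+
+```sql
+-- Count database objects by type on a SQL Server-based source (sketch)
+SELECT s.name AS schema_name,
+       o.type_desc,
+       COUNT(*) AS object_count
+FROM sys.objects AS o
+JOIN sys.schemas AS s
+    ON o.schema_id = s.schema_id
+WHERE o.type IN ('U', 'V', 'P', 'FN', 'IF', 'TF') -- tables, views, procedures, functions
+GROUP BY s.name, o.type_desc
+ORDER BY s.name, o.type_desc;
+```
+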
+Evaluate the following workload characteristics.
+
+- What is the estimated number of concurrent users or jobs accessing the data warehouse during *peak hours*?
+- What is the estimated number of concurrent users or jobs accessing the data warehouse during *off peak hours*?
+- Is there a period of time when there will be no users or jobs?
+- What are your query execution performance expectations for interactive queries?
+- What are your data load performance expectations for daily/weekly/monthly data loads or updates?
+- What are your query execution expectations for reporting and analytical queries?
+- How complex will the most commonly executed queries be?
+- What percentage of your total dataset size is your active dataset?
+- Approximately what percentage of the workload is anticipated for loading or updating, batch processing or reporting, interactive query, and analytical processing?
+- Identify the data consuming patterns and platforms:
+ - Current and planned reporting method and tools.
+ - Which application or analytical tools will access the data warehouse?
+ - Number of concurrent queries?
+ - Average number of active queries at any point in time?
+ - What is the nature of data access (interactive, ad hoc, export, or others)?
+ - Data roles and complete description of their data requirements.
+ - Maximum number of concurrent connections.
+- Query performance SLA pattern by:
+ - Dashboard users.
+ - Batch reporting.
+ - ML users.
+ - ETL process.
+- What are the security requirements for the existing environment and for the new environment (row-level security, column-level security, access control, encryption, and others)?
+- Do you have requirements to integrate ML model scoring with T-SQL?
+
+### Serverless SQL pool assessment
+
+Synapse Serverless SQL pool supports three major use cases.
+
+- **Basic discovery and exploration:** Quickly reason about the data in various formats (Parquet, CSV, JSON) in your data lake, so you can plan how to extract insights from it.
+- **Logical data warehouse:** Provide a relational abstraction on top of raw or disparate data without relocating and transforming data, allowing an always-current view of your data.
+- **Data transformation:** Simple, scalable, and performant way to transform data in the lake by using T-SQL, so it can be fed to BI and other tools or loaded into a relational data store (Synapse SQL databases, Azure SQL Database, or others).
+
+Different data roles can benefit from serverless SQL pool:
+
+- **Data engineers** can explore the data lake, transform and prepare data by using this service, and simplify their data transformation pipelines.
+- **Data scientists** can quickly reason about the contents and structure of the data in the data lake, thanks to features such as OPENROWSET and automatic schema inference.
+- **Data analysts** can [explore data and Spark external tables](../sql/develop-storage-files-spark-tables.md) created by data scientists or data engineers by using familiar T-SQL statements or their favorite query tools.
+- **BI professionals** can quickly [create Power BI reports on top of data in the data lake](../sql/tutorial-connect-power-bi-desktop.md) and Spark tables.
+
+> [!NOTE]
+> The T-SQL language is used in both dedicated SQL pool and the serverless SQL pool, however there are some differences in the set of supported features. For more information about T-SQL features supported in Synapse SQL (dedicated and serverless), see [Transact-SQL features supported in Azure Synapse SQL](../sql/overview-features.md).
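+
+As a simple illustration of the exploration and logical data warehouse use cases above, the following sketch queries Parquet files with OPENROWSET and then wraps a similar query in a view. The storage account, container, and folder names are placeholders; adjust the path and file format to match your data lake.
+
+```sql
+-- Ad hoc exploration of Parquet files in the data lake (placeholder path)
+SELECT TOP 10 *
+FROM OPENROWSET(
+    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/sales/*.parquet',
+    FORMAT = 'PARQUET'
+) AS sales;
+GO
+
+-- A view over the same files is one building block of a logical data warehouse
+-- (create it in a user database, not master)
+CREATE VIEW dbo.Sales
+AS
+SELECT *
+FROM OPENROWSET(
+    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/sales/*.parquet',
+    FORMAT = 'PARQUET'
+) AS sales;
+```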
+
+For the serverless SQL pool assessment, evaluate the following points.
+
+- Do you have use cases to discover and explore data from a data lake by using relational queries (T-SQL)?
+- Do you have use cases to build a logical data warehouse on top of a data lake?
+- Identify whether there are use cases to transform data in the data lake without first moving data from the data lake.
+- Is your data already in Azure Data Lake Storage (ADLS) or Azure Blob Storage?
+- If your data is already in ADLS, do you have a good partition strategy in the data lake?
+- Do you have operational data in Azure Cosmos DB? Do you have use cases for real-time analytics on Azure Cosmos DB without impacting transactions?
+- Identify the file types in the data lake.
+- Identify the query performance SLA. Does your use case demand predictable performance and cost?
+- Do you have unplanned or bursty SQL analytical workloads?
+- Identify the data consuming pattern and platforms:
+ - Current and planned reporting method and tools.
+ - Which application or analytical tools will access the serverless SQL pool?
+ - Average number of active queries at any point in time.
+ - What is the nature of data access (interactive, ad hoc, export, or others)?
+ - Data roles and complete description of their data requirements.
+ - Maximum number of concurrent connections.
+ - Query complexity?
+- What are the security requirements (access control, encryption, and others)?
+- What is the required T-SQL functionality (stored procedures or functions)?
+- Identify the number of queries that will be sent to the serverless SQL pool and the result set size of each query.
+
+> [!TIP]
+> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
+
+### Spark pool assessment
+
+Spark pools in Azure Synapse enable the following key scenarios.
+
+- **Data engineering/Data preparation:** Apache Spark includes many language features to support preparation and processing of large volumes of data. Preparation and processing can make the data more valuable and allow it to be consumed by other Azure Synapse services. It's enabled through multiple languages (C#, Scala, PySpark, Spark SQL) and by using supplied libraries for processing and connectivity.
+- **Machine learning:** Apache Spark comes with [MLlib](https://spark.apache.org/mllib/), which is an ML library built on top of Spark that you can use from a Spark pool. Spark pools also include Anaconda, which is a Python distribution that comprises various packages for data science including ML. In addition, Apache Spark on Synapse provides pre-installed libraries for [Microsoft Machine Learning](https://mmlspark.blob.core.windows.net/website/index.html), which is a fault-tolerant, elastic, and RESTful ML framework. When combined with built-in support for notebooks, you have a rich environment for creating ML applications.
+
+> [!NOTE]
+> For more information, see [Apache Spark in Azure Synapse Analytics](../spark/apache-spark-overview.md).
+>
+> Also, Azure Synapse is compatible with Linux Foundation Delta Lake. Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads. For more information, see [What is Delta Lake](../spark/apache-spark-what-is-delta-lake.md).
+
+For the Spark pool assessment, evaluate the following points.
+
+- Identify the workloads that require data engineering or data preparation.
+- Clearly define the types of transformations.
+- Identify whether you have unstructured data to process.
+- When you're migrating from an existing Spark/Hadoop workload:
+ - What is the existing big data platform (Cloudera, Hortonworks, cloud services, or other)?
+ - If it's a migration from on-premises, has hardware depreciated or licenses expired? If not, when will depreciation or expiry happen?
+ - What is the existing cluster type?
+ - What are the required libraries and Spark versions?
+ - Is it a Hadoop migration to Spark?
+ - What are the current or preferred programming languages?
+ - What is the type of workload (big data, ML, or other)?
+ - What are the existing and planned client tools and reporting platforms?
+ - What are the security requirements?
+ - Are there any current pain points and limitations?
+- Do you plan to use, or are you currently using, Delta Lake?
+- How do you manage packages today?
+- Identify the required compute cluster types.
+- Identify whether cluster customization is required.
+
+> [!TIP]
+> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/learn/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-workspace-design.md) in the *Azure Synapse success by design* series, learn how to evaluate the Synapse workspace design and validate that it meets guidelines and requirements.
synapse-analytics Implementation Success Evaluate Data Integration Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-data-integration-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate data integration design"
+description: "Learn how to evaluate the data integration design and validate that it meets guidelines and requirements."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate data integration design
++
+Azure Synapse Analytics contains the same data integration engine and experiences as Azure Data Factory (ADF), allowing you to create rich at-scale ETL pipelines without leaving Azure Synapse Analytics.
++
+This article describes how to evaluate the design of the data integration components for your project. Specifically, it helps you to determine whether Azure Synapse pipelines are the best fit for your data integration requirements. Time invested in evaluating the design prior to solution development can help to eliminate unexpected design changes that may affect your project timeline or cost.
+
+## Fit gap analysis
+
+You should perform a thorough fit gap analysis of your data integration strategy. If you choose Azure Synapse pipelines as the data integration tool, review the following points to ensure they're the best fit for your data integration requirements and orchestration. Even if you choose different data integration tools, you should still review the following points to validate that all key design points have been considered and that your chosen tool will support your solution needs. This information should have been captured during your assessment performed earlier in this methodology.
+
+- Review your data sources and destinations (targets):
+ - Validate that source and destination stores are [supported data stores](/azure/data-factory/connector-overview).
+ - If they're not supported, check whether you can use the [extensible options](/azure/data-factory/connector-overview#integrate-with-more-data-stores).
+- Review the triggering points of your data integration and the frequency:
+ - Azure Synapse pipelines support schedule, tumbling window, and storage event triggers.
+ - Validate the minimum recurrence interval and supported storage events against your requirements.
+- Review the required modes of data integration:
+ - Scheduled, periodic, and triggered batch processing can be effectively designed in Azure Synapse pipelines.
+ - To implement Change Data Capture (CDC) functionality, use third-party products or create a custom solution.
+ - To support real-time streaming, use [Azure Event Hubs](/azure/event-hubs/event-hubs-about), [Azure Event Hubs from Apache Kafka](/azure/event-hubs/event-hubs-for-kafka-ecosystem-overview), or [Azure IoT Hub](/azure/iot-hub/iot-concepts-and-iot-hub).
+ - To run Microsoft SQL Server Integration Services (SSIS) packages, you can [lift and shift SSIS workloads to the cloud](/sql/integration-services/lift-shift/ssis-azure-lift-shift-ssis-packages-overview?view=sql-server-ver15&preserve-view=true).
+- Review the compute design:
+ - Does the compute required for the pipelines need to be serverless or provisioned?
+ - Azure Synapse pipelines support both modes of integration runtime (IR): serverless or self-hosted on a Windows machine.
+ - Validate [ports and firewalls](/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory#ports-and-firewalls) and [proxy setting](/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory#proxy-server-considerations) when using the self-hosted IR (provisioned).
+- Review the security requirements and the networking and firewall configuration of the environment, and compare them to the security, networking, and firewall configuration design:
+ - Review how the data sources are secured and networked.
+ - Review how the target data stores are secured and networked. Azure Synapse pipelines have different [data access strategies](/azure/data-factory/data-access-strategies) that provide a secure way to connect data stores via private endpoints or virtual networks.
+ - Use [Azure Key Vault](/azure/key-vault/general/basic-concepts) to store credentials whenever applicable.
+ - Use ADF for customer-managed key (CMK) encryption of credentials and store them in the self-hosted IR.
+- Review the design for ongoing monitoring of all data integration components.
+
+## Architecture considerations
+
+As you review the data integration design, consider the following recommendations and guidelines to ensure that the data integration components of your solution will provide ongoing operational excellence, performance efficiency, reliability, and security.
+
+### Operational excellence
+
+For operational excellence, evaluate the following points.
+
+- **Environment:** When planning your environments, segregate them into development/test, user acceptance testing (UAT), and production. Use the folder organizational options to organize your pipelines and datasets by business/ETL jobs to support better maintainability. Use [annotations](https://azure.microsoft.com/resources/videos/azure-friday-enhanced-monitoring-capabilities-and-tagsannotations-in-azure-data-factory/) to tag your pipelines so you can easily monitor them. Create reusable pipelines by using parameters, iteration, and conditional activities.
+- **Monitoring and alerting:** Synapse workspaces include the [Monitor Hub](../get-started-monitor.md), which provides rich monitoring information for every pipeline run. It also integrates with [Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) for further log analysis and alerting. You should implement these features to provide proactive error notifications. Also, use *Upon Failure* paths to implement customized [error handling](https://techcommunity.microsoft.com/t5/azure-data-factory/understanding-pipeline-failures-and-error-handling/ba-p/1630459).
+- **Automated deployment and testing:** Azure Synapse pipelines are built into Synapse workspace, so you can take advantage of workspace automation and deployment. Use [ARM templates](../quickstart-deployment-template-workspaces.md) to minimize manual activities when creating Synapse workspaces. Also, [integrate Synapse workspaces with Azure DevOps](../cicd/continuous-integration-delivery.md#set-up-a-release-pipeline-in-azure-devops) to build code versioning and automate publication.
+
+### Performance efficiency
+
+For performance efficiency, evaluate the following points.
+
+- Follow [performance guidance](/azure/data-factory/copy-activity-performance) and [optimization features](/azure/data-factory/copy-activity-performance-features) when working with the copy activity.
+- Choose optimized connectors for data transfer instead of generic connectors. For example, use PolyBase instead of bulk insert when moving data from Azure Data Lake Storage Gen2 (ADLS Gen2) to a dedicated SQL pool.
+- When creating a new Azure IR, set the region location as [auto-resolve](/azure/data-factory/concepts-integration-runtime#azure-ir-location) or select the same region as the data stores.
+- For self-hosted IR, choose the [Azure virtual machine (VM) size](/azure/data-factory/copy-activity-performance-features#self-hosted-integration-runtime-scalability) based on the integration requirements.
+- Choose a stable network connection, like [Azure ExpressRoute](/azure/expressroute/expressroute-introduction), for fast and consistent bandwidth.
+
+### Reliability
+
+When you execute a pipeline by using Azure IR, it's serverless in nature and so it provides resiliency out of the box. There's little for customers to manage. However, when a pipeline runs in a self-hosted IR, we recommend that you run it by using a [high availability configuration](/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory#high-availability-and-scalability) in Azure VMs. This configuration ensures integration pipelines aren't broken even when a VM goes offline. Also, we recommend that you use Azure ExpressRoute for a fast and reliable network connection between on-premises and Azure.
+
+### Security
+
+A secured data platform is one of the key requirements of every organization. You should thoroughly plan security for the entire platform rather than individual components. Here are some security guidelines for Azure Synapse pipeline solutions.
+
+- Secure data movement to the cloud by using [Azure Synapse private endpoints](https://techcommunity.microsoft.com/t5/azure-architecture-blog/understanding-azure-synapse-private-endpoints/ba-p/2281463).
+- Use Azure Active Directory (Azure AD) [managed identities](/azure/active-directory/managed-identities-azure-resources/overview) for authentication.
+- Use Azure role-based access control (RBAC) and [Synapse RBAC](../security/synapse-workspace-synapse-rbac.md) for authorization.
+- Store credentials, secrets, and keys in Azure Key Vault rather than in the pipeline. For more information, see [Use Azure Key Vault secrets in pipeline activities](/azure/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities).
+- Connect to on-premises resources via Azure ExpressRoute or VPN over private endpoints.
+- Enable the **Secure output** and **Secure input** options in pipeline activities when parameters store secrets or passwords.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-dedicated-sql-pool-design.md) in the *Azure Synapse success by design* series, learn how to evaluate your dedicated SQL pool design to identify issues and validate that it meets guidelines and requirements.
synapse-analytics Implementation Success Evaluate Dedicated Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-dedicated-sql-pool-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate dedicated SQL pool design"
+description: "Learn how to evaluate your dedicated SQL pool design to identify issues and validate that it meets guidelines and requirements."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate dedicated SQL pool design
++
+You should evaluate your [dedicated SQL pool](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md) design to identify issues and validate that it meets guidelines and requirements. By evaluating the design *before solution development begins*, you can avoid blockers and unexpected design changes. That way, you protect the project's timeline and budget.
+
+Synapse SQL has a scale-out architecture that distributes computational data processing across multiple nodes. Compute is separate from storage, which enables you to scale compute independently of the data in your system. For more information, see [Dedicated SQL pool (formerly SQL DW) architecture in Azure Synapse Analytics](../sql-data-warehouse/massively-parallel-processing-mpp-architecture.md).
+
+## Assessment analysis
+
+During the [assessment stage](implementation-success-assess-environment.md), you collected information about how the original system was deployed and details of the structures that were implemented. That information can now help you to identify gaps between what's implemented and what needs to be developed. For example, now's the time to consider the impact of designing round-robin tables instead of hash distributed tables, or the performance benefits of correctly using replicated tables.
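+
+For example, the table definitions below sketch the three distribution strategies to look for during this gap analysis. The table and column names are hypothetical; the right choice depends on table size, join patterns, and data skew.
+
+```sql
+-- Large fact table: hash-distribute on a column frequently used in joins
+CREATE TABLE dbo.FactSale
+(
+    SaleKey     BIGINT        NOT NULL,
+    CustomerKey INT           NOT NULL,
+    Amount      DECIMAL(18,2) NOT NULL
+)
+WITH
+(
+    DISTRIBUTION = HASH(CustomerKey),
+    CLUSTERED COLUMNSTORE INDEX
+);
+
+-- Small dimension table: replicate to every compute node to avoid data movement
+CREATE TABLE dbo.DimProduct
+(
+    ProductKey  INT           NOT NULL,
+    ProductName NVARCHAR(100) NOT NULL
+)
+WITH
+(
+    DISTRIBUTION = REPLICATE,
+    CLUSTERED COLUMNSTORE INDEX
+);
+
+-- Staging table: a round-robin heap is a reasonable default for fast loads
+CREATE TABLE dbo.StageSale
+(
+    SaleKey     BIGINT        NOT NULL,
+    CustomerKey INT           NOT NULL,
+    Amount      DECIMAL(18,2) NOT NULL
+)
+WITH
+(
+    DISTRIBUTION = ROUND_ROBIN,
+    HEAP
+);
+```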
+
+## Review the target architecture
+
+To successfully deploy a dedicated SQL pool, it's important to adopt an architecture that's aligned with business requirements. For more information, see [Data warehousing in Microsoft Azure](/azure/architecture/data-guide/relational-dat).
+
+## Migration path
+
+A migration project for Azure Synapse is similar to any other database migration. You should consider that there might be differences between the original system and Azure Synapse.
+
+Ensure that you have a clear migration path established for:
+
+- Database objects, scripts, and queries
+- Data transfer (export from source and transit to the cloud)
+- Initial data load into Azure Synapse
+- Logins and users
+- Data access control (row-level security)
+
+For more information, see [Migrate a data warehouse to a dedicated SQL pool in Azure Synapse Analytics](../migration-guides/migrate-to-synapse-analytics-guide.md).
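+
+For the data access control item in the list above, row-level security in a dedicated SQL pool typically follows the standard SQL Server pattern of a filter predicate bound to a security policy. The following minimal sketch uses hypothetical schema, table, and user names; adapt the predicate logic to your own model and confirm current feature support in your target pool.
+
+```sql
+CREATE SCHEMA Security;
+GO
+
+-- Hypothetical filter: each sales representative sees only their own rows
+CREATE FUNCTION Security.fn_salesFilter(@SalesRep AS NVARCHAR(128))
+    RETURNS TABLE
+    WITH SCHEMABINDING
+AS
+RETURN
+    SELECT 1 AS allowed
+    WHERE @SalesRep = USER_NAME() OR USER_NAME() = N'SalesManager';
+GO
+
+CREATE SECURITY POLICY SalesPolicy
+    ADD FILTER PREDICATE Security.fn_salesFilter(SalesRep) ON dbo.Sales
+    WITH (STATE = ON);
+```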
+
+## Feature gaps
+
+Determine whether the original system depends on features that aren't supported by Azure Synapse. Unsupported features in dedicated SQL pools include certain data types, like XML and spatial data types, and cursors.
+
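+A catalog query like the following sketch can help you locate columns on a SQL Server-based source that use types a dedicated SQL pool doesn't support. The type list shown is illustrative; check the linked documentation for the current list and adjust the query for your source system.
+
+```sql
+-- Find columns that use data types unsupported by dedicated SQL pools (sketch)
+SELECT s.name   AS schema_name,
+       tbl.name AS table_name,
+       c.name   AS column_name,
+       t.name   AS data_type
+FROM sys.columns AS c
+JOIN sys.tables  AS tbl ON c.object_id = tbl.object_id
+JOIN sys.schemas AS s   ON tbl.schema_id = s.schema_id
+JOIN sys.types   AS t   ON c.user_type_id = t.user_type_id
+WHERE t.name IN ('xml', 'geography', 'geometry', 'hierarchyid', 'image', 'text', 'ntext', 'sql_variant')
+ORDER BY s.name, tbl.name, c.name;
+```
+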
+For more information, see:
+
+- [Table data types for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics](../sql-data-warehouse/sql-data-warehouse-tables-data-types.md#identify-unsupported-data-types)
+- [Transact-SQL features supported in Azure Synapse SQL](../sql/overview-features.md)
+
+## Dedicated SQL pool testing
+
+As with any other project, you should conduct tests to ensure that your dedicated SQL pool delivers the required business needs. It's critical to test data quality, data integration, security, and performance.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-serverless-sql-pool-design.md) in the *Azure Synapse success by design* series, learn how to evaluate your serverless SQL pool design to identify issues and validate that it meets guidelines and requirements.
synapse-analytics Implementation Success Evaluate Project Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-project-plan.md
+
+ Title: "Synapse implementation success methodology: Evaluate project plan"
+description: "Learn how to evaluate your modern data warehouse project plan before the project starts."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate project plan
++
+In the lifecycle of the project, the most important and extensive planning is done *before implementation*. This article describes how to conduct a high-level review of your project plan. The aim is to ensure it contains critical artifacts and information to deliver a successful solution. It includes checklists of items that you should complete and approve before the project starts.
+
+A detailed review should follow the high-level project plan review. The detailed review should focus on the specific Azure Synapse components identified during the [assessment stage](implementation-success-assess-environment.md).
+
+## Evaluate the project plan
+
+Work through the following two high-level checklists, taking care to verify that each task aligns with the information gathered during the [assessment stage](implementation-success-assess-environment.md).
+
+First, ensure that your project plan defines the following points.
+
+> [!div class="checklist"]
+> - **The core resource team:** Assemble a group of key people that have expertise crucial to the project.
+> - **Scope:** Document how the project scope will be defined, verified, and measured, and how the work breakdown will be defined and assigned.
+> - **Schedule:** Define the time duration required to complete the project.
+> - **Cost:** Estimate costs for internal and external resources, including infrastructure, hardware, and software.
+
+Second, having defined and assigned the work breakdown, prepare the following artifacts.
+
+> [!div class="checklist"]
+> - **Migration plan:** Document the plan to migrate from your current system to Azure Synapse. Incorporate tasks for executing the migration within the project plan scope and schedule.
+> - **Success criteria:** Define the critical success criteria for stakeholders (or the project sponsor), including go and no-go criteria.
+> - **Quality assurance:** Define how to conduct code reviews, and the development, staging, and production promotion approval processes.
+> - **Test plan:** Define test cases and success criteria for unit, integration, and user testing, along with metrics to validate all deliverables. Incorporate tasks for developing and executing the test plans within the project plan scope and schedule.
+
+## Evaluate project plan detailed tasks
+
+Once the high-level project plan review is complete and approved, the next step is to drill down into each component of the project plan.
+
+Identify the project plan components that address each aspect of Azure Synapse as it's intended for use in your solution. Also, validate that the project plan accounts for all the effort and resources required to develop, test, deploy, and operate your solution by evaluating:
+
+- The workspace project plan.
+- The data integration project plan.
+- The dedicated SQL pool project plan.
+- The serverless SQL pool project plan.
+- The Spark pool project plan.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-solution-development-environment-design.md) in the *Azure Synapse success by design* series, learn how to evaluate the environments for your modern data warehouse project to support development, testing, and production.
synapse-analytics Implementation Success Evaluate Serverless Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-serverless-sql-pool-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate serverless SQL pool design"
+description: "Learn how to evaluate your serverless SQL pool design to identify issues and validate that it meets guidelines and requirements."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate serverless SQL pool design
++
+You should evaluate your [serverless SQL pool](../sql/on-demand-workspace-overview.md) design to identify issues and validate that it meets guidelines and requirements. By evaluating the design *before solution development begins*, you can avoid blockers and unexpected design changes. That way, you protect the project's timeline and budget.
+
+The architectural separation of storage and compute for modern data and analytics platforms and services is an established trend and a frequently used pattern. It provides cost savings and more flexibility by allowing independent, on-demand scaling of your storage and compute. Synapse SQL serverless extends this pattern by adding the capability to query your data lake data directly. There's no need to worry about compute management when using self-service types of workloads.
+
+## Fit gap analysis
+
+When planning to implement SQL serverless pools within Azure Synapse, you first need to ensure serverless pools are the right fit for your workloads. You should consider operational excellence, performance efficiency, reliability, and security.
+
+### Operational excellence
+
+For operational excellence, evaluate the following points.
+
+- **Solution development environment:** Within this methodology, there's an evaluation of the [solution development environment](implementation-success-evaluate-solution-development-environment-design.md). Identify how the environments (development, test, and production) are designed to support solution development. Commonly, you'll find a production environment and one or more non-production environments (for development and test). You should find Synapse workspaces in all of the environments. In most cases, you'll be obliged to segregate your production and development/test users and workloads.
+- **Synapse workspace design:** Within this methodology, there's an evaluation of the [Synapse workspace design](implementation-success-evaluate-workspace-design.md). Identify how the workspaces have been designed for your solution. Become familiar with the design and know whether the solution will use a single workspace or whether multiple workspaces form part of the solution. Know why a single or multiple workspace design was chosen. A multi-workspace design is often chosen to enforce strict security boundaries.
+- **Deployment:** SQL serverless is available on-demand with every Synapse workspace, so it doesn't require any special deployment actions. Check regional proximity of the service and that of the Azure Data Lake Storage Gen2 (ADLS Gen2) account that it's connected to.
+- **Monitoring:** Check whether built-in monitoring is sufficient and whether any external services need to be put in place to store historical log data. Log data allows you to analyze changes in performance and to define alerts or triggered actions for specific circumstances.
+
+### Performance efficiency
+
+Unlike traditional database engines, SQL serverless doesn't rely on its own optimized storage layer. For that reason, its performance is heavily dependent on how data is organized in ADLS Gen2. For performance efficiency, evaluate the following points.
+
+- **Data ingestion:** Review how data is stored in the data lake. File sizes, the number of files, and folder structure all have an impact on performance. Keep in mind that while some file sizes might work for SQL serverless, they may impose issues for efficient processing or consumption by other engines or applications. You'll need to evaluate the data storage design and validate it against all of the data consumers, including SQL serverless and any other data tools that form part of your solution.
+- **Data placement:** Evaluate whether your design has unified and defined common patterns for data placement. Ensure that directory branching can support your security requirements. There are a few common patterns that can help you keep your time series data organized. Whatever your choice, ensure that it also works with other engines and workloads. Also, validate whether it supports partition auto-discovery for Spark applications and external tables.
+- **Data formats:** In most cases, SQL serverless offers the best performance and compatibility when you use the Parquet format. Verify your performance and compatibility requirements, because while Parquet improves performance - thanks to better compression and reduced IO (only the columns needed for analysis are read) - it requires more compute resources. Also, because some source systems don't natively support Parquet as an export format, it could lead to more transformation steps in your pipelines and/or dependencies in your overall architecture.
+- **Exploration:** Every industry is different. In many cases, however, there are common data access patterns found in the most frequently run queries. Patterns typically involve filtering and aggregations by dates, categories, or geographic regions. Identify your most common filtering criteria, and relate them to how much data is read or discarded by the most frequently run queries. Validate whether the information on the data lake is organized to favor your exploration requirements and expectations. For the queries identified in your design and in your assessment, see whether you can eliminate unnecessary partitions in your OPENROWSET path parameter, or - if there are external tables - whether creating more indexes can help. See the example that follows this list.
+
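+The following sketch illustrates the partition elimination idea mentioned above. It assumes a date-partitioned folder layout, a serverless SQL endpoint, SQL authentication over `pyodbc`, and a login that can access the underlying storage; the account, container, and credential names are placeholders.
+
+```python
+import pyodbc
+
+# Serverless SQL endpoint of the workspace (placeholder names throughout).
+conn = pyodbc.connect(
+    "DRIVER={ODBC Driver 18 for SQL Server};"
+    "SERVER=<workspace>-ondemand.sql.azuresynapse.net;DATABASE=master;"
+    "UID=<user>;PWD=<password>"
+)
+
+# Wildcards keep the scan narrow, and filepath(1) exposes the first wildcard (the year folder)
+# so the engine can skip partitions that don't match the predicate.
+query = """
+SELECT COUNT_BIG(*) AS row_count
+FROM OPENROWSET(
+        BULK 'https://<account>.dfs.core.windows.net/<container>/sales/year=*/month=*/*.parquet',
+        FORMAT = 'PARQUET'
+     ) AS sales
+WHERE sales.filepath(1) = '2024'
+  AND sales.filepath(2) IN ('01', '02', '03')
+"""
+print(conn.execute(query).fetchval())
+```
+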
+### Reliability
+
+For reliability, evaluate the following points.
+
+- **Availability:** Validate any availability requirements that were identified during the [assessment stage](implementation-success-assess-environment.md). While there aren't any specific SLAs for SQL serverless, there's a 30-minute timeout for query execution. Identify the longest running queries from your assessment and validate them against your serverless SQL design. A 30-minute timeout could break the expectations for your workload and appear as a service problem.
+- **Consistency:** SQL serverless is designed primarily for read workloads. So, validate whether all consistency checks have been performed during the data lake data provisioning and formation process. Keep abreast of new capabilities, like the [Delta Lake](../spark/apache-spark-what-is-delta-lake.md) open-source storage layer, which provides support for ACID (atomicity, consistency, isolation, and durability) guarantees for transactions. This capability allows you to implement effective [lambda or kappa architectures](/azure/architecture/data-guide/big-data/) to support both streaming and batch use cases. Be sure to evaluate your design for opportunities to apply new capabilities, but not at the expense of your project's timeline or cost.
+- **Backup:** Review any disaster recovery requirements that were identified during the assessment. Validate them against your SQL serverless design for recovery. SQL serverless doesn't have its own storage layer, so snapshots and backup copies of your data must be handled by the data store it accesses, which is external (ADLS Gen2). Review the recovery design in your project for these datasets.
+
+### Security
+
+Organization of your data is important for building flexible security foundations. In most cases, different processes and users will require different permissions and access to specific sub areas of your data lake or logical data warehouse.
+
+For security, evaluate the following points.
+
+- **Data storage:** Using the information gathered during the [assessment stage](implementation-success-assess-environment.md), identify whether typical *Raw*, *Stage*, and *Curated* data lake areas need to be placed on the same storage account instead of independent storage accounts. The latter might result in more flexibility in terms of roles and permissions. It can also add more input/output operations per second (IOPS) capacity that might be needed if your architecture must support heavy and simultaneous read/write workloads (like real-time or IoT scenarios). Validate whether you need to segregate further by keeping your sandboxed and master data areas on separate storage accounts. Most users won't need to update or delete data, so they don't need write permissions to the data lake, except for sandboxed and private areas.
+- From your assessment information, identify whether any requirements rely on security features like [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15&viewFallbackFrom=azure-sqldw-latest&preserve-view=true), [Dynamic data masking](/azure/azure-sql/database/dynamic-data-masking-overview?view=azuresql&preserve-view=true) or [Row-level security](/sql/relational-databases/security/row-level-security?view=azure-sqldw-latest&preserve-view=true). Validate the availability of these features in specific scenarios, like when used with the OPENROWSET function. Anticipate potential workarounds that may be required.
+- From your assessment information, identify the best authentication methods. Consider Azure Active Directory (Azure AD) service principals, shared access signatures (SAS), and when and how authentication pass-through can be used and integrated into the customer's exploration tool of choice. Evaluate the design and validate that the best authentication method is used as part of the design.
+
+### Other considerations
+
+Review your design and check whether you have put in place [best practices and recommendations](../sql/best-practices-serverless-sql-pool.md). Give special attention to filter optimization and collation to ensure that predicate pushdown works properly.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-spark-pool-design.md) in the *Azure Synapse success by design* series, learn how to evaluate your Spark pool design to identify issues and validate that it meets guidelines and requirements.
synapse-analytics Implementation Success Evaluate Solution Development Environment Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-solution-development-environment-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate solution development environment design"
+description: "Learn how to set up multiple environments for your modern data warehouse project to support development, testing, and production."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate solution development environment design
++
+Solution development and the environment within which it's performed is key to the success of your project. Regardless of your selected project methodology (like waterfall, Agile, or Scrum), you should set up multiple environments to support development, testing, and production. You should also define clear processes for promoting changes between environments.
+
+Setting up a modern data warehouse environment for both production and pre-production use can be complex. Keep in mind that one of the key design decisions is automation. Automation helps increase productivity while minimizing the risk of errors. Further, your environments should support future agile development, including the addition of new workloads, like data science or real-time. During the design review, produce a solution development environment design that will support your solution not only for the current project but also for ongoing support and development of your solution.
+
+## Solution development environment design 
+
+The environment design should include the production environment, which hosts the production solution, and at least one non-production environment. Most environments contain two non-production environments: one for development and another for testing, Quality Assurance (QA), and User Acceptance Testing (UAT). Typically, environments are hosted in separate Azure subscriptions. Consider creating a production subscription and a non-production subscription. This separation will provide a clear security boundary and delineation between production and non-production.
+
+Ideally, you should establish three environments.
+
+- **Development:** The environment within which your data and analytics solutions are built. Determine whether to provide sandboxes for developers. Sandboxes can allow developers to make and test their changes in isolation, while a shared development environment will host integrated changes from the entire development team.
+- **Test/QA/UAT:** The production-like environment for testing deployments prior to their release to production.
+- **Production:** The final production environment.
+
+### Synapse workspaces
+
+For each Synapse workspace in your solution, the environment should include a production workspace and at least one non-production workspace for development and test/QA/UAT. Use the same name for all pools and artifacts across environments. Consistent naming will ease the promotion of workspaces to other environments.
+
+Promoting a workspace to another workspace is a two-part process:
+
+1. Use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/overview.md) to create or update workspace resources. See the example that follows these steps.
+1. Migrate artifacts like SQL scripts, notebooks, Spark job definitions, pipelines, datasets, and data flows by using [Azure Synapse continuous integration and delivery (CI/CD) tools in Azure DevOps or on GitHub](../cicd/continuous-integration-delivery.md).
+
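+As a minimal sketch of the first step, the following Python snippet deploys an exported workspace ARM template to a target resource group by using the `azure-identity` and `azure-mgmt-resource` packages. The subscription, resource group, template file name, and `workspaceName` parameter are placeholders and assumptions, not a prescribed template structure.
+
+```python
+import json
+
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.resource import ResourceManagementClient
+from azure.mgmt.resource.resources.models import Deployment, DeploymentMode, DeploymentProperties
+
+# Placeholders: subscription ID and target resource group.
+client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# A workspace ARM template exported from the source environment (placeholder file name).
+with open("synapse-workspace-template.json") as f:
+    template = json.load(f)
+
+poller = client.deployments.begin_create_or_update(
+    "<target-resource-group>",
+    "synapse-workspace-promotion",
+    Deployment(
+        properties=DeploymentProperties(
+            mode=DeploymentMode.INCREMENTAL,
+            template=template,
+            # "workspaceName" is a hypothetical template parameter; use your template's own parameters.
+            parameters={"workspaceName": {"value": "<target-workspace>"}},
+        )
+    ),
+)
+poller.result()  # block until the deployment completes
+```
+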
+### Azure DevOps or GitHub
+
+Ensure that integration with Azure DevOps or GitHub is properly set up. Design a repeatable process that releases changes across development, Test/QA/UAT, and production environments. 
+
+>[!IMPORTANT]
+> We recommend that sensitive configuration data always be stored securely in [Azure Key Vault](/azure/key-vault/general/basic-concepts.md). Use Azure Key Vault to maintain a central, secure location for sensitive configuration data, like database connection strings. That way, appropriate services can access configuration data from within each environment.
+
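+As a minimal sketch of that pattern, the following Python snippet reads a connection string from Key Vault by using the `azure-identity` and `azure-keyvault-secrets` packages. The vault and secret names are placeholders; each environment would point at its own vault.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# Placeholder vault and secret names; assumes an identity with "get" permission on secrets.
+client = SecretClient(
+    vault_url="https://<your-key-vault>.vault.azure.net",
+    credential=DefaultAzureCredential(),
+)
+
+# Each environment (dev, test, prod) points at its own vault, so the same code
+# resolves the right connection string wherever it runs.
+connection_string = client.get_secret("sql-connection-string").value
+```
+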
+## Next steps
+
+In the [next article](implementation-success-evaluate-team-skill-sets.md) in the *Azure Synapse success by design* series, learn how to evaluate your team of skilled resources that will implement your Azure Synapse solution.
synapse-analytics Implementation Success Evaluate Spark Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-spark-pool-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate Spark pool design"
+description: "Learn how to evaluate your Spark pool design to identify issues and validate that it meets guidelines and requirements."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate Spark pool design
++
+You should evaluate your [Apache Spark pool](../spark/apache-spark-overview.md) design to identify issues and validate that it meets guidelines and requirements. By evaluating the design *before solution development begins*, you can avoid blockers and unexpected design changes. That way, you protect the project's timeline and budget.
+
+Apache Spark in Synapse brings Apache Spark parallel data processing to Azure Synapse Analytics. This evaluation provides guidance on when Apache Spark in Azure Synapse is - or isn't - the right fit for your workload. It describes points to consider when you're evaluating your solution design elements that incorporate Spark pools.
+
+## Fit gap analysis
+
+When planning to implement Spark pools with Azure Synapse, first ensure they're the best fit for your workload.
+
+Consider the following points.
+
+- Does your workload require data engineering/data preparation? (An example follows this list.)
+ - Apache Spark works best for workloads that require:
+ - Data cleaning.
+ - Transforming semi-structured data, like XML into relational.
+ - Complex free-text transformation, like fuzzy matching or natural language processing (NLP).
+ - Data preparation for machine learning (ML).
+- Does your workload for data engineering/data preparation involve complex or simple transformations? And, are you looking for a low-code/no-code approach?
+ - For simple transformations, like removing columns, changing column data types, or joining datasets, consider creating an Azure Synapse pipeline by using a data flow activity.
+ - Data flow activities provide a low-code/no-code approach to prepare your data.
+- Does your workload require ML on big data?
+ - Apache Spark works well for large datasets that will be used for ML. If you're using small datasets, consider using [Azure Machine Learning](../../machine-learning/overview-what-is-azure-ml.md) as the compute service.
+- Do you plan to perform data exploration or ad hoc query analysis on big data?
+ - Apache Spark in Azure Synapse provides Python, Scala, and Spark SQL support for data exploration and ad hoc analysis on big data.
+- Do you have a current Spark/Hadoop workload and do you need a unified big data platform?
+ - Azure Synapse provides a unified analytical platform for working with big data. There are Spark and SQL serverless pools for ad hoc queries, and the dedicated SQL pool for reporting and serving data.
+ - Moving a Spark/Hadoop workload from on-premises (or another cloud environment) may involve some refactoring that you should take into consideration.
+ - If you're looking for a lift-and-shift approach for your Apache Spark/Hadoop big data environment from on-premises to the cloud, and you need to meet a strict data engineering service level agreement (SLA), consider using [Azure HDInsight](../../hdinsight/hdinsight-overview.md).
+
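+The following PySpark example is a minimal sketch of the kind of data preparation work described in the list above: dropping bad rows, normalizing types, and writing curated output. The storage paths and column names are placeholders; in a Synapse notebook, the Spark session already exists as `spark`.
+
+```python
+from pyspark.sql import SparkSession
+from pyspark.sql import functions as F
+
+# getOrCreate() keeps the sketch runnable outside a Synapse notebook, too.
+spark = SparkSession.builder.getOrCreate()
+
+raw = spark.read.json("abfss://<container>@<account>.dfs.core.windows.net/raw/orders/")
+
+# Typical preparation steps: drop incomplete rows, normalize types, derive clean columns.
+prepared = (
+    raw.dropna(subset=["order_id", "order_date"])
+       .withColumn("order_date", F.to_date("order_date"))
+       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
+       .dropDuplicates(["order_id"])
+)
+
+prepared.write.mode("overwrite").parquet(
+    "abfss://<container>@<account>.dfs.core.windows.net/curated/orders/"
+)
+```
+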
+## Architecture considerations
+
+To ensure that your Apache Spark pool meets your requirements for operational excellence, performance, reliability, and security, there are key areas to validate in your architecture.
+
+### Operational excellence
+
+For operational excellence, evaluate the following points.
+
+- **Environment:** When configuring your environment, design your Spark pool to take advantage of features such as [autoscale and dynamic allocation](../spark/apache-spark-autoscale.md). Also, to reduce costs, consider enabling the [automatic pause](../spark/apache-spark-pool-configurations.md#automatic-pause) feature.
+- **Package management:** Determine whether required Apache Spark libraries will be used at a workspace, pool, or session level. For more information, see [Manage libraries for Apache Spark in Azure Synapse Analytics](../spark/apache-spark-azure-portal-add-libraries.md).
+- **Monitoring:** Apache Spark in Azure Synapse provides built-in monitoring of [Spark pools](../monitoring/how-to-monitor-spark-pools.md) and [applications](../monitoring/apache-spark-applications.md) with the creation of each Spark session. Also consider implementing application monitoring with [Azure Log Analytics](../spark/apache-spark-azure-log-analytics.md) or [Prometheus and Grafana](../spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md), which you can use to visualize metrics and logs.
+
+### Performance efficiency
+
+For performance efficiency, evaluate the following points.
+
+- **File size and file type:** File size and the number of files have an impact on performance. Design the architecture to ensure that the file types are conducive to native ingestion with Apache Spark. Also, lean toward fewer large files instead of many small files.
+- **Partitioning:** Identify whether partitioning at the folder and/or file level will be implemented for your workload. *Folder partitions* limit the amount of data to search and read. *File partitions* reduce the amount of data to be searched inside the file, but they only apply to specific file formats, so consider them in the initial architecture. See the example that follows this list.
+
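+As a sketch of folder partitioning, the following PySpark snippet writes data partitioned by year and month and then reads back only one month. The paths and column names are placeholders.
+
+```python
+from pyspark.sql import SparkSession
+from pyspark.sql import functions as F
+
+spark = SparkSession.builder.getOrCreate()
+events = spark.read.parquet("abfss://<container>@<account>.dfs.core.windows.net/raw/events/")
+
+# Folder partitioning: one folder per year/month so readers can skip irrelevant data.
+(events
+    .withColumn("year", F.year("event_time"))
+    .withColumn("month", F.month("event_time"))
+    .write.partitionBy("year", "month")
+    .mode("overwrite")
+    .parquet("abfss://<container>@<account>.dfs.core.windows.net/curated/events/"))
+
+# A filter on the partition columns is pushed down, so only matching folders are read.
+march = spark.read.parquet(
+    "abfss://<container>@<account>.dfs.core.windows.net/curated/events/"
+).where("year = 2024 AND month = 3")
+```
+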
+### Reliability
+
+For reliability, evaluate the following points.
+
+- **Availability:** Spark pools have a start time of three to four minutes. It could take longer if there are many libraries to install. When designing batch vs. streaming workloads, identify the SLA for executing the job from your assessment information and determine which architecture best meets your needs. Also, take into consideration that each job execution creates a new Spark pool cluster.
+- **Checkpointing:** Apache Spark streaming has a built-in checkpointing mechanism. Checkpointing allows your stream to recover from the last processed entry should there be a failure on a node in your pool. See the example that follows this list.
+
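+The following Spark Structured Streaming sketch shows where the checkpoint location fits. The paths and schema are placeholders; the key point is that the `checkpointLocation` folder stores offsets and state so the stream can resume after a failure.
+
+```python
+from pyspark.sql import SparkSession
+
+spark = SparkSession.builder.getOrCreate()
+
+# Read newly arriving files from a landing folder (placeholder path and schema).
+stream = (
+    spark.readStream
+         .schema("device STRING, reading DOUBLE, event_time TIMESTAMP")
+         .json("abfss://<container>@<account>.dfs.core.windows.net/landing/telemetry/")
+)
+
+# The checkpoint folder records progress, which is how the stream recovers after a node failure.
+query = (
+    stream.writeStream
+          .format("parquet")
+          .option("path", "abfss://<container>@<account>.dfs.core.windows.net/curated/telemetry/")
+          .option("checkpointLocation", "abfss://<container>@<account>.dfs.core.windows.net/checkpoints/telemetry/")
+          .start()
+)
+```
+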
+### Security
+
+For security, evaluate the following points.
+
+- **Data access:** Data access must be considered for the Azure Data Lake Storage (ADLS) account that's attached to the Synapse workspace. In addition, determine the security levels required to access any data that isn't within the Azure Synapse environment. Refer to the information you collected during the [assessment stage](implementation-success-assess-environment.md).
+- **Networking:** Review the networking information and requirements gathered during your assessment. If the design involves a managed virtual network with Azure Synapse, consider the implications this requirement will have on Apache Spark in Azure Synapse. One implication is the inability to use Spark SQL while accessing data.
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-project-plan.md) in the *Azure Synapse success by design* series, learn how to evaluate your modern data warehouse project plan before the project starts.
+
+For more information on best practices, see [Apache Spark for Azure Synapse Guidance](https://azuresynapsestorage.blob.core.windows.net/customersuccess/Guidance%20Video%20Series/EGUI_Synapse_Spark_Guidance.pdf).
synapse-analytics Implementation Success Evaluate Team Skill Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-team-skill-sets.md
+
+ Title: "Synapse implementation success methodology: Evaluate team skill sets"
+description: "Learn how to evaluate your team of skilled resources that will implement your Azure Synapse solution."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate team skill sets
++
+Solution development requires a team comprising individuals with many different skills. It's important for the success of your solution that your team has the necessary skills to successfully complete their assigned tasks. This evaluation takes an honest and critical look at the skill level of your project resources, and it provides you with a list of roles that are often required during the implementation of an Azure Synapse solution. Your team needs to possess relevant experience and skills to complete their assigned project tasks within the expected time frame.
+
+## Microsoft learning level definitions
+
+This article uses the Microsoft standard level definitions for describing learning levels.
+
+| Level | Description |
+|:-|:-|
+| 100 | Assumes little or no expertise with the topic, and covers topic concepts, functions, features, and benefits. |
+| 200 | Assumes 100-level knowledge and provides specific details about the topic. |
+| 300 | *Advanced material.* Assumes 200-level knowledge, in-depth understanding of features in a real-world environment, and strong coding skills. Provides a detailed technical overview of a subset of product/technology features, covering architecture, performance, migration, deployment, and development. |
+| 400 | *Expert material.* Assumes a deep level of technical knowledge and experience, and a detailed, thorough understanding of the topic. Provides expert-to-expert interaction and coverage of specialized topics. |
+
+## Roles, resources, and readiness
+
+Successfully delivering an Azure Synapse solution involves many different roles and skill sets. This article describes the roles commonly required to implement a successful project. Not all of these roles will be required for all projects, and not all of these roles will be required for the entire duration of the project. However, these roles will be required to complete some critical project tasks. You should evaluate the skill level of the individuals executing tasks to ensure their success in completing their job.
+
+Refer to your [project plan](implementation-success-evaluate-project-plan.md) and verify that these resources and roles were identified. Also, check to see if your project plan identifies other resources and roles. In many cases, you may find that individuals belong to more than one role. For example, the Azure administrator could also be your Azure network administrator. It's also possible that a role in your organization is split between multiple individuals. For example, the Synapse administrator doesn't get involved in Synapse SQL security. In this case, adjust your evaluation accordingly.
+
+Evaluate the following points.
+
+- Identify the roles that will be required by your solution implementation.
+- Identify the specific individuals in your project that will fulfill each role.
+- Identify the specific project tasks that will be performed by each individual.
+- Assign a [learning level](#microsoft-learning-level-definitions) to each individual for their tasks and roles.
+
+Typically, a successful implementation requires that each individual has at least a level-300 proficiency for the tasks they'll perform. It's highly recommended that individuals at level-200 (or below) be provided with guidance and instruction to raise their level of understanding prior to beginning their project tasks. In this case, involve a level-300 (or above) individual to mentor and review. It's recommended that you adjust the project plan timeline and effort estimates to factor in learning new skills.
+
+> [!NOTE]
+> We recommend you align your roles with the built-in roles. There are two sets of built-in roles: [RBAC roles for Azure Synapse](../security/synapse-workspace-synapse-rbac-roles.md) and [RBAC roles built into Azure](../../role-based-access-control/built-in-roles.md). These two sets of built-in roles and permissions are independent.
+
+### Azure administrator
+
+The *Azure administrator* manages administrative aspects of Azure. They're responsible for subscriptions, region identification, resource groups, monitoring, and portal access. They also provision resources, like resource groups, storage accounts, Azure Data Factory (ADF), Microsoft Purview, and more.
+
+### Security administrator
+
+The *security administrator* must have local knowledge of the existing security landscape and requirements. This role collaborates with the [Synapse administrator](#synapse-administrator), [Synapse database administrator](#synapse-database-administrator), [Synapse Spark administrator](#synapse-spark-administrator), and other roles to set up security requirements. The security administrator could also be an Azure Active Directory (Azure AD) administrator.
+
+### Network administrator
+
+The *network administrator* must have local knowledge of the existing networking landscape and requirements. This role requires Azure networking skills and Synapse networking skills.
+
+### Synapse administrator
+
+The *Synapse administrator* is responsible for the administration of the overall Azure Synapse environment. This role is responsible for the availability and scale of workspace resources, data lake administration, analytics runtimes, and workspace administration and monitoring. This role works closely with all other roles to ensure access to Azure Synapse, the availability of analytics services, and sufficient scale. Other responsibilities include:
+
+- Provision Synapse workspaces.
+- Set up Azure Synapse networking and security requirements.
+- Monitor Synapse workspace activity.
+
+### Synapse database administrator
+
+The *Synapse database administrator* is responsible for the design, implementation, maintenance, and operational aspects of the SQL pools (serverless and dedicated). This role is responsible for the overall availability, consistent performance, and optimizations of the SQL pools. This role is also responsible for managing the security of the data in the databases, granting privileges over the data, and granting or denying user access. Other responsibilities include:
+
+- Perform various dedicated SQL pool administration functions, like provisioning, scale, pauses, resumes, restores, workload management, monitoring, and others.
+- Perform various serverless SQL pool administration functions, like securing, monitoring, and others.
+- Set up SQL pool database security.
+- Performance tuning and troubleshooting.
+
+### Synapse Spark administrator
+
+The *Synapse Spark administrator* is responsible for the design, implementation, maintenance, and operational aspects of the Spark pools. This role is responsible for the overall availability, consistent performance, and optimizations of the Spark pools. This role is also responsible for managing the security of the data, granting privileges over the data, and granting or denying user access. Other responsibilities include:
+
+- Perform various dedicated Spark pool administration functions, like provisioning, monitoring, and others.
+- Set up Spark pool data security.
+- Notebook troubleshooting and performance.
+- Pipeline Spark execution troubleshooting and performance.
+
+### Synapse SQL pool database developer
+
+The *Synapse SQL pool database developer* is responsible for database design and development. For dedicated SQL pools, responsibilities include table structure and indexing, developing database objects, and schema design. For serverless SQL pools, responsibilities include external tables, views, and schema design. Other responsibilities include:
+
+- Logical and physical database design.
+- Table design, including distribution, indexing, and partitioning.
+- Programming object design and development, including stored procedures and functions.
+- Design and development of other performance optimizations, including materialized views, workload management, and more.
+- Design and implementation of [data protection](security-white-paper-data-protection.md), including data encryption.
+- Design and implementation of [access control](security-white-paper-access-control.md), including object-level security, row-level security, column-level security, dynamic data masking, and Synapse role-based access control.
+- Monitoring, auditing, performance tuning and troubleshooting.
+
+### Spark developer
+
+The *Spark developer* is responsible for creating notebooks and executing Spark processing by using Spark pools.
+
+### Data integration administrator
+
+The *Data integration administrator* is responsible for setting up and securing data integration by using Synapse pipelines, ADF, or third-party integration tools, and for performing all configuration and security functions to support the data integration tools.
+
+For Synapse pipelines and ADF, other responsibilities include setting up the integration runtime (IR), self-hosted integration runtime (SHIR), and/or SSIS integration runtime (SSIS-IR). Knowledge of virtual machine provisioning - on-premises or in Azure - may be required.
+
+### Data integration developer
+
+The *Data integration developer* is responsible for developing ETL/ELT and other data integration processes by using the solution's selected data integration tools.
+
+### Data consumption tools administrator
+
+The *Data consumption tools administrator* is responsible for the data consumption tools. Tools can include [Microsoft Power BI](https://powerbi.microsoft.com/), Microsoft Excel, Tableau, and others. The administrator of each tool will need to set up permissions to grant access to data in Azure Synapse.
+
+### Data engineer
+
+The *Data engineer* role is responsible for implementing data-related artifacts, including data ingestion pipelines, cleansing and transformation activities, and data stores for analytical workloads. It involves using a wide range of data platform technologies, including relational and non-relational databases, file stores, and data streams.
+
+Data engineers are responsible for ensuring that data privacy is maintained within the cloud and across on-premises and cloud data stores. They also own the management and monitoring of data stores and data pipelines to ensure that data loads perform as expected.
+
+### Data scientist
+
+The *Data scientist* derives value and insights from data. Data scientists find innovative ways to work with data and help teams achieve a rapid return on investment (ROI) on analytics efforts. They work with data curation and advanced search, matching, and recommendation algorithms. Data scientists need access to the highest quality data and substantial amounts of computing resources to extract deep insights.
+
+### Data analyst
+
+The *Data analyst* enables businesses to maximize the value of their data assets. They transform raw data into relevant insights based on identified business requirements. Data analysts are responsible for designing and building scalable data models, cleaning and transforming data, and presenting advanced analytics in reports and visualizations.
+
+### Azure DevOps engineer
+
+The *Azure DevOps engineer* is responsible for designing and implementing strategies for collaboration, code, infrastructure, source control, security, compliance, continuous integration, testing, delivery, and monitoring of an Azure Synapse project.
+
+## Learning resources and certifications
+
+If you're interested to learn about Microsoft Certifications that may help assess your team's readiness, browse the available certifications for [Azure Synapse Analytics](/learn/certifications/browse/?expanded=azure&products=azure-synapse-analytics).
+
+To complete online, self-paced training, browse the available learning paths and modules for [Azure Synapse Analytics](/learn/browse/?filter-products=synapse&products=azure-synapse-analytics).
+
+## Next steps
+
+In the [next article](implementation-success-perform-operational-readiness-review.md) in the *Azure Synapse success by design* series, learn how to perform an operational readiness review to evaluate your solution for its preparedness to provide optimal services to users.
synapse-analytics Implementation Success Evaluate Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-workspace-design.md
+
+ Title: "Synapse implementation success methodology: Evaluate workspace design"
+description: "Learn how to evaluate the Synapse workspace design and validate that it meets guidelines and requirements."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Evaluate workspace design
++
+A Synapse workspace provides a unified graphical user experience that stitches together your analytical and data processing engines, data lakes, databases, tables, datasets, and reporting artifacts along with code and process orchestration. Considering the number of technologies and services that are integrated into a Synapse workspace, ensure that the key components are included in your design.
+
+## Synapse workspace design review
+
+Identify whether your solution design involves one Synapse workspace or multiple workspaces. Determine the drivers of this design. While there might be different reasons, in most cases the reason for multiple workspaces is either security segregation or billing segregation. When determining the number of workspaces and database boundaries, keep in mind that there's a limit of 20 workspaces per subscription.
+
+Identify which elements or services within each workspace need to be shared and with which resources. Resources can include data lakes, integration runtimes (IRs), metadata or configurations, and code. Determine why this particular design was chosen in terms of potential synergies. Ask yourself whether these synergies justify the extra cost and management overhead.
+
+## Data lake design review
+
+We recommend that the data lake (if part of your solution) be properly tiered. You should divide your data lake into three major areas that relate to *Bronze*, *Silver*, and *Gold* datasets. Bronze - or the raw layer - might reside on its own separate storage account because it has stricter access controls due to unmasked sensitive data that it might store.
+
+## Security design review
+
+Review the security design for the workspace and compare it with the information you gathered during the assessment. Ensure all of the requirements are met, and all of the constraints have been taken into account. For ease of management, we recommend that users be organized into groups with appropriate permission profiles: you can simplify access control by using security groups that align with roles. That way, network administrators can add or remove users from appropriate security groups to manage access.
+
+Serverless SQL pools and Apache Spark tables store their data in an Azure Data Lake Gen2 (ADLS Gen2) container that's associated with the workspace. User-installed Apache Spark libraries are also managed in this same storage account. To enable these use cases, both users and the workspace managed service identity (MSI) must be added to the **Storage Blob Data Contributor** role of the ADLS Gen2 storage container. Verify this requirement against your security requirements.
+
+Dedicated SQL pools provide a rich set of security features to encrypt and mask sensitive data. Both dedicated and serverless SQL pools enable the full surface area of SQL Server permissions including built-in roles, user-defined roles, SQL authentication, and Azure Active Directory (Azure AD) authentication. Review the security design for your solution's dedicated SQL pool and serverless SQL pool access and data.
+
+Review the security plan for your data lake and all the ADLS Gen2 storage accounts (and others) that will form part of your Azure Synapse Analytics solution. ADLS Gen2 storage isn't itself a compute engine and so it doesn't have a built-in ability to selectively mask data attributes. You can apply ADLS Gen2 permissions at the storage account or container level by using role-based access control (RBAC) and/or at the folder or file level by using access control lists (ACLs). Review the design carefully and strive to avoid unnecessary complexity.
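+
+As a small example of the ACL side of that combination, the following Python sketch grants a security group read and execute access to a single folder by using the `azure-storage-file-datalake` package. The account, container, folder path, and group object ID are placeholders. Note that `set_access_control` replaces the folder's ACL, so the full ACL string (including the base entries) is supplied.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.filedatalake import DataLakeServiceClient
+
+# Placeholders: storage account, container, folder path, and the security group's object ID.
+service = DataLakeServiceClient(
+    account_url="https://<account>.dfs.core.windows.net",
+    credential=DefaultAzureCredential(),
+)
+directory = service.get_file_system_client("<container>").get_directory_client("curated/sales")
+
+# Base entries plus a named group entry that grants read and execute on this folder only;
+# broader access is handled with RBAC at the account or container level.
+directory.set_access_control(
+    acl="user::rwx,group::r-x,other::---,group:<group-object-id>:r-x"
+)
+```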
+
+Here are some points to consider for the security design.
+
+- Make sure Azure AD setup requirements are included in the design.
+- Check for cross-tenant scenarios. Such issues may arise because some data is in another Azure tenant, or it needs to move to another tenant, or it needs to be accessed by users from another tenant. Ensure these scenarios are considered in your design.
+- What are the roles for each workspace? How will they use the workspace?
+- How is the security designed within the workspace?
+ - Who can view all scripts, notebooks, and pipelines?
+ - Who can execute scripts and pipelines?
+ - Who can create/pause/resume SQL and Spark pools?
+ - Who can publish changes to the workspace?
+ - Who can commit changes to source control?
+- Will pipelines access data by using stored credentials or the workspace managed identity?
+- Do users have the appropriate access to the data lake to browse the data in Synapse Studio?
+- Is the data lake properly secured by using the appropriate combination of RBAC and ACLs?
+- Have the SQL pool user permissions been correctly set for each role (data scientist, developer, administrator, business user, and others)?
+
+## Networking design review
+
+Here are some points to consider for the network design.
+
+- Is connectivity designed between all the resources?
+- What is the networking mechanism to be used (Azure ExpressRoute, public Internet, or private endpoints)?
+- Do you need to be able to securely connect to Synapse Studio?
+- Has data exfiltration been taken into consideration?
+- Do you need to connect to on-premises data sources?
+- Do you need to connect to other cloud data sources or compute engines, such as Azure Machine Learning?
+- Have Azure networking components, like network security groups (NSGs), been reviewed for proper connectivity and data movement?
+- Has integration with the private DNS zones been taken into consideration?
+- Do you need to be able to browse the data lake from within Synapse Studio or simply query data in the data lake with serverless SQL or PolyBase?
+
+Finally, identify all of your data consumers and verify that their connectivity is accounted for in the design. Check that network and security controls allow your service to access required on-premises sources and that its authentication protocols and mechanisms are supported. In some scenarios, you might need to have more than one self-hosted IR or data gateway for SaaS solutions, like Microsoft Power BI.
+
+## Monitoring design review
+
+Review the design of the monitoring of the Azure Synapse components to ensure they meet the requirements and expectations identified during the assessment. Verify that monitoring of resources and data access has been designed, and that it identifies each monitoring requirement. A robust monitoring solution should be put in place as part of the first deployment to production. That way, failures can be identified, diagnosed, and addressed in a timely manner. Aside from the base infrastructure and pipeline runs, data should also be monitored. Depending on the Azure Synapse components in use, identify the monitoring requirements for each component. For example, if Spark pools form part of the solution, monitor the malformed record store. 
+
+Here are some points to consider for the monitoring design.
+
+- Who can monitor each resource type (pipelines, pools, and others)?
+- How long do database activity logs need to be retained?
+- Will workspace and database log retention use Log Analytics or Azure Storage?
+- Will alerts be triggered in the event of a pipeline error? If so, who should be notified?
+- What threshold level of a SQL pool should trigger an alert? Who should be notified?
+
+## Source control design review
+
+By default, a Synapse workspace applies changes directly to the Synapse service by using the built-in publish functionality. You can enable source control integration, which provides many advantages. Advantages include better collaboration, versioning, approvals, and release pipelines to promote changes through to development, test, and production environments. Azure Synapse allows a single source control repository per workspace, which can be either Azure DevOps Git or GitHub.
+
+Here are some points to consider for the source control design.
+
+- If using Azure DevOps Git, is the Synapse workspace and its repository in the same tenant?
+- Who will be able to access source control?
+- What permissions will each user be granted in source control?
+- Has a branching and merging strategy been developed?
+- Will release pipelines be developed for deployment to different environments?
+- Will an approval process be used for merging and for release pipelines?
+
+> [!NOTE]
+> The design of the development environment is of critical importance to the success of your project. If a development environment has been designed, it will be evaluated in a [separate stage of this methodology](implementation-success-evaluate-solution-development-environment-design.md).
+
+## Next steps
+
+In the [next article](implementation-success-evaluate-data-integration-design.md) in the *Azure Synapse success by design* series, learn how to evaluate the data integration design and validate that it meets guidelines and requirements.
synapse-analytics Implementation Success Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-overview.md
+
+ Title: Azure Synapse implementation success by design
+description: "Learn about the Azure Synapse success series of articles that's designed to help you deliver a successful implementation of Azure Synapse Analytics."
+++++ Last updated : 05/31/2022++
+# Azure Synapse implementation success by design
+
+The *Azure Synapse implementation success by design* series of articles is designed to help you deliver a successful implementation of Azure Synapse Analytics. It describes a methodology to complement your solution implementation project. It includes suggested checks at strategic points during your project that can help assure a successful implementation. It's important to understand that the methodology shouldn't replace or change your chosen project management methodology (Scrum, Agile, or waterfall). Rather, it suggests validations that can improve the success of your project deployment to a production environment.
+
+[Azure Synapse](../overview-what-is.md) is an enterprise analytics service that accelerates time to insight across data warehouses and big data systems. It brings together the best of SQL technologies used in enterprise data warehousing, Spark technologies used for big data, pipelines for data integration and ETL/ELT, and deep integration with other Azure services, such as Power BI, Azure Cosmos DB, and Azure Machine Learning.
++
+The methodology uses a strategic checkpoint approach to assess and monitor the progress of your project. The goals of these checkpoints are:
+
+- Proactive identification of possible issues and blockers.
+- Continuous validation of the solution's fit to the use cases.
+- Successful deployment to production.
+- Smooth operation and monitoring once in production.
+
+The checkpoints are invoked at four milestones during the project:
+
+1. [Project planning](#project-planning-checkpoint)
+1. [Solution development](#solution-development-checkpoint)
+1. [Pre go-live](#pre-go-live-checkpoint)
+1. [Post go-live](#post-go-live-checkpoint)
+
+### Project planning checkpoint
+
+The project planning checkpoint includes the solution evaluation, project plan evaluation, the solution development environment design evaluation, and the team skill sets evaluation.
+
+#### Solution evaluation
+
+Evaluate your entire solution with a focus on how it intends to use Azure Synapse. An assessment involves gathering data to identify the required components of Azure Synapse and the interfaces each will have with other products, and reviewing the data sources, data consumers, roles, and use cases. This assessment also gathers data about the existing environment, including detailed specifications from existing data warehouses, big data environments, and integration and data consumption tooling. The assessment identifies which Azure Synapse components will be implemented and therefore which evaluations and checkpoints should be made throughout the implementation effort. It also provides additional information to validate the design and implementation against requirements, constraints, and assumptions.
+
+Here's a list of tasks you should complete.
+
+1. [Assess](implementation-success-assess-environment.md) your environment to help evaluate the solution design.
+1. Make informed technology decisions to implement Azure Synapse and identify the solution components to implement.
+1. [Evaluate the workspace design](implementation-success-evaluate-workspace-design.md).
+1. [Evaluate the data integration design](implementation-success-evaluate-data-integration-design.md).
+1. [Evaluate the dedicated SQL pool design](implementation-success-evaluate-dedicated-sql-pool-design.md).
+1. [Evaluate the serverless SQL pool design](implementation-success-evaluate-serverless-sql-pool-design.md).
+1. [Evaluate the Spark pool design](implementation-success-evaluate-spark-pool-design.md).
+1. Review the results of each evaluation and respond accordingly.
+
+#### Project plan evaluation
+
+Evaluate the project plan as it relates to the Azure Synapse requirements that need to be developed. This evaluation isn't about producing a project plan. Rather, the evaluation is about identifying any steps that could lead to blockers or that could impact the project timeline. Once evaluated, you may need to make adjustments to the project plan.
+
+Here's a list of tasks you should complete.
+
+1. [Evaluate the project plan](implementation-success-evaluate-project-plan.md).
+1. Evaluate project planning specific to the Azure Synapse components you plan to implement.
+1. Review the results of each evaluation and respond accordingly.
+
+#### Solution development environment design evaluation
+
+Evaluate the environment that's to be used to develop the solution. Establish separate development, test, and production environments. Also, it's important to understand that setting up automated deployment and source code control is essential to a successful and smooth development effort.
+
+Here's a list of tasks you should complete.
+
+1. [Evaluate the solution development environment design](implementation-success-evaluate-solution-development-environment-design.md).
+1. Review the results of each evaluation and respond accordingly.
+
+#### Team skill sets evaluation
+
+Evaluate the project team with a focus on their skill level and readiness to implement the Azure Synapse solution. The success of the project depends on having the correct skill sets and experience. Many different skill sets are required to implement an Azure Synapse solution, so ensure you identify gaps and secure suitable resources that have the required skill sets (or arrange for them to complete training). This evaluation is critical at this stage of your project because a lack of the proper skills can impact both the timeline and the overall success of the project.
+
+Here's a list of tasks you should complete.
+
+1. [Evaluate the team skill sets](implementation-success-evaluate-team-skill-sets.md).
+1. Secure skilled resources, or upskill resources to expand their capabilities.
+1. Review the results of each evaluation and respond accordingly.
+
+### Solution development checkpoint
+
+The solution development checkpoint includes periodic quality checks and additional skill building.
+
+#### Periodic quality checks
+
+During solution development, you should make periodic checks to validate that the solution is being developed according to recommended practices. Check that the project use cases will be satisfied and that enterprise requirements are being met. For the purposes of this methodology, these checks are called *periodic quality checks*.
+
+Implement the following quality checks:
+
+- Quality checks for workspaces.
+- Quality checks for data integration.
+- Quality checks for dedicated SQL pools.
+- Quality checks for serverless SQL pools.
+- Quality checks for Spark pools.
+
+#### Additional skill building
+
+As the project progresses, identify whether more skill sets are needed. Take the time to determine whether more skill sets could improve the quality of the solution. Supplementing the team with more skill sets can help to avoid project delays and project timeline impacts.
+
+### Pre go-live checkpoint
+
+Before deploying your solution to production, we recommend that you perform reviews to assess the preparedness of the solution.
+
+The *pre go-live* checklist provides a final readiness check to successfully deploy to production.
+
+1. [Perform the operational readiness review](implementation-success-perform-operational-readiness-review.md).
+1. [Perform the user readiness and onboarding plan review](implementation-success-perform-user-readiness-and-onboarding-plan-review.md).
+1. Review the results of each review and respond accordingly.
+
+### Post go-live checkpoint
+
+After deploying to production, we recommend that you validate that the solution operates as expected.
+
+The *post go-live* checklist provides a final readiness check to monitor your Azure Synapse solution.
+
+1. [Perform the monitoring review](implementation-success-perform-monitoring-review.md).
+1. Continually monitor your Azure Synapse solution.
+
+## Next steps
+
+In the [next article](implementation-success-assess-environment.md) in the *Azure Synapse implementation success by design* series, learn how to assess your environment to help evaluate the solution design and make informed technology decisions to implement Azure Synapse.
synapse-analytics Implementation Success Perform Monitoring Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-monitoring-review.md
+
+ Title: "Synapse implementation success methodology: Perform monitoring review"
+description: "Learn how to perform monitoring of your Azure Synapse solution."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Perform monitoring review
++
+Monitoring is a key part of the operationalization of any Azure solution. This article provides guidance on reviewing and configuring the monitoring of your Azure Synapse Analytics environment. Key to this activity is the identification of what needs to be monitored and who needs to review the monitoring results.
+
+Using your solution requirements and other data collected during the [assessment stage](implementation-success-assess-environment.md) and [solution development](implementation-success-evaluate-solution-development-environment-design.md), build a list of important behaviors and activities that need to be monitored in your production environment. As you build this list, identify the groups of users that will need access to monitoring information and build the procedures they can follow to respond to monitoring results.
+
+You can use [Azure Monitor](/azure/azure-monitor/overview) to provide base-level infrastructure metrics, alerts, and logs for most Azure services. Azure diagnostic logs are emitted by a resource to provide rich, frequent data about the operation of that resource. Azure Synapse can write diagnostic logs in Azure Monitor.
+
+For more information, see [Use Azure Monitor with your Azure Synapse Analytics workspace](../monitoring/how-to-monitor-using-azure-monitor.md).
+
+## Monitor dedicated SQL pools
+
+You can monitor a dedicated SQL pool by using Azure Monitor, alerts, dynamic management views (DMVs), and Log Analytics.
+
+- **Alerts:** You can set up alerts that send you an email or call a webhook when a certain metric reaches a predefined threshold. For example, you can receive an alert email when the database size grows too large. For more information, see [Create alerts for Azure SQL Database and Azure Synapse Analytics using the Azure portal](/azure/azure-sql/database/alerts-insights-configure-portal).
+- **DMVs:** You can use [DMVs](../sql-data-warehouse/sql-data-warehouse-manage-monitor.md) to monitor workloads and investigate query execution in SQL pools (see the example query after this list).
+- **Log Analytics:** [Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial) is a tool in the Azure portal that you can use to edit and run log queries from data collected by Azure Monitor. For more information, see [Monitor workload - Azure portal](../sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md).
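+
+The following hypothetical query is a minimal sketch of the DMV approach referenced in the list above. It assumes the queries of interest are tagged with a query label such as `MonitoringTest`; the label value and any thresholds are assumptions you would adjust for your workload.
+
+```sql
+/* List recent requests with their status and duration (label value is hypothetical) */
+SELECT
+    request_id,
+    [status],
+    total_elapsed_time AS elapsed_time_ms,
+    resource_class,
+    [label]
+FROM
+    sys.dm_pdw_exec_requests
+WHERE
+    [label] = 'MonitoringTest'
+ORDER BY
+    total_elapsed_time DESC;
+```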
+
+## Monitor serverless SQL pools
+
+You can monitor a serverless SQL pool by [monitoring your SQL requests](../monitoring/how-to-monitor-sql-requests.md) in Synapse Studio. That way, you can keep an eye on the status of running requests and review details of historical requests.
+
+## Monitor Spark pools
+
+You can [monitor your Apache Spark applications](../monitoring/apache-spark-applications.md) in Synapse Studio. That way, you can keep an eye on the latest status, issues, and progress.
+
+You can enable the Synapse Studio connector that's built in to Log Analytics. You can then collect and send Apache Spark application metrics and logs to your Log Analytics workspace. You can also use an Azure Monitor workbook to visualize the metrics and logs. For more information, see [Monitor Apache Spark applications with Azure Log Analytics](../spark/apache-spark-azure-log-analytics.md).
+
+## Monitor pipelines
+
+ Azure Synapse allows you to create complex pipelines that automate and integrate your data movement, data transformation, and compute activities. You can author and monitor pipelines by using Synapse Studio to keep an eye on the latest status, issues, and progress of your pipelines. For more information, see [Use Synapse Studio to monitor your workspace pipeline runs](../monitoring/how-to-monitor-pipeline-runs.md).
+
+## Next steps
+
+For more information about this article, check out the following resources:
+
+- [Synapse implementation success methodology](implementation-success-overview.md)
+- [Use Azure Monitor with your Azure Synapse Analytics workspace](../monitoring/how-to-monitor-using-azure-monitor.md)
synapse-analytics Implementation Success Perform Operational Readiness Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-operational-readiness-review.md
+
+ Title: "Synapse implementation success methodology: Perform operational readiness review"
+description: "Learn how to perform an operational readiness review to evaluate your solution for its preparedness to provide optimal services to users."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Perform operational readiness review
++
+Once you build an Azure Synapse Analytics solution and it's ready to deploy, it's important to ensure the operational readiness of that solution. Performing an operational readiness review evaluates the solution for its preparedness to provide optimal services to users. Organizations that invest time and resources in assessing operational readiness before launch have a much higher rate of success. It's also important to conduct an operational readiness review periodically post deployment - perhaps annually - to ensure there isn't any drift from operational expectations.
+
+## Process and focus areas
+
+Process and focus areas include service operational goals, solution readiness, security, monitoring, high availability (HA) and disaster recovery (DR).
+
+### Service operational goals
+
+ Document service expectations from the customer's point of view, and get buy-in from the business on these service expectations. Make any necessary modifications to meet business goals and objectives of the service.
+
+The service level agreement (SLA) of each Azure service varies based on the service. For example, Microsoft guarantees a specific monthly uptime percentage. For more information, see [SLA for Azure Synapse Analytics](https://azure.microsoft.com/support/legal/sla/synapse-analytics/). Ensure these SLAs align with your own business SLAs and document any gaps. It's also important to define any operational level agreements (OLAs) between different teams and ensure that they align with the SLAs.
+
+### Solution readiness
+
+It's important to review solution readiness by using the following points.
+
+- Describe the entire solution architecture, calling out the critical functionality of each component and how the components interact with each other.
+- Document the scalability aspects of your solution. Include specific details about the effort involved in scaling and its impact on the business. Consider whether the solution can respond to sudden surges of user activity. Bear in mind that Azure Synapse provides functionality for scaling with minimal downtime.
+- Document any single points of failure in your solution, along with how to recover should such failures occur. Include the impact of such failures on dependent services so that you can minimize their impact.
+- Document all dependent services on the solution and their impact.
+
+### Security
+
+Data security and privacy are non-negotiable. Azure Synapse implements a multi-layered security architecture for end-to-end protection of your data. Review security readiness by using the following points.
+
+- **Authentication:** Ensure Azure Active Directory (Azure AD) authentication is used whenever possible. If non-Azure AD authentication is used, ensure strong password mechanisms are in place and that passwords are rotated on a regular basis. For more information, see [Password Guidance](https://www.microsoft.com/research/publication/password-guidance/). Ensure monitoring is in place to detect suspicious actions related to user authentication. Consider using [Azure Identity Protection](/azure/active-directory/identity-protection/overview-identity-protection) to automate the detection and remediation of identity-based risks.
+- **Access control:** Ensure proper access controls are in place following the [principle of least privilege](/azure/active-directory/develop/secure-least-privileged-access). Use security features available with Azure services to strengthen the security of your solution. For example, Azure Synapse provides granular security features, including row-level security (RLS), column-level security, and dynamic data masking. For more information, see [Azure Synapse Analytics security white paper: Access control](security-white-paper-access-control.md). A simple provisioning sketch follows this list.
+- **Threat protection:** Ensure proper threat detection mechanisms are in place to prevent, detect, and respond to threats. Azure Synapse provides SQL Auditing, SQL Threat Detection, and Vulnerability Assessment to audit, protect, and monitor databases. For more information, see [Azure Synapse Analytics security white paper: Threat detection](security-white-paper-threat-protection.md).
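+
+As a simple illustration of the authentication and least-privilege points above, the following hedged sketch provisions an Azure AD identity as a database user in a dedicated SQL pool and grants read-only access. The user name is hypothetical.
+
+```sql
+-- Create a database user mapped to an Azure AD identity (name is hypothetical)
+CREATE USER [reporting.user@contoso.com] FROM EXTERNAL PROVIDER;
+
+-- Grant read-only access instead of broader permissions (least privilege)
+EXEC sp_addrolemember 'db_datareader', 'reporting.user@contoso.com';
+```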
+
+For more information, see the [Azure Synapse Analytics security white paper](security-white-paper-introduction.md).
+
+### Monitoring
+
+Set and document expectations for monitoring readiness with your business. These expectations should describe:
+
+- How to monitor the entire user experience, and whether it includes monitoring of a single-user experience.
+- The specific metrics of each service to monitor.
+- Who to notify about a poor user experience, and how.
+- Details of proactive health checks.
+- Any mechanisms that are in place that automate actions in response to incidents, for example, raising tickets automatically.
+
+Consider using [Azure Monitor](/azure/azure-monitor/overview) to collect, analyze, and act on telemetry data from your Azure and on-premises environments. Azure Monitor helps you maximize the performance and availability of your applications by proactively identifying problems in seconds.
+
+List all the important metrics to monitor for each service in your solution along with their acceptable thresholds. For example, the following list includes important metrics to monitor for a dedicated SQL pool:
+
+- `DWULimit`
+- `DWUUsed`
+- `AdaptiveCacheHitPercent`
+- `AdaptiveCacheUsedPercent`
+- `LocalTempDBUsedPercent`
+- `ActiveQueries`
+- `QueuedQueries`
+
+Consider using [Azure Service Health](https://azure.microsoft.com/features/service-health/) to notify you about Azure service incidents and planned maintenance. That way, you can take action to mitigate downtime. You can set up customizable cloud alerts and use a personalized dashboard to analyze health issues, monitor the impact to your cloud resources, get guidance and support, and share details and updates.
+
+Lastly, ensure proper notifications are set up to notify appropriate people when incidents occur. Incidents could be proactive, such as when a certain metric exceeds a threshold, or reactive, such as a failure of a component or service. For more information, see [Overview of alerts in Microsoft Azure](/azure/azure-monitor/alerts/alerts-overview).
+
+### High availability
+
+Define and document *recovery time objective (RTO)* and *recovery point objective (RPO)* for your solution. RTO is how soon the service will be available to users, and RPO is how much data loss would occur in the event of a failover.
+
+Each of the Azure services publishes a set of guidelines and metrics on the expected high availability (HA) of the service. Ensure these HA metrics align with your business expectations. When they don't align, customizations may be necessary to meet your HA requirements. For example, Azure Synapse dedicated SQL pool supports an eight-hour RPO with automatic restore points. If that RPO isn't sufficient, you can set up user-defined restore points with an appropriate frequency to meet your RPO needs. For more information, see [Backup and restore in Azure Synapse dedicated SQL pool](../sql-data-warehouse/backup-and-restore.md).
+
+### Disaster recovery
+
+Define and document a detailed process for disaster recovery (DR) scenarios. DR scenarios can include a failover process, communication mechanisms, escalation process, war room setup, and others. Also document the process for identifying the causes of outages and the steps to take to recover from disasters.
+
+Use the built-in DR mechanisms available with Azure services for building your DR process. For example, Azure Synapse performs a standard geo-backup of dedicated SQL pools once every day to a paired data center. You can use a geo-backup to recover from a disaster at the primary location. You can also set up Azure Data Lake Storage (ADLS) to copy data to another Azure region that's hundreds of miles away. If there's a disaster at the primary location, you can initiate a failover to promote the secondary storage location to the primary storage location. For more information, see [Disaster recovery and storage account failover](/azure/storage/common/storage-disaster-recovery-guidance).
+
+## Next steps
+
+In the [next article](implementation-success-perform-user-readiness-and-onboarding-plan-review.md) in the *Azure Synapse success by design* series, learn how to perform a user readiness and onboarding plan review to ensure successful adoption of your data warehouse.
synapse-analytics Implementation Success Perform User Readiness And Onboarding Plan Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-user-readiness-and-onboarding-plan-review.md
+
+ Title: "Synapse implementation success methodology: Perform user readiness and onboarding plan review"
+description: "Learn how to perform user readiness and onboarding of new users to ensure successful adoption of your data warehouse."
+++++ Last updated : 05/31/2022++
+# Synapse implementation success methodology: Perform user readiness and onboarding plan review
++
+Training technical people, like administrators and developers, is important to deliver success. Don't overlook that you must also extend training to include end users. Review the use cases and roles identified during the [assessment stage](implementation-success-assess-environment.md), [project planning](implementation-success-evaluate-project-plan.md), and [solution development](implementation-success-evaluate-solution-development-environment-design.md) to ensure that *everyone* is readied for success.
+
+Evaluate your project plan and prepare an onboarding plan for each group of users, including:
+
+- Big data analytics users.
+- Structured data analytics users.
+- Users of each of your identified data consumption tools.
+- Operations support.
+- Help desk and user support.
+
+## Onboarding and readiness
+
+It's unrealistic to expect users to figure out how to use Azure Synapse on their own, even when they have experience with similar technologies. So, plan to reach out to your users to ensure a smooth transition for them to the new environment. Specifically, ensure that:
+
+- Users understand what Azure Synapse does and how it does it.
+- Users understand how to use the Azure Synapse service or the platform that uses it.
+- Onboarding of users is a consistent and continuous process.
+- Users see and understand the value of the new environment.
+
+The onboarding of users starts with explanatory sessions or technical workshops. It also involves giving them access to the platform. The onboarding process can span several months depending on the complexity of the solution, and it should set the right tone for future interactions with the Azure Synapse platform and services.
+
+Track progress and make sure users are capable of completing a set of core tasks that will form part of their daily operations. These tasks will be specific to different user groups, roles, and use cases. Be sure to identify:
+
+- The core actions users need to be able to perform.
+- The steps users must take to perform each action.
+
+Focus your efforts on the tasks that users struggle the most with. Be sure to provide straightforward instructions and processes, yet be mindful of how long it takes for users to complete specific tasks. It's always a good idea to request qualitative feedback to better track user readiness and improve the onboarding experience.
+
+## Next steps
+
+In the [next article](implementation-success-perform-monitoring-review.md) in the *Azure Synapse success by design* series, learn how to monitor your Azure Synapse solution.
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
+
+ Title: "Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics"
+description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for dedicated SQL pool."
+++++ Last updated : 05/23/2022++
+# Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics
+
+This article presents a high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for dedicated SQL pool.
++
+> [!TIP]
+> If you're new to dedicated SQL pools, we recommend you work through the [Work with Data Warehouses using Azure Synapse Analytics](/learn/paths/work-with-data-warehouses-using-azure-synapse-analytics/) learning path.
+
+## Prepare for the POC
+
+Before deciding on your Azure Synapse POC goals, we recommend that you first read the [Azure Synapse SQL architecture](../sql/overview-architecture.md) article to familiarize yourself with how a dedicated SQL pool separates compute and storage to provide industry-leading performance.
+
+### Identify sponsors and potential blockers
+
+Once you're familiar with Azure Synapse, it's time to make sure that your POC has the necessary support and won't hit any roadblocks. You should:
+
+- Identify any restrictions or policies that your organization has about moving data to, and storing data in, the cloud.
+- Identify executive and business sponsorship for a cloud-based data warehouse project.
+- Verify that your workload is appropriate for Azure Synapse. For more information, see [Dedicated SQL pool architecture in Azure Synapse Analytics](../sql-data-warehouse/massively-parallel-processing-mpp-architecture.md).
+
+### Set the timeline
+
+A POC is a scoped, time-bounded exercise with specific, measurable goals and metrics that define success. Ideally, it should have some basis in business reality so that the results are meaningful.
+
+POCs have the best outcome when they're *timeboxed*. Timeboxing allocates a fixed and maximum unit of time to an activity. In our experience, two weeks provides enough time to complete the work without the burden of too many use cases or complex test matrices. Working within this fixed time period, we suggest that you follow this timeline:
+
+1. **Data loading:** Three days or less
+1. **Querying:** Five days or less
+1. **Value added tests:** Two days or less
+
+Here are some tips:
+
+> [!div class="checklist"]
+> - Make realistic estimates of the time that you will require to complete the tasks in your plan.
+> - Recognize that the time to complete your POC will be related to the size of your dataset, the number of database objects (tables, views, and stored procedures), the complexity of the database objects, and the number of interfaces you will test.
+> - If you estimate that your POC will run longer than four weeks, consider reducing the scope to focus only on the most important goals.
+> - Get support from all the lead resources and sponsors for the timeline before commencing the POC.
+
+Once you've determined that there aren't any immediate obstacles and you've set the timeline, you can scope a high-level architecture.
+
+### Create a high-level scoped architecture
+
+A high-level future architecture likely contains many data sources and data consumers, big data components, and possibly machine learning and AI data consumers. To keep your POC goals achievable (and within the bounds of your set timeline), decide which of these components will form part of the POC and which will be excluded.
+
+Additionally, if you're already using Azure, identify the following:
+
+- Any existing Azure resources that you can use during the POC. For example, resources can include Azure Active Directory (Azure AD) or Azure ExpressRoute.
+- What Azure region(s) your organization prefers.
+- A subscription you can use for non-production POC work.
+- The throughput of your network connection to Azure.
+ > [!IMPORTANT]
+ > Be sure to check that your POC can consume some of that throughput without having an adverse effect on production solutions.
+
+### Apply migration options
+
+If you're migrating from a legacy data warehouse system to Azure Synapse, here are some questions to consider:
+
+- Are you migrating and want to make as few changes to existing Extract, Transform, and Load (ETL) processes and data warehouse consumption as possible?
+- Are you migrating but want to do some extensive improvements along the way?
+- Are you building an entirely new data analytics environment (sometimes called a *greenfield project*)?
+
+Next, you need to consider your pain points.
+
+### Identify current pain points
+
+Your POC should contain use cases to prove potential solutions to address your current pain points. Here are some questions to consider:
+
+- What gaps in your current implementation do you expect Azure Synapse to fill?
+- What new business needs are you required to support?
+- What service level agreements (SLAs) are you required to meet?
+- What will be the workloads (for example, ETL, batch queries, analytics, reporting queries, or interactive queries)?
+
+Next, you need to set your POC success criteria.
+
+### Set POC success criteria
+
+Identify why you're doing a POC and be sure to define clear goals. It's also important to know what outputs you want from your POC and what you plan to do with them.
+
+Keep in mind that a POC should be a short and focused effort to quickly prove or test a limited set of concepts. If you have a long list of items to prove, you may want to divide them into multiple POCs. POCs can have gates between them so you can determine whether to proceed to the next POC.
+
+Here are some example POC goals:
+
+- We need to know that the query performance for our big complex reporting queries will meet our new SLAs.
+- We need to know the query performance for our interactive users.
+- We need to know whether our existing ETL processes are a good fit and where improvements need to be made.
+- We need to know whether we can shorten our ETL runtimes and by how much.
+- We need to know that Synapse Analytics has sufficient security capabilities to adequately secure our data.
+
+Next, you need to create a test plan.
+
+### Create a test plan
+
+Using your goals, identify specific tests to run in order to support those goals and provide your identified outputs. It's important to make sure that you have at least one test for each goal and the expected output. Identify the specific queries, reports, ETL processes, and other operations that you will run to provide quantifiable results.
+
+Refine your tests by adding multiple testing scenarios to clarify any table structure questions that arise.
+
+Good planning usually defines an effective POC execution. Make sure all stakeholders agree to a written test plan that ties each POC goal to a set of clearly stated test cases and measurements of success.
+
+Most test plans revolve around performance and the expected user experience. What follows is an example of a test plan. It's important to customize your test plan to meet your business requirements. Clearly defining what you are testing will pay dividends later in this process.
+
+|Goal|Test|Expected outcomes|
+||||
+|We need to know that the query performance for our big complex reporting queries will meet our new SLAs|- Sequential test of complex queries<br/>- Concurrency test of complex queries against stated SLAs|- Queries A, B, and C completed in 10, 13, and 21 seconds, respectively<br/>- With 10 concurrent users, queries A, B, and C completed in 11, 15, and 23 seconds, on average|
+|We need to know the query performance for our interactive users|- Concurrency test of selected queries at an expected concurrency level of 50 users.<br/>- Run the preceding query with result set caching|- At 50 concurrent users, average execution time is expected to be under 10 seconds without result set caching<br/>- At 50 concurrent users, average execution time is expected to be under five seconds with result set caching|
+|We need to know whether our existing ETL processes can run within the SLA|- Run one or two ETL processes to mimic production loads|- Loading incrementally into a core fact table must complete in less than 20 minutes (including staging and data cleansing)<br/>- Dimension processing needs to take less than five minutes|
+|We need to know that the data warehouse has sufficient security capabilities to secure our data|- Review and enable [network security](security-white-paper-network-security.md) (VNet and private endpoints), [access control](security-white-paper-access-control.md) (row-level security, dynamic data masking)|- Prove that data never leaves our tenant.<br/>- Ensure that customer content is easily secured|
+
+Next, you need to identify and validate the POC dataset.
+
+### Identify and validate the POC dataset
+
+Using the scoped tests, you can now identify the dataset required to execute those tests in Azure Synapse. Review your dataset by considering the following:
+
+- Verify that the dataset adequately represents your production dataset in terms of content, complexity, and scale.
+- Don't use a dataset that's too small (less than 1 TB), as you might not achieve representative performance.
+- Don't use a dataset that's too large, as the POC isn't intended to complete a full data migration.
+- Identify the [distribution pattern](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md), [indexing option](../sql-data-warehouse/sql-data-warehouse-tables-index.md), and [partitioning](../sql-data-warehouse/sql-data-warehouse-tables-partition.md) for each table. If there are any questions regarding distribution, indexing, or partitioning, add tests to your POC to answer them. Bear in mind that you may want to test more than one distribution option or indexing option for some tables.
+- Check with the business owners for any blockers for moving the POC dataset to the cloud.
+- Identify any security or privacy concerns.
+
+> [!IMPORTANT]
+> Make sure you check with business owners for any blockers before moving any data to the cloud. Identify any security or privacy concerns or any data obfuscation needs that should be done before moving data to the cloud.
+
+Next, you need to assemble the team of experts.
+
+### Assemble the team
+
+Identify the team members and their commitment to support your POC. Team members should include:
+
+- A project manager to run the POC project.
+- A business representative to oversee requirements and results.
+- An application data expert to source the data for the POC dataset.
+- An Azure Synapse specialist.
+- An expert advisor to optimize the POC tests.
+- Anyone who will be required for specific POC project tasks but who isn't required for its entire duration. These supporting resources could include network administrators, Azure administrators, or Azure AD administrators.
+
+> [!TIP]
+> We recommend engaging an expert advisor to assist with your POC. [Microsoft's partner community](https://appsource.microsoft.com/marketplace/partner-dir) has global availability of expert consultants who can help you assess, evaluate, or implement Azure Synapse.
+
+Now that you are fully prepared, it's time to put your POC into practice.
+
+## Put the POC into practice
+
+It's important to keep the following in mind:
+
+- Implement your POC project with the discipline and rigor of any production project.
+- Run the POC according to plan.
+- Have a change request process in place to prevent your POC scope from growing or changing.
+
+Before tests can start, you need to set up the test environment. It involves four stages:
+
+1. Setup
+1. Data loading
+1. Querying
+1. Value added tests
++
+### Setup
+
+You can set up a POC on Azure Synapse by following these steps:
+
+1. Use [this quickstart](../sql-data-warehouse/create-data-warehouse-portal.md) to provision a Synapse workspace and set up storage and permissions according to the POC test plan.
+1. Use [this quickstart](../quickstart-create-sql-pool-portal.md) to add a dedicated SQL pool to the Synapse workspace.
+1. Set up [networking and security](security-white-paper-introduction.md) according to your requirements.
+1. Grant appropriate access to POC team members. See [this article](/azure/azure-sql/database/logins-create-manage) about authentication and authorization for accessing dedicated SQL pools.
+
+> [!TIP]
+> We recommend that you *develop code and unit testing* by using the DW500c service level (or below). We recommend that you *run load and performance tests* by using the DW1000c service level (or above). You can [pause compute of the dedicated SQL pool](../sql-data-warehouse/pause-and-resume-compute-portal.md) at any time to cease compute billing, which will save on costs.
+
+### Data loading
+
+Once you've set up the dedicated SQL pool, you can follow these steps to load data:
+
+1. Load the data into [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md). For a POC, we recommend that you use a [general-purpose V2 storage account](../../storage/common/storage-account-overview.md) with [locally-redundant storage (LRS)](../../storage/common/storage-redundancy.md#locally-redundant-storage). While there are several tools for migrating data to Azure Blob Storage, the easiest way is to use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/), which can copy files into a storage container.
+2. Load the data into the dedicated SQL pool. Azure Synapse supports two T-SQL loading methods: [PolyBase](../sql-data-warehouse/design-elt-data-loading.md) and the [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) statement. You can use SSMS to connect to the dedicated SQL pool to use either method.
+
+When you load data into the dedicated SQL pool for the first time, you need to consider which [distribution pattern](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md) and [index option](../sql-data-warehouse/sql-data-warehouse-tables-index.md) to use. While a dedicated SQL pool supports a variety of both, it's a best practice to rely on the default settings: round-robin distribution and a clustered columnstore index. If necessary, you can adjust these settings afterward, as described later in this article.
+
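+As a hedged illustration of those defaults, the following sketch creates a target table like the `test_1` table used in the COPY example that follows, stating the round-robin distribution and clustered columnstore index explicitly. The column definitions are assumptions for this example only.
+
+```sql
+CREATE TABLE dbo.test_1
+(
+    Col_1 varchar(50),
+    Col_2 int
+)
+WITH (
+    DISTRIBUTION = ROUND_ROBIN,     -- default distribution
+    CLUSTERED COLUMNSTORE INDEX     -- default index
+);
+```
+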
+The following example shows the COPY load method:
+
+```sql
+--Note when specifying the column list, input field numbers start from 1
+COPY INTO
+ test_1 (Col_1 default 'myStringDefault' 1, Col_2 default 1 3)
+FROM
+ 'https://myaccount.blob.core.windows.net/myblobcontainer/folder1/'
+WITH (
+ FILE_TYPE = 'CSV',
+ CREDENTIAL = (IDENTITY = 'Storage Account Key' SECRET = '<Your_Account_Key>'),
+ FIELDQUOTE = '"',
+ FIELDTERMINATOR = ',',
+ ROWTERMINATOR = '0x0A',
+ ENCODING = 'UTF8',
+ FIRSTROW = 2
+);
+```
+
+### Querying
+
+The primary purpose of a data warehouse is to perform analytics, which requires querying the data warehouse. Most POCs start by running a small number of representative queries against the data warehouse, at first sequentially and then concurrently. You should define both approaches in your test plan.
+
+#### Sequential query tests
+
+It's easy to run sequential query tests in SSMS. It's important to run these tests by using a user with a sufficiently large [resource class](../sql-data-warehouse/resource-classes-for-workload-management.md). A resource class is a pre-determined resource limit in dedicated SQL pool that governs compute resources and concurrency for query execution. For simple queries, we recommend using the pre-defined **staticrc20** resource class. For more complex queries, we recommend using the pre-defined **staticrc40** resource class.
+
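+As a hedged sketch of assigning a static resource class for these tests, you can add the test login's database user to one of the pre-defined resource class roles. The user name below is hypothetical.
+
+```sql
+-- Assign a hypothetical POC test user to the staticrc40 resource class for complex queries
+EXEC sp_addrolemember 'staticrc40', 'poc_test_user';
+```
+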
+Notice that the first of the following queries uses a [query label](../sql/develop-label.md) to provide a mechanism for tracking the query. The second query uses the `sys.dm_pdw_exec_requests` dynamic management view to search by the label.
+
+```sql
+/* Use the OPTION(LABEL = '') Syntax to add a query label to track the query in DMVs */
+SELECT TOP (1000)
+ *
+FROM
+ [dbo].[Date]
+OPTION (LABEL = 'Test1');
+
+/* Use sys.dm_pdw_exec_requests to determine query execution duration (ms) */
+SELECT
+ Total_elapsed_time AS [Elapsed_Time_ms],
+ [label]
+FROM
+ sys.dm_pdw_exec_requests
+WHERE
+ [label] = 'Test1';
+```
+
+#### Concurrent query tests
+
+After recording sequential query performance, you can then run multiple queries concurrently. That way, you can simulate a business intelligence workload running against the dedicated SQL pool. The easiest way to run this test is to download a stress testing tool. The most popular tool is [Apache JMeter](https://jmeter.apache.org/download_jmeter.cgi), which is a third-party open source tool.
+
+The tool reports on minimum, maximum, and median query durations for a given concurrency level. For example, suppose that you want to simulate a business intelligence workload that generates 100 concurrent queries. You can set up JMeter to run those 100 concurrent queries in a loop and then review the steady state execution. It can be done with [result set caching](../sql-data-warehouse/performance-tuning-result-set-caching.md) on or off to evaluate the suitability of that feature.
+
+Be sure to document your results. Here's an example of some results:
+
+|Concurrency|# Queries run|DWU|Min duration (s)|Max duration (s)|Median duration (s)|
+|||||||
+|100|1,000|5,000|3|10|5|
+|50|5,000|5,000|3|6|4|
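+
+If you tag each concurrent test query with a [query label](../sql/develop-label.md), you can also derive these statistics from the DMVs. The following is a hedged sketch; the label prefix is hypothetical, and durations are converted from milliseconds to seconds.
+
+```sql
+/* Summarize min, max, and median duration per test label (label pattern is hypothetical) */
+SELECT DISTINCT
+    [label],
+    MIN(total_elapsed_time) OVER (PARTITION BY [label]) / 1000.0 AS min_duration_s,
+    MAX(total_elapsed_time) OVER (PARTITION BY [label]) / 1000.0 AS max_duration_s,
+    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY total_elapsed_time)
+        OVER (PARTITION BY [label]) / 1000.0 AS median_duration_s
+FROM
+    sys.dm_pdw_exec_requests
+WHERE
+    [label] LIKE 'ConcurrencyTest%';
+```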
+
+#### Mixed workload tests
+
+Mixed workload testing is an extension of the [concurrent query tests](#concurrent-query-tests). By adding a data loading process into the workload mix, the workload will better simulate a real production workload.
+
+#### Optimize the data
+
+Depending on the query workload running on Azure Synapse, you may need to optimize your data warehouse's distributions and indexes and rerun the tests. For more information, see [Best practices for dedicated SQL pools in Azure Synapse Analytics](../sql-data-warehouse/sql-data-warehouse-best-practices.md).
+
+The most common mistakes seen during setup are:
+
+- Large queries run with a resource class that's too low.
+- The dedicated SQL pool service level DWUs are too low for the workload.
+- Large tables require hash distribution.
+
+To improve query performance, you can:
+
+- Create [materialized views](../sql-data-warehouse/performance-tuning-materialized-views.md), which can accelerate queries involving common aggregations.
+- [Replicate tables](../sql-data-warehouse/design-guidance-for-replicated-tables.md), especially for small dimension tables.
+- [Hash distribute](../sql-data-warehouse/sql-data-warehouse-tables-distribute.md) large fact tables that are joined or aggregated (see the sketch after this list).
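+
+The following hedged sketch shows the hash-distribution change from the list above by using CREATE TABLE AS SELECT (CTAS) and then renaming the tables. The table and column names are hypothetical; choose a distribution column that's frequently joined or aggregated and has high cardinality.
+
+```sql
+-- Rebuild a hypothetical fact table with hash distribution
+CREATE TABLE dbo.FactSales_Hash
+WITH (
+    DISTRIBUTION = HASH(CustomerKey),
+    CLUSTERED COLUMNSTORE INDEX
+)
+AS
+SELECT * FROM dbo.FactSales;
+
+-- Swap the new table in place of the original (object names are hypothetical)
+RENAME OBJECT dbo.FactSales TO FactSales_RoundRobin;
+RENAME OBJECT dbo.FactSales_Hash TO FactSales;
+```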
+
+### Value added tests
+
+Once query performance testing is complete, it's a good time to test specific features to verify that they satisfy your intended use cases. These features include:
+
+- [Row-level security](/sql/relational-databases/security/row-level-security?view=azure-sqldw-latest&preserve-view=true)
+- [Column-level security](../sql-data-warehouse/column-level-security.md)
+- [Dynamic data masking](/azure/azure-sql/database/dynamic-data-masking-overview) (see the sketch after this list)
+- Intra-cluster scaling via [workload isolation](../sql-data-warehouse/sql-data-warehouse-workload-isolation.md)
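+
+As a hedged example of one of these checks, the following sketch applies dynamic data masking to a sensitive column and grants unmasked access to a single role. The table, column, and role names are hypothetical.
+
+```sql
+-- Mask email addresses for non-privileged users (object names are hypothetical)
+ALTER TABLE dbo.DimCustomer
+    ALTER COLUMN EmailAddress ADD MASKED WITH (FUNCTION = 'email()');
+
+-- Allow a specific role to see unmasked values
+GRANT UNMASK TO poc_admin_role;
+```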
+
+Finally, you need to interpret your POC results.
+
+## Interpret the POC results
+
+Once you have test results for your data warehouse, it's important to interpret that data. A common approach you can take is to compare the runs in terms of *price/performance*. Simply put, price/performance removes the differences in price per DWU or service hardware and provides a single comparable number for each performance test.
+
+Here's an example:
+
+|Test|Test duration|DWU|$/hr for DWU|Cost of test|
+||||||
+|Test 1|10 min|1000|$12/hr|$2|
+|Test 2|30 min|500|$6/hr|$3|
+
+This example makes it easy to see that **Test 1** at DWU1000 is more cost effective at $2 per test run compared with $3 per test run.
+
+> [!NOTE]
+> You can also use this methodology to compare results *across vendors* in a POC.
+
+In summary, once you complete all the POC tests, you're ready to evaluate the results. Begin by evaluating whether the POC goals have been met and the desired outputs collected. Make note of where additional testing is warranted and additional questions that were raised.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data lake exploration with serverless SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-serverless-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Big data analytics with Apache Spark pool in Azure Synapse Analytics](proof-of-concept-playbook-spark-pool.md)
+
+> [!div class="nextstepaction"]
+> [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Proof Of Concept Playbook Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-overview.md
+
+ Title: Azure Synapse proof of concept playbook
+description: "Introduction to a series of articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics proof of concept project."
+++++ Last updated : 05/23/2022++
+# Azure Synapse proof of concept playbook
+
+Whether it's an enterprise data warehouse migration, a big data re-platforming, or a greenfield implementation, each project traditionally starts with a proof of concept (POC).
+
+The *Synapse proof of concept playbook* is a series of related articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics POC project. The overall objective of a POC is to validate potential solutions to technical problems, such as how to integrate systems or how to achieve certain results through a specific configuration. As emphasized by this series, an effective POC validates that certain concepts have the potential for real-world production application.
+
+> [!TIP]
+> If you're new to Azure Synapse, we recommend you work through the [Introduction to Azure Synapse Analytics](/learn/modules/introduction-azure-synapse-analytics/) module.
+
+## Playbook audiences
+
+The playbook helps you to evaluate the use of Azure Synapse when migrating from an existing workload. We designed it for the following audiences:
+
+- Technical experts who are planning their own in-house Azure Synapse POC project.
+- Business owners who will be part of the execution or evaluation of an Azure Synapse POC project.
+- Anyone looking to learn more about data warehousing POC projects.
+
+## Playbook content
+
+The playbook delivers the following content:
+
+- Guidance on what makes an effective POC.
+- Guidance on how to make valid comparisons between systems.
+- Guidance on the technical aspects of running an Azure Synapse POC.
+- A road map to relevant technical content from Azure Synapse.
+- Guidance on how to evaluate POC results to support business decisions.
+- Guidance on how and where to find additional help.
+
+The playbook includes three subjects:
+
+- [Data warehousing with dedicated SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-dedicated-sql-pool.md)
+- [Data lake exploration with serverless SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-serverless-sql-pool.md)
+- [Big data analytics with Apache Spark pool in Azure Synapse Analytics](proof-of-concept-playbook-spark-pool.md)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data warehousing with dedicated SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-dedicated-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Data lake exploration with serverless SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-serverless-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Big data analytics with Apache Spark pool in Azure Synapse Analytics](proof-of-concept-playbook-spark-pool.md)
+
+> [!div class="nextstepaction"]
+> [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Proof Of Concept Playbook Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-serverless-sql-pool.md
+
+ Title: "Synapse POC playbook: Data lake exploration with serverless SQL pool in Azure Synapse Analytics"
+description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for serverless SQL pool."
+++++ Last updated : 05/23/2022++
+# Synapse POC playbook: Data lake exploration with serverless SQL pool in Azure Synapse Analytics
+
+This article presents a high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for serverless SQL pool.
++
+## Prepare for the POC
+
+A POC project can help you make an informed business decision about implementing a big data and advanced analytics environment on a cloud-based platform that leverages serverless SQL pool in Azure Synapse. If you need to explore or gain insights from data in the data lake, or optimize your existing data transformation pipeline, you can benefit from using the serverless SQL pool. It's suitable for the following scenarios:
+
+- **Basic discovery and exploration:** Quickly reason about data stored in various formats (Parquet, CSV, JSON) in your data lake, so you can plan how to unlock insights from it.
+- **Logical data warehouse:** Produce a relational abstraction on top of raw or disparate data without relocating or transforming it, providing an always up-to-date view of your data.
+- **Data transformation:** Run simple, scalable, and highly performant data lake queries by using T-SQL. You can feed query results to business intelligence (BI) tools, or load them into a relational database. Target systems can include Azure Synapse dedicated SQL pools or Azure SQL Database.
+
+Different professional roles can benefit from serverless SQL pool:
+
+- **Data engineers** can explore the data lake, transform and prepare data by using serverless SQL pool, and simplify their data transformation pipelines.
+- **Data scientists** can quickly reason about the contents and structure of the data stored in the data lake by using the [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql?view=sql-server-ver15&viewFallbackFrom=azure-sqldw-latest&preserve-view=true) T-SQL function and its automatic schema inference (see the sketch after this list).
+- **Data analysts** can write T-SQL queries in their preferred query tools, which can connect to serverless SQL pool. They can explore data in Spark external tables that were created by data scientists or data engineers.
+- **BI professionals** can quickly create Power BI reports that connect to data lake or Spark tables.
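+
+The following hedged sketch shows the kind of ad hoc exploration described in the list above: querying raw Parquet files in the data lake with `OPENROWSET` and automatic schema inference. The storage account, container, and path are hypothetical.
+
+```sql
+SELECT TOP 100 *
+FROM OPENROWSET(
+    BULK 'https://pocdatalake.dfs.core.windows.net/raw/sales/year=2022/*.parquet',
+    FORMAT = 'PARQUET'
+) AS [result];
+```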
+
+A serverless SQL pool POC project will identify your key goals and the business drivers that serverless SQL pool is designed to support. It will also test key features and gather metrics to support your implementation decisions. A POC isn't designed to be deployed to a production environment. Rather, it's a short-term project that focuses on key questions, and its result can be discarded.
+
+Before you begin planning your serverless SQL pool POC project:
+
+> [!div class="checklist"]
+> - Identify any restrictions or guidelines your organization has about moving data to the cloud.
+> - Identify executive or business sponsors for a big data and advanced analytics platform project. Secure their support for migration to the cloud.
+> - Identify availability of technical experts and business users to support you during the POC execution.
+
+Before you start preparing for the POC project, we recommend you first read the [serverless SQL pool documentation](../sql/on-demand-workspace-overview.md).
+
+> [!TIP]
+> If you're new to serverless SQL pools, we recommend you work through the [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/) learning path.
+
+### Set the goals
+
+A successful POC project requires planning. Start by identifying why you're doing a POC to fully understand the real motivations. Motivations could include modernization, cost saving, performance improvement, or integrated experience. Be sure to document clear goals for your POC and the criteria that will define its success. Ask yourself:
+
+> [!div class="checklist"]
+> - What do you want as the outputs of your POC?
+> - What will you do with those outputs?
+> - Who will use the outputs?
+> - What will define a successful POC?
+
+Keep in mind that a POC should be a short and focused effort to quickly prove a limited set of concepts and capabilities. These concepts and capabilities should be representative of the overall workload. If you have a long list of items to prove, you may want to plan more than one POC. In that case, define gates between the POCs to determine whether you need to continue with the next one. Given the different professional roles that can use a serverless SQL pool (and the different scenarios that serverless SQL pool supports), you may choose to execute multiple POCs. For example, one POC could focus on requirements for the data scientist role, such as discovery and exploration of data in different formats. Another could focus on requirements for the data engineering role, such as data transformation and the creation of a logical data warehouse.
+
+As you consider your POC goals, ask yourself the following questions to help you shape the goals:
+
+> [!div class="checklist"]
+> - Are you migrating from an existing big data and advanced analytics platform (on-premises or cloud)?
+> - Are you migrating but want to make as few changes as possible to existing ingestion and data processing?
+> - Are you migrating but want to do some extensive improvements along the way?
+> - Are you building an entirely new big data and advanced analytics platform (greenfield project)?
+> - What are your current pain points? For example, scalability, performance, or flexibility.
+> - What new business requirements do you need to support?
+> - What are the SLAs that you're required to meet?
+> - What will be the workloads? For example, data exploration over different data formats, basic exploration, a logical data warehouse, data preparation and/or transformation, T-SQL interactive analysis, T-SQL querying of Spark tables, or reporting queries over the data lake.
+> - What are the skills of the users who will own the project (should the POC be implemented)?
+
+Here are some examples of POC goal setting:
+
+- Why are we doing a POC?
+ - We need to know if we can explore all of the raw file formats we store by using serverless SQL pool.
+ - We need to know if our data engineers can quickly evaluate new data feeds.
+ - We need to know if data lake query performance by using serverless SQL pool will meet our data exploration requirements.
+ - We need to know if serverless SQL pool is a good choice for some of our visualizations and reporting requirements.
+ - We need to know if serverless SQL pool is a good choice for some of our data ingestion and processing requirements.
+ - We need to know if our move to Azure Synapse will meet our budget.
+- At the conclusion of this POC:
+ - We will have the data to identify the data transformations that are well suited to serverless SQL pool.
+ - We will have the data to identify when serverless SQL pool can be best used during data visualization.
+ - We will have the data to know the ease with which our data engineers and data scientists can adopt the new platform.
+ - We will have gained insight to better estimate the effort required to complete the implementation or migration project.
+ - We will have a list of items that may need more testing.
+  - Our POC will be successful if we have the data needed and have completed the testing identified to determine how serverless SQL pool will support our cloud-based big data and advanced analytics platform.
+ - We will have determined whether we can move to the next phase or whether more POC testing is needed to finalize our decision.
+ - We will be able to make a sound business decision supported by specific data points.
+
+### Plan the project
+
+Use your goals to identify specific tests and to provide the outputs you identified. It's important to make sure that you have at least one test to support each goal and expected output. Also, identify specific data exploration and analysis tasks, specific transformations, and specific existing processing you want to test. Identify a specific dataset and codebase that you can use.
+
+Here's an example of the needed level of specificity in planning:
+
+- **Goal:** We need to know whether data engineers can achieve the equivalent processing of the existing ETL process named "Daily Batch Raw File Validation" within the required SLA.
+- **Output:** We will have the data to determine whether we can use T-SQL queries to execute the "Daily Batch Raw File Validation" ETL process within the required SLA.
+- **Test:** Validation queries A, B, and C are identified by data engineering, and they represent overall data processing needs. Compare the performance of these queries with the benchmark obtained from the existing system.
+
+### Evaluate the POC dataset
+
+Using the specific tests you identified, select a dataset to support the tests. Take time to review this dataset. You should verify that the dataset will adequately represent your future processing in terms of content, complexity, and scale. Don't use a dataset that's too small because it won't deliver representative performance. Conversely, don't use a dataset that's too large because the POC shouldn't become a full data migration. Be sure to obtain the appropriate benchmarks from existing systems so you can use them for performance comparisons.
+
+> [!IMPORTANT]
+> Make sure you check with business owners for any blockers before moving any data to the cloud. Identify any security or privacy concerns or any data obfuscation needs that should be done before moving data to the cloud.
+
+### Create a high-level architecture
+
+Based upon the high-level architecture of your proposed future state architecture, identify the components that will form part of your POC. Your high-level future state architecture likely contains many data sources, numerous data consumers, big data components, and possibly machine learning and artificial intelligence (AI) data consumers. Your POC architecture should specifically identify components that will be part of the POC. Importantly, it should identify any components that won't form part of the POC testing.
+
+If you're already using Azure, identify any resources you already have in place (Azure Active Directory, ExpressRoute, and others) that you can use during the POC. Also identify the Azure regions your organization uses. Now is a great time to identify the throughput of your ExpressRoute connection and to check with other business users that your POC can consume some of that throughput without adverse impact on production systems.
+
+### Identify POC resources
+
+Specifically identify the technical resources and time commitments required to support your POC. Your POC will need:
+
+- A business representative to oversee requirements and results.
+- An application data expert, to source the data for the POC and provide knowledge of the existing processes and logic.
+- A serverless SQL pool expert.
+- An expert advisor, to optimize the POC tests.
+- Resources that will be required for specific components of your POC project, but not necessarily required for the duration of the POC. These resources could include network admins, Azure admins, Active Directory admins, Azure portal admins, and others.
+- Ensure all the required Azure service resources are provisioned and the required level of access is granted, including access to storage accounts.
+- Ensure you have an account that has the required data access permissions to retrieve data from all data sources in the POC scope.
+
+> [!TIP]
+> We recommend engaging an expert advisor to assist with your POC. [Microsoft's partner community](https://appsource.microsoft.com/marketplace/partner-dir) has global availability of expert consultants who can help you assess, evaluate, or implement Azure Synapse.
+
+### Set the timeline
+
+Review your POC planning details and business needs to identify a time frame for your POC. Make realistic estimates of the time that will be required to complete the POC goals. The time to complete your POC will be influenced by the size of your POC dataset, the number and complexity of tests, and the number of interfaces to test. If you estimate that your POC will run longer than four weeks, consider reducing the POC scope to focus on the highest priority goals. Be sure to obtain approval and commitment from all the lead resources and sponsors before continuing.
+
+## Put the POC into practice
+
+We recommend you execute your POC project with the discipline and rigor of any production project. Run the project according to plan and manage a change request process to prevent uncontrolled growth of the POC's scope.
+
+Here are some examples of high-level tasks:
+
+1. [Create a Synapse workspace](../quickstart-create-workspace.md), storage accounts, and the Azure resources identified in the POC plan.
+1. Set up [networking and security](security-white-paper-introduction.md) according to your requirements.
+1. Grant appropriate access to POC team members. See [this article](../sql/develop-storage-files-storage-access-control.md) about permissions for accessing files directly from Azure Storage.
+1. Load the POC dataset.
+1. Implement and configure the tests and/or migrate existing code to serverless SQL pool scripts and views.
+1. Execute the tests:
+ - Many tests can be executed in parallel.
+ - Record your results in a consumable and readily understandable format.
+1. Monitor for troubleshooting and performance.
+1. Evaluate your results and present findings.
+1. Work with technical stakeholders and the business to plan for the next stage of the project. The next stage could be a follow-up POC or a production implementation.
+
+## Interpret the POC results
+
+When you complete all the POC tests, you evaluate the results. Begin by evaluating whether the POC goals were met and the desired outputs were collected. Determine whether more testing is necessary or any questions need addressing.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data warehousing with dedicated SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-dedicated-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Big data analytics with Apache Spark pool in Azure Synapse Analytics](proof-of-concept-playbook-spark-pool.md)
+
+> [!div class="nextstepaction"]
+> [Build data analytics solutions using Azure Synapse serverless SQL pools](/learn/paths/build-data-analytics-solutions-using-azure-synapse-serverless-sql-pools/)
+
+> [!div class="nextstepaction"]
+> [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
+
+ Title: "Synapse POC playbook: Big data analytics with Apache Spark pool in Azure Synapse Analytics"
+description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Apache Spark pool."
+++++ Last updated : 05/23/2022++
+# Synapse POC playbook: Big data analytics with Apache Spark pool in Azure Synapse Analytics
+
+This article presents a high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Apache Spark pool.
++
+## Prepare for the POC
+
+A POC project can help you make an informed business decision about implementing a big data and advanced analytics environment on a cloud-based platform that leverages Apache Spark pool in Azure Synapse.
+
+A POC project will identify your key goals and the business drivers that a cloud-based big data and advanced analytics platform must support. It will test key metrics and prove key behaviors that are critical to the success of your data engineering, machine learning model building, and training requirements. A POC isn't designed to be deployed to a production environment. Rather, it's a short-term project that focuses on key questions, and its result can be discarded.
+
+Before you begin planning your Spark POC project:
+
+> [!div class="checklist"]
+> - Identify any restrictions or guidelines your organization has about moving data to the cloud.
+> - Identify executive or business sponsors for a big data and advanced analytics platform project. Secure their support for migration to the cloud.
+> - Identify availability of technical experts and business users to support you during the POC execution.
+
+Before you start preparing for the POC project, we recommend you first read the [Apache Spark documentation](/azure/hdinsight/spark/apache-spark-overview).
+
+> [!TIP]
+> If you're new to Spark pools, we recommend you work through the [Perform data engineering with Azure Synapse Apache Spark Pools](/learn/paths/perform-data-engineering-with-azure-synapse-apache-spark-pools/) learning path.
+
+By now you should have determined that there are no immediate blockers, so you can start preparing for your POC. If you're new to Apache Spark pools in Azure Synapse Analytics, see [this documentation](../spark/apache-spark-overview.md) for an overview of the Spark architecture and how it works in Azure Synapse.
+
+Develop an understanding of these key concepts:
+
+- Apache Spark and its distributed architecture.
+- Spark concepts like Resilient Distributed Datasets (RDD) and partitions (in-memory and physical).
+- Azure Synapse workspace, the different compute engines, pipeline, and monitoring.
+- Separation of compute and storage in Spark pool.
+- Authentication and authorization in Azure Synapse.
+- Native connectors that integrate with Azure Synapse dedicated SQL pool, Azure Cosmos DB, and others.
+
+Azure Synapse decouples compute resources from storage so that you can better manage your data processing needs and control costs. The serverless architecture of Spark pool allows you to spin up, spin down, grow, and shrink your Spark cluster independently of your storage. You can pause a Spark cluster entirely (or set up auto-pause), so you pay for compute only when it's in use; when it's not in use, you pay only for storage. You can scale up your Spark cluster for heavy data processing needs or large loads and then scale it back down during less intense processing times (or shut it down completely). Effectively scaling and pausing a cluster helps to reduce costs. Your Spark POC tests should include data ingestion and data processing at different scales (small, medium, and large) to compare price and performance at each scale. For more information, see [Automatically scale Azure Synapse Analytics Apache Spark pools](../spark/apache-spark-autoscale.md).
+
+It's important to understand the difference between the different sets of Spark APIs so you can decide what works best for your scenario. You can choose the one that provides better performance or ease of use, taking advantage of your team's existing skill sets. For more information, see [A Tale of Three Apache Spark APIs: RDDs, DataFrames, and Datasets](https://databricks.com/session/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets).
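+
+For instance, here's a minimal PySpark sketch (using a hypothetical word-count task with inline sample data) that contrasts the low-level RDD API with the higher-level DataFrame API; the DataFrame version benefits from the Catalyst optimizer, while the RDD version gives you finer control:
+
+```python
+from pyspark.sql import SparkSession
+from pyspark.sql import functions as F
+
+spark = SparkSession.builder.getOrCreate()
+
+lines = ["the quick brown fox", "the lazy dog", "the quick dog"]
+
+# RDD API: explicit map/reduce-style transformations.
+rdd_counts = (spark.sparkContext.parallelize(lines)
+              .flatMap(lambda line: line.split(" "))
+              .map(lambda word: (word, 1))
+              .reduceByKey(lambda a, b: a + b))
+print(rdd_counts.collect())
+
+# DataFrame API: declarative transformations optimized by Catalyst.
+df_counts = (spark.createDataFrame([(l,) for l in lines], ["line"])
+             .select(F.explode(F.split("line", " ")).alias("word"))
+             .groupBy("word")
+             .count())
+df_counts.show()
+```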
+
+Data and file partitioning work slightly differently in Spark. Understanding the differences will help you to optimize for performance. For more information, see Apache Spark documentation: [Partition Discovery](https://spark.apache.org/docs/latest/sql-data-sources-parquet.html#partition-discovery) and [Partition Configuration Options](https://spark.apache.org/docs/latest/sql-performance-tuning.html#other-configuration-options).
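+
+As a quick illustration, the following PySpark sketch (with a hypothetical ADLS Gen2 path and column names) writes a dataset partitioned by a date column, relies on partition discovery when reading it back, and adjusts the shuffle partition setting covered in the configuration options above:
+
+```python
+from pyspark.sql import SparkSession
+
+spark = SparkSession.builder.getOrCreate()
+
+base_path = "abfss://poc@yourstorageaccount.dfs.core.windows.net/curated/sales"  # hypothetical path
+
+df = spark.read.parquet("abfss://poc@yourstorageaccount.dfs.core.windows.net/raw/sales")
+
+# Physical (file) partitioning: one folder per sale_date value, for example .../sale_date=2022-05-01/.
+df.write.mode("overwrite").partitionBy("sale_date").parquet(base_path)
+
+# Partition discovery: Spark infers the sale_date column from the folder structure,
+# so filters on it prune whole folders instead of scanning every file.
+sales = spark.read.parquet(base_path).where("sale_date = '2022-05-01'")
+
+# In-memory partitioning: tune the number of shuffle partitions for the data volume being tested.
+spark.conf.set("spark.sql.shuffle.partitions", "200")
+print(sales.count())
+```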
+
+### Set the goals
+
+A successful POC project requires planning. Start by identifying why you're doing a POC to fully understand the real motivations. Motivations could include modernization, cost saving, performance improvement, or integrated experience. Be sure to document clear goals for your POC and the criteria that will define its success. Ask yourself:
+
+> [!div class="checklist"]
+> - What do you want as the outputs of your POC?
+> - What will you do with those outputs?
+> - Who will use the outputs?
+> - What will define a successful POC?
+
+Keep in mind that a POC should be a short and focused effort to quickly prove a limited set of concepts and capabilities. These concepts and capabilities should be representative of the overall workload. If you have a long list of items to prove, you may want to plan more than one POC. In that case, define gates between the POCs to determine whether you need to continue with the next one. Given the different professional roles that may use Spark pools and notebooks in Azure Synapse, you may choose to execute multiple POCs. For example, one POC could focus on requirements for the data engineering role, such as ingestion and processing. Another POC could focus on machine learning (ML) model development.
+
+As you consider your POC goals, ask yourself the following questions to help you shape the goals:
+
+> [!div class="checklist"]
+> - Are you migrating from an existing big data and advanced analytics platform (on-premises or cloud)?
+> - Are you migrating but want to make as few changes as possible to existing ingestion and data processing? For example, a Spark to Spark migration, or a Hadoop/Hive to Spark migration.
+> - Are you migrating but want to do some extensive improvements along the way? For example, re-writing MapReduce jobs as Spark jobs, or converting legacy RDD-based code to DataFrame/Dataset-based code.
+> - Are you building an entirely new big data and advanced analytics platform (greenfield project)?
+> - What are your current pain points? For example, scalability, performance, or flexibility.
+> - What new business requirements do you need to support?
+> - What are the SLAs that you're required to meet?
+> - What will be the workloads? For example, ETL, batch processing, stream processing, machine learning model training, analytics, reporting queries, or interactive queries?
+> - What are the skills of the users who will own the project (should the POC be implemented)? For example, PySpark vs Scala skills, notebook vs IDE experience.
+
+Here are some examples of POC goal setting:
+
+- Why are we doing a POC?
+ - We need to know that the data ingestion and processing performance for our big data workload will meet our new SLAs.
+ - We need to know whether near real-time stream processing is possible and how much throughput it can support. (Will it support our business requirements?)
+ - We need to know if our existing data ingestion and transformation processes are a good fit and where improvements will need to be made.
+ - We need to know if we can shorten our data integration run times and by how much.
+ - We need to know if our data scientists can build and train machine learning models and leverage AI/ML libraries as needed in a Spark pool.
+ - Will the move to cloud-based Synapse Analytics meet our cost goals?
+- At the conclusion of this POC:
+ - We will have the data to determine if our data processing performance requirements can be met for both batch and real-time streaming.
+ - We will have tested ingestion and processing of all our different data types (structured, semi-structured, and unstructured) that support our use cases.
+ - We will have tested some of our existing complex data processing and can identify the work that will need to be completed to migrate our portfolio of data integration to the new environment.
+ - We will have tested data ingestion and processing and will have the data points to estimate the effort required for the initial migration and load of historical data, as well as estimate the effort required to migrate our data ingestion (Azure Data Factory (ADF), Distcp, Databox, or others).
+ - We will have tested data ingestion and processing and can determine if our ETL/ELT processing requirements can be met.
+ - We will have gained insight to better estimate the effort required to complete the implementation project.
+ - We will have tested scale and scaling options and will have the data points to better configure our platform for better price-performance settings.
+ - We will have a list of items that may need more testing.
+
+### Plan the project
+
+Use your goals to identify specific tests and to provide the outputs you identified. It's important to make sure that you have at least one test to support each goal and expected output. Also, identify the specific data ingestion, batch or stream processing, and all other processes that will be executed so you can pinpoint a very specific dataset and codebase. This specific dataset and codebase will define the scope of the POC.
+
+Here's an example of the needed level of specificity in planning:
+
+- **Goal A:** We need to know whether our requirement for data ingestion and processing of batch data can be met under our defined SLA.
+- **Output A:** We will have the data to determine whether our batch data ingestion and processing can meet the data processing requirement and SLA.
+ - **Test A1:** Processing queries A, B, and C are identified as good performance tests as they are commonly executed by the data engineering team. Also, they represent overall data processing needs.
+ - **Test A2:** Processing queries X, Y, and Z are identified as good performance tests as they contain near real-time stream processing requirements. Also, they represent overall event-based stream processing needs.
+ - **Test A3:** Compare the performance of these queries at different scales of the Spark cluster (varying the number of worker nodes, the size of the worker nodes like small, medium, and large, and the number and size of executors) with the benchmark obtained from the existing system. Keep the *law of diminishing returns* in mind: adding more resources (either by scaling up or scaling out) can increase parallelism, but there's a limit, unique to each scenario, beyond which extra resources don't help. Discover the optimal configuration for each identified use case in your testing (see the timing sketch after this list).
+- **Goal B:** We need to know if our data scientists can build and train machine learning models on this platform.
+- **Output B:** We will have tested some of our machine learning models by training them on data in a Spark pool or a SQL pool, leveraging different machine learning libraries. These tests will help to determine which machine learning models can be migrated to the new environment.
+ - **Test B1:** Specific machine learning models will be tested.
+ - **Test B2:** Test the base machine learning libraries that come with Spark (Spark MLlib) along with an additional library that can be installed on Spark (like scikit-learn) to meet the requirement (see the MLlib sketch after this list).
+- **Goal C:** We will have tested data ingestion and will have the data points to:
+ - Estimate the effort for our initial historical data migration to data lake and/or the Spark pool.
+ - Plan an approach to migrate historical data.
+- **Output C:** We will have tested and determined the data ingestion rate achievable in our environment and can determine whether our data ingestion rate is sufficient to migrate historical data during the available time window.
+ - **Test C1:** Test different approaches to historical data migration. For more information, see [Transfer data to and from Azure](/architecture/data-guide/scenarios/data-transfer.md).
+ - **Test C2:** Identify the allocated bandwidth of ExpressRoute and whether any throttling is set up by the infrastructure team. For more information, see [What is Azure ExpressRoute? (Bandwidth options)](/expressroute/expressroute-introduction#bandwidth-options).
+ - **Test C3:** Test data transfer rate for both online and offline data migration. For more information, see [Copy activity performance and scalability guide](/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-azure-data-factory-and-synapse-pipelines).
+ - **Test C4:** Test data transfer from the data lake to the SQL pool by using either ADF, Polybase, or the COPY command. For more information, see [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](/sql-data-warehouse/design-elt-data-loading.md).
+- **Goal D:** We will have tested the data ingestion rate of incremental data loading and will have the data points to estimate the data ingestion and processing time window to the data lake and/or the dedicated SQL pool.
+- **Output D:** We will have tested the data ingestion rate and can determine whether our data ingestion and processing requirements can be met with the identified approach.
+ - **Test D1:** Test the daily update data ingestion and processing.
+ - **Test D2:** Test the processed data load to the dedicated SQL pool table from the Spark pool. For more information, see [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](../spark/synapse-spark-sql-pool-import-export.md).
+ - **Test D3:** Execute the daily update load process concurrently while running end user queries.
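+
+To support a test like **Test A3**, a simple timing harness such as the following PySpark sketch can be rerun unchanged on each Spark pool configuration (node size, node count, executor settings); the storage path, column names, and query are placeholders for your own workload:
+
+```python
+import time
+from pyspark.sql import SparkSession
+
+spark = SparkSession.builder.getOrCreate()
+
+# Hypothetical test data and query; substitute queries A, B, and C from your test plan.
+df = spark.read.parquet("abfss://poc@yourstorageaccount.dfs.core.windows.net/raw/sales")
+
+start = time.perf_counter()
+rows = df.groupBy("region").agg({"amount": "sum"}).collect()  # collect() forces full execution
+elapsed = time.perf_counter() - start
+
+executors = spark.sparkContext.getConf().get("spark.executor.instances", "dynamic")
+print(f"executors={executors}, rows={len(rows)}, elapsed_seconds={elapsed:.1f}")
+```
+
+Record the elapsed time for each run alongside the pool settings so you can compare price-performance across scales.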
+
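+For **Test B2**, a minimal Spark MLlib sketch along these lines (assuming a labeled training dataset with hypothetical feature column names and storage path) can confirm that model training works end to end in the Spark pool before you move on to heavier models or additional libraries such as scikit-learn:
+
+```python
+from pyspark.sql import SparkSession
+from pyspark.ml.feature import VectorAssembler
+from pyspark.ml.classification import LogisticRegression
+
+spark = SparkSession.builder.getOrCreate()
+
+# Hypothetical labeled dataset with numeric feature columns and a binary "label" column.
+train = spark.read.parquet("abfss://poc@yourstorageaccount.dfs.core.windows.net/curated/train")
+
+assembler = VectorAssembler(inputCols=["feature_1", "feature_2"], outputCol="features")
+model = LogisticRegression(featuresCol="features", labelCol="label").fit(assembler.transform(train))
+
+print(f"training accuracy: {model.summary.accuracy:.3f}")  # quick sanity check only
+```
+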
+Be sure to refine your tests by adding multiple testing scenarios. Azure Synapse makes it easy to test different scales (varying the number of worker nodes and the size of the worker nodes, like small, medium, and large) to compare performance and behavior.
+
+Here are some testing scenarios:
+
+- **Spark pool test A:** We will execute data processing across multiple node types (small, medium, and large) as well as different numbers of worker nodes.
+- **Spark pool test B:** We will load/retrieve processed data from the Spark pool to the dedicated SQL pool by using [the connector](../spark/synapse-spark-sql-pool-import-export.md).
+- **Spark pool test C:** We will load/retrieve processed data from the Spark pool to Cosmos DB by using Azure Synapse Link.
+
+### Evaluate the POC dataset
+
+Using the specific tests you identified, select a dataset to support the tests. Take time to review this dataset. You should verify that the dataset will adequately represent your future processing in terms of content, complexity, and scale. Don't use a dataset that's too small (less than 1 TB) because it won't deliver representative performance. Conversely, don't use a dataset that's too large because the POC shouldn't become a full data migration. Be sure to obtain the appropriate benchmarks from existing systems so you can use them for performance comparisons.
+
+> [!IMPORTANT]
+> Make sure you check with business owners for any blockers before moving any data to the cloud. Identify any security or privacy concerns or any data obfuscation needs that should be done before moving data to the cloud.
+
+### Create a high-level architecture
+
+Based on your proposed future-state architecture, identify the components that will form part of your POC. Your high-level future-state architecture likely contains many data sources, numerous data consumers, big data components, and possibly machine learning and artificial intelligence (AI) data consumers. Your POC architecture should specifically identify components that will be part of the POC. Importantly, it should identify any components that won't form part of the POC testing.
+
+If you're already using Azure, identify any resources you already have in place (Azure Active Directory, ExpressRoute, and others) that you can use during the POC. Also identify the Azure regions your organization uses. Now is a great time to identify the throughput of your ExpressRoute connection and to check with other business users that your POC can consume some of that throughput without adverse impact on production systems.
+
+For more information, see [Big data architectures](/architecture/data-guide/big-data.md).
+
+### Identify POC resources
+
+Specifically identify the technical resources and time commitments required to support your POC. Your POC will need:
+
+- A business representative to oversee requirements and results.
+- An application data expert, to source the data for the POC and provide knowledge of the existing processes and logic.
+- An Apache Spark and Spark pool expert.
+- An expert advisor, to optimize the POC tests.
+- Resources that will be required for specific components of your POC project, but not necessarily required for the duration of the POC. These resources could include network admins, Azure admins, Active Directory admins, Azure portal admins, and others.
+- All required Azure service resources provisioned, with the required level of access granted, including access to storage accounts.
+- An account that has the required data access permissions to retrieve data from all data sources in the POC scope.
+
+> [!TIP]
+> We recommend engaging an expert advisor to assist with your POC. [Microsoft's partner community](https://appsource.microsoft.com/marketplace/partner-dir) has global availability of expert consultants who can help you assess, evaluate, or implement Azure Synapse.
+
+### Set the timeline
+
+Review your POC planning details and business needs to identify a time frame for your POC. Make realistic estimates of the time that will be required to complete the POC goals. The time to complete your POC will be influenced by the size of your POC dataset, the number and complexity of tests, and the number of interfaces to test. If you estimate that your POC will run longer than four weeks, consider reducing the POC scope to focus on the highest priority goals. Be sure to obtain approval and commitment from all the lead resources and sponsors before continuing.
+
+## Put the POC into practice
+
+We recommend you execute your POC project with the discipline and rigor of any production project. Run the project according to plan and manage a change request process to prevent uncontrolled growth of the POC's scope.
+
+Here are some examples of high-level tasks:
+
+1. [Create a Synapse workspace](../quickstart-create-workspace.md), Spark pools and dedicated SQL pools, storage accounts, and all Azure resources identified in the POC plan.
+1. Load POC dataset:
+ - Make data available in Azure by extracting from the source or by creating sample data in Azure. For more information, see:
+ - [Transferring data to and from Azure](/architecture/databox/data-box-overview.md#use-cases)
+ - [Azure Data Box](https://azure.microsoft.com/services/databox/)
+ - [Copy activity performance and scalability guide](/data-factory/copy-activity-performance#copy-performance-and-scalability-achievable-using-adf)
+ - [Data loading strategies for dedicated SQL pool in Azure Synapse Analytics](../sql-data-warehouse/design-elt-data-loading.md)
+ - [Bulk load data using the COPY statement](../sql-data-warehouse/quickstart-bulk-load-copy-tsql.md?view=azure-sqldw-latest&preserve-view=true)
+ - Test the dedicated connector for the Spark pool and the dedicated SQL pool.
+1. Migrate existing code to the Spark pool:
+ - If you're migrating from Spark, your migration effort is likely to be straightforward given that the Spark pool leverages the open-source Spark distribution. However, if you're using vendor-specific features on top of core Spark features, you'll need to correctly map these features to the Spark pool features.
+ - If you're migrating from a non-Spark system, your migration effort will vary based on the complexity involved.
+1. Execute the tests:
+ - Many tests can be executed in parallel across multiple Spark pool clusters.
+ - Record your results in a consumable and readily understandable format.
+1. Monitor for troubleshooting and performance. For more information, see:
+ - [Monitor Apache Spark activities](../get-started-monitor.md#apache-spark-activities)
+ - [Monitor with web user interfaces - Spark's history server](https://spark.apache.org/docs/3.0.0-preview/web-ui.html)
+ - [Monitoring resource utilization and query activity in Azure Synapse Analytics](../../sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md)
+1. Monitor data skew, time skew, and executor usage percentage by opening the **Diagnostic** tab of Spark's history server (a quick partition-count check is also sketched after this list).
+
+ :::image type="content" source="media/proof-of-concept-playbook-apache-spark-pool/apache-spark-history-server-diagnostic-tab.png" alt-text="Image shows the Diagnostic tab of Spark's history server.":::
+
+## Interpret the POC results
+
+When you complete all the POC tests, evaluate the results. Begin by evaluating whether the POC goals were met and the desired outputs were collected. Determine whether more testing is necessary or whether any questions need addressing.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Data lake exploration with dedicated SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-dedicated-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Data lake exploration with serverless SQL pool in Azure Synapse Analytics](proof-of-concept-playbook-serverless-sql-pool.md)
+
+> [!div class="nextstepaction"]
+> [Azure Synapse Analytics frequently asked questions](../overview-faq.yml)
synapse-analytics Success By Design Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/success-by-design-introduction.md
+
+ Title: Success by design
+description: "TODO: Success by design"
+++++ Last updated : 05/23/2022++
+# Success by design
+
+Welcome to the Azure Synapse Customer Success Engineering Success by Design repository.
+
+As part of the Synapse Engineering Group at Microsoft, the Azure Synapse Customer Success Engineering (CSE) team is relied upon to help with complex projects that involve Azure Synapse. As a team of experienced professionals, we provide guidance for all facets of Azure Synapse, including working with customers on architecture reviews, configuration guidance, and performance analysis and recommendations. It's from this experience that the CSE team created this repository of guidance articles to help you produce successful solutions.
+
+This guidance is intended to supplement the official Azure Synapse documentation with additional specialized content that:
+
+- Helps you evaluate Azure Synapse by using a Proof of Concept (POC) project.
+- Helps you successfully implement a solution that incorporates Azure Synapse by using our [implementation success method](implementation-success-overview.md). This method can guide you through the key phases of your implementation, including planning, solution development, deployment, and post go-live evaluation.
+- Provides detailed guidance on complex topics, including security, networking, troubleshooting, performance tuning, and migration.
+
+> [!NOTE]
+> This guidance undergoes continuous improvement by the Synapse CSE team. In time, guidance articles will cover additional topics, including enterprise development support using continuous integration and continuous deployment (CI/CD) methods, business continuity and disaster recovery (BCDR), high availability (HA), and Azure Synapse monitoring.
+
+Next steps:
+
+> [!div class="nextstepaction"]
+> [Synapse proof of concept playbook](proof-of-concept-playbook-overview.md)
+
+> [!div class="nextstepaction"]
+> [Synapse implementation success method](implementation-success-overview.md)
+
+> [!div class="nextstepaction"]
+> [Security white paper](security-white-paper-introduction.md)
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 05/04/2022 Last updated : 06/02/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## May 2022
+
+Here's what changed in May 2022:
+
+### Background effects with Teams on Azure Virtual Desktop now generally available
+
+Users can now make meetings more personalized and avoid unexpected distractions by applying background effects. Meeting participants can select an available image in Teams to change their background or choose to blur their background. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-background-effects-is-now-generally-available-on/ba-p/3401961).
+
+### Multi-window and "Call me with Teams" features now generally available
+
+The multi-window feature gives users the option to pop out chats, meetings, calls, or documents into separate windows to streamline their workflow. The "Call me" feature lets users transfer a Teams call to their phone. Both features are now generally available in Teams on Azure Virtual Desktop. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/microsoft-teams-multi-window-support-and-call-me-are-now-in-ga/ba-p/3401830).
+
+### Japan metadata service in public preview
+
+The Azure Virtual Desktop metadata database located in Japan is now in public preview. This allows customers to store their Azure Virtual Desktop objects and metadata in a database located in our Japan geography, ensuring that the data resides only within Japan. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-the-public-preview-of-the-azure-virtual-desktop/m-p/3417497).
+
+### FSLogix 2201 hotfix
+
+The latest update for FSLogix 2201 includes fixes to Cloud Cache and container redirection processes. No new features are included with this update. Learn more at [What's new in FSLogix](/fslogix/whats-new?context=%2Fazure%2Fvirtual-desktop%2Fcontext%2Fcontext) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-fslogix-2201-hotfix-1-2-9-8171-14983-has-been/m-p/3435445).
+ ## April 2022 Here's what changed in April 2022:
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+> [!IMPORTANT]
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+ Create a scale set from a generalized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). If you want to create a scale set using a specialized image version, see [Create scale set instances from a specialized image](instance-specialized-image-version-cli.md). ## Create a scale set from an image in your gallery
virtual-machine-scale-sets Instance Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+> [!IMPORTANT]
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+ Create a scale set from a [specialized image version](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery. If you want to create a scale set using a generalized image version, see [Create a scale set from a generalized image](instance-generalized-image-version-cli.md). > [!IMPORTANT]
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
You can deploy a scale set with a Windows Server image or Linux image such as RH
:::image type="content" source="./media/virtual-machine-scale-sets-create-portal/quick-create-scale-set.png" alt-text="Image shows create options for scale sets in the Azure portal."::: 1. Select **Next** to move the the other pages.
-1. Leave the defaults for the **Instance** and **Disks** pages.
-1. On the **Networking** page, under **Load balancing**, select **Yes** to put the scale set instances behind a load balancer.
+1. Leave the defaults for the **Disks** page.
+1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** option to put the scale set instances behind a load balancer.
1. In **Load balancing options**, select **Azure load balancer**. 1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier. 1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
virtual-machine-scale-sets Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/share-images-across-tenants.md
## Create a scale set using Azure CLI
+> [!IMPORTANT]
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+ Sign in the service principal for tenant 1 using the appID, the app key, and the ID of tenant 1. You can use `az account show --query "tenantId"` to get the tenant IDs if needed. ```azurecli-interactive
virtual-machine-scale-sets Virtual Machine Scale Sets Maintenance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-maintenance-notifications.md
The **Self-service maintenance** column now appears in the list of virtual machi
Azure communicates a schedule for planned maintenance by sending an email to the subscription owner and co-owners group. You can add recipients and channels to this communication by creating Activity Log alerts. For more information, see [Monitor subscription activity with the Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md). 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the left menu, select **Monitor**.
-3. In the **Monitor - Alerts (classic)** pane, select **+Add activity log alert**.
-4. On the **Add activity log alert** page, select or enter the requested information. In **Criteria**, make sure that you set the following values:
- - **Event category**: Select **Service Health**.
- - **Services**: Select **Virtual Machine Scale Sets and Virtual Machines**.
- - **Type**: Select **Planned maintenance**.
+1. In the left menu, select **Monitor**.
+1. In the Monitor menu, select **Service Health**.
+
+ :::image type="content" source="./media/virtual-machine-scale-sets-maintenance-notifications/monitor-service-health.png" alt-text="Select Service Health in the Monitor menu.":::
+
+1. In Service Health, select **+ Create service health alert**.
+
+ :::image type="content" source="./media/virtual-machine-scale-sets-maintenance-notifications/monitor-create-service-health-alert.png" alt-text="Select Create service health alert button.":::
+
+1. On the **Create an alert rule** page:
+ 1. Select the relevant **Subscription** and **Region** containing the resources to monitor for planned maintenance events.
+ 1. Specify the following:
+ - **Services**: *Virtual Machine Scale Sets* and *Virtual Machines*
+ - **Event type**: *Planned maintenance*
+1. Under **Actions**, add action groups to the alert rule in order to send notifications or invoke actions when a planned maintenance event is received.
+1. Fill out the details under **Alert rule details**.
+1. Select **Create alert rule**.
+ To learn more about how to configure Activity Log alerts, see [Create Activity Log alerts](../azure-monitor/alerts/activity-log-alerts.md)
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| SKUs supported | D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported | All SKUs | All SKUs | | Full control over VM, NICs, Disks | Yes | Limited control with virtual machine scale sets VM API | Yes | | RBAC Permissions Required | Compute VMSS Write, Compute VM Write, Network | Compute VMSS Write | N/A |
+| Cross tenant shared image gallery | No | Yes | Yes |
| Accelerated networking | Yes | Yes | Yes | | Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | Yes, instances must either be all Spot or all Regular | No, Regular priority instances only | | Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set | No, instances are the same operating system | Yes, Linux and Windows can reside in the same availability set |
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Write Accelerator  | No | Yes | Yes | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes | | Azure Dedicated Hosts  | No | Yes | Yes |
-| Managed Identity | User Assigned Identity Only | System Assigned or User Assigned | N/A (can specify Managed Identity on individual instances) |
+| Managed Identity | [User Assigned Identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) only<sup>1</sup> | System Assigned or User Assigned | N/A (can specify Managed Identity on individual instances) |
| Add/remove existing VM to the group | No | No | No | | Service Fabric | No | Yes | No | | Azure Kubernetes Service (AKS) / AKE | No | Yes | No | | UserData | Yes | Yes | UserData can be specified for individual VMs |
+<sup>1</sup> For Uniform scale sets, the `GET VMSS` response will have a reference to the *identity*, *clientID*, and *principalID*. For Flexible scale sets, the response will only get a reference to the *identity*. You can make a call to `Identity` to get the *clientID* and *principalID*.
### Autoscaling and instance orchestration | Feature | Supported by Flexible orchestration for scale sets | Supported by Uniform orchestration for scale sets | Supported by Availability Sets |
virtual-machines Flexible Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flexible-virtual-machine-scale-sets.md
The following tables list the Flexible orchestration mode features and links to
| SKUs supported | D series, E series, F series, A series, B series, Intel, AMD; Specialty SKUs (G, H, L, M, N) are not supported | | Full control over VM, NICs, Disks | Yes | | RBAC Permissions Required | Compute VMSS Write, Compute VM Write, Network |
+| Cross tenant shared image gallery | No |
| Accelerated networking | Yes | | Spot instances and pricing  | Yes, you can have both Spot and Regular priority instances | | Mix operating systems | Yes, Linux and Windows can reside in the same Flexible scale set |
The following tables list the Flexible orchestration mode features and links to
| Write Accelerator  | No | | Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | | Azure Dedicated Hosts  | No |
-| Managed Identity | User Assigned Identity Only |
+| Managed Identity | [User Assigned Identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) only<sup>1</sup> |
| Add/remove existing VM to the group | No | | Service Fabric | No | | Azure Kubernetes Service (AKS) / AKE | No | | UserData | Yes |
+<sup>1</sup> For Uniform scale sets, the `GET VMSS` response will have a reference to the *identity*, *clientID*, and *principalID*. For Flexible scale sets, the response will only get a reference to the *identity*. You can make a call to `Identity` to get the *clientID* and *principalID*.
### Autoscaling and instance orchestration
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
This document contains all major API changes and feature updates for the Azure I
## API Releases
+### 2022-02-14
+**Improvements**:
+- [Validation Support](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json#properties-validate)
+ - Shell (Linux) - Script or Inline
+ - PowerShell (Windows) - Script or Inline, run elevated, run as system
+ - Source-Validation-Only mode
+- [Customized staging resource group support](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json#properties-stagingresourcegroup)
### 2021-10-01
For API versions 2021-10-01 and newer, the error output will look like the follo
- Added support for customers to use their own VNet. - Added support for customers to customize the build VM (VM size, OS disk size). - Added support for user assigned MSI (for customize/distribute steps).
- - Added support for [Gen2 images.](image-builder-overview.md#hyper-v-generation).
+ - Added support for [Gen2 images.](image-builder-overview.md#hyper-v-generation)
### Preview APIs
virtual-machines Flatcar Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/flatcar-create-upload-vhd.md
Alternatively, you can choose to build your own Flatcar Container Linux
image. On any linux based machine, follow the instructions detailed in the
-[Flatcar Container Linux developer SDK guide](https://docs.flatcar-linux.org/os/sdk-modifying-flatcar/). When
+[Flatcar Container Linux developer SDK guide](https://www.flatcar.org/docs/latest/reference/developer-guides/). When
running the `image_to_vm.sh` script, make sure you pass `--format=azure` to create an Azure virtual hard disk.
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Write-Output '>>> Sysprep complete ...'
#### Default Linux deprovision command ```bash
-/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync
+# Prefer the waagent on the PATH if it responds; otherwise fall back to /usr/sbin/waagent.
+WAAGENT=/usr/sbin/waagent
+waagent -version 1> /dev/null 2>&1
+if [ $? -eq 0 ]; then
+  WAAGENT=waagent
+fi
+$WAAGENT -force -deprovision+user && export HISTSIZE=0 && sync
``` #### Overriding the Commands
virtual-machines Share Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery.md
There are two main ways to share images in an Azure Compute Gallery:
- Role-based access control (RBAC) lets you share resources to specific people, groups, or service principals on a granular level. - Community gallery lets you share your entire gallery publicly, to all Azure users.
+> [!IMPORTANT]
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+ ## RBAC The Azure Compute Gallery, definitions, and versions are all resources that can be shared using the built-in native Azure RBAC controls. Using Azure RBAC you can share these resources to other users, service principals, and groups. You can even share access to individuals outside of the tenant they were created within. Once a user has access to the image or application version, they can deploy a VM or a Virtual Machine Scale Set.
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/shared-image-galleries.md
Image version:
## Sharing
+> [!IMPORTANT]
+> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+ You can [share images](share-gallery.md) to users and groups using the standard role-based access control (RBAC) or you can share an entire gallery of images to the public, using a [community gallery (preview)](azure-compute-gallery.md#community). > [!IMPORTANT]
virtual-machines Compute Benchmark Scores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/compute-benchmark-scores.md
Title: Compute benchmark scores for Azure Windows VMs
-description: Compare SPECint compute benchmark scores for Azure VMs running Windows Server.
-
+ Title: Compute benchmark scores for Azure Windows VMs
+description: Compare Coremark compute benchmark scores for Azure VMs running Windows Server.
+ Previously updated : 04/26/2022-- Last updated : 05/31/2022++ # Compute benchmark scores for Windows VMs
The following CoreMark benchmark scores show compute performance for select Azur
| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_F2s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 2 | 1 | 4.0 | 34,903 | 1,101 | 3.15% | 112 |
-| Standard_F2s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 4.0 | 34,738 | 1,331 | 3.83% | 224 |
-| Standard_F4s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 4 | 1 | 8.0 | 66,828 | 1,524 | 2.28% | 168 |
-| Standard_F4s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 4 | 1 | 8.0 | 66,903 | 1,047 | 1.57% | 182 |
-| Standard_F8s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 8 | 1 | 16.0 | 131,477 | 2,180 | 1.66% | 140 |
-| Standard_F8s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 8 | 1 | 16.0 | 132,533 | 1,732 | 1.31% | 210 |
-| Standard_F16s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 16 | 1 | 32.0 | 260,760 | 3,629 | 1.39% | 112 |
-| Standard_F16s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 1 | 32.0 | 265,158 | 2,185 | 0.82% | 182 |
-| Standard_F32s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 32 | 1 | 64.0 | 525,608 | 6,270 | 1.19% | 98 |
-| Standard_F32s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 32 | 1 | 64.0 | 530,137 | 6,085 | 1.15% | 140 |
-| Standard_F48s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 48 | 2 | 96.0 | 769,768 | 7,567 | 0.98% | 112 |
-| Standard_F48s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 48 | 1 | 96.0 | 742,828 | 17,316 | 2.33% | 112 |
-| Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz | 64 | 2 | 128.0 | 1,030,552 | 8,106 | 0.79% | 70 |
-| Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 64 | 2 | 128.0 | 1,028,052 | 9,373 | 0.91% | 168 |
+| Standard_F2s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 2 | 1 | 4.0 | 34,903 | 1,101 | 3.15% | 112 |
+| Standard_F2s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 2 | 1 | 4.0 | 34,738 | 1,331 | 3.83% | 224 |
+| Standard_F4s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 4 | 1 | 8.0 | 66,828 | 1,524 | 2.28% | 168 |
+| Standard_F4s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 4 | 1 | 8.0 | 66,903 | 1,047 | 1.57% | 182 |
+| Standard_F8s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 8 | 1 | 16.0 | 131,477 | 2,180 | 1.66% | 140 |
+| Standard_F8s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 8 | 1 | 16.0 | 132,533 | 1,732 | 1.31% | 210 |
+| Standard_F16s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 16 | 1 | 32.0 | 260,760 | 3,629 | 1.39% | 112 |
+| Standard_F16s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 16 | 1 | 32.0 | 265,158 | 2,185 | 0.82% | 182 |
+| Standard_F32s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 32 | 1 | 64.0 | 525,608 | 6,270 | 1.19% | 98 |
+| Standard_F32s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 32 | 1 | 64.0 | 530,137 | 6,085 | 1.15% | 140 |
+| Standard_F48s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 48 | 2 | 96.0 | 769,768 | 7,567 | 0.98% | 112 |
+| Standard_F48s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 48 | 1 | 96.0 | 742,828 | 17,316 | 2.33% | 112 |
+| Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 64 | 2 | 128.0 | 1,030,552 | 8,106 | 0.79% | 70 |
+| Standard_F64s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 64 | 2 | 128.0 | 1,028,052 | 9,373 | 0.91% | 168 |
+| Standard_F72s_v2 | Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70 GHz | 72 | 2 | 144.0 | N/A | - | - | - |
+| Standard_F72s_v2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 72 | 2 | 144.0 | N/A | - | - | - |
+ ### Fs - Compute Optimized + Premium Storage (04/28/2021 PBIID:9198755) | VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_F1s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 1 | 1 | 2.0 | 16,445 | 825 | 5.02% | 42 |
-| Standard_F1s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 1 | 1 | 2.0 | 17,614 | 2,873 | 16.31% | 210 |
-| Standard_F1s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 1 | 1 | 2.0 | 16,053 | 1,802 | 11.22% | 70 |
-| Standard_F1s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 1 | 1 | 2.0 | 20,007 | 1,684 | 8.42% | 28 |
-| Standard_F2s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 2 | 1 | 4.0 | 33,451 | 3,424 | 10.24% | 70 |
-| Standard_F2s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 2 | 1 | 4.0 | 33,626 | 2,990 | 8.89% | 154 |
-| Standard_F2s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 2 | 1 | 4.0 | 34,386 | 3,851 | 11.20% | 98 |
-| Standard_F2s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 4.0 | 36,826 | 344 | 0.94% | 28 |
-| Standard_F4s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 4 | 1 | 8.0 | 67,351 | 4,407 | 6.54% | 42 |
-| Standard_F4s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 4 | 1 | 8.0 | 67,009 | 4,637 | 6.92% | 196 |
-| Standard_F4s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 4 | 1 | 8.0 | 63,668 | 3,375 | 5.30% | 84 |
-| Standard_F4s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 4 | 1 | 8.0 | 79,153 | 15,034 | 18.99% | 28 |
-| Standard_F8s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 8 | 1 | 16.0 | 128,232 | 1,272 | 0.99% | 42 |
-| Standard_F8s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 8 | 1 | 16.0 | 127,871 | 5,109 | 4.00% | 154 |
-| Standard_F8s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 8 | 1 | 16.0 | 122,811 | 5,481 | 4.46% | 126 |
-| Standard_F8s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 8 | 1 | 16.0 | 154,842 | 10,354 | 6.69% | 28 |
-| Standard_F16s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 16 | 2 | 32.0 | 260,883 | 15,853 | 6.08% | 42 |
-| Standard_F16s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 16 | 1 | 32.0 | 255,762 | 4,966 | 1.94% | 182 |
-| Standard_F16s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 16 | 1 | 32.0 | 248,884 | 11,035 | 4.43% | 70 |
-| Standard_F16s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 1 | 32.0 | 310,303 | 21,942 | 7.07% | 28 |
+| Standard_F1s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 1 | 1 | 2.0 | 16,445 | 825 | 5.02% | 42 |
+| Standard_F1s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 1 | 1 | 2.0 | 17,614 | 2,873 | 16.31% | 210 |
+| Standard_F1s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 1 | 1 | 2.0 | 16,053 | 1,802 | 11.22% | 70 |
+| Standard_F1s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 1 | 1 | 2.0 | 20,007 | 1,684 | 8.42% | 28 |
+| Standard_F2s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 2 | 1 | 4.0 | 33,451 | 3,424 | 10.24% | 70 |
+| Standard_F2s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 2 | 1 | 4.0 | 33,626 | 2,990 | 8.89% | 154 |
+| Standard_F2s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 2 | 1 | 4.0 | 34,386 | 3,851 | 11.20% | 98 |
+| Standard_F2s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 2 | 1 | 4.0 | 36,826 | 344 | 0.94% | 28 |
+| Standard_F4s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 4 | 1 | 8.0 | 67,351 | 4,407 | 6.54% | 42 |
+| Standard_F4s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 4 | 1 | 8.0 | 67,009 | 4,637 | 6.92% | 196 |
+| Standard_F4s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 4 | 1 | 8.0 | 63,668 | 3,375 | 5.30% | 84 |
+| Standard_F4s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 4 | 1 | 8.0 | 79,153 | 15,034 | 18.99% | 28 |
+| Standard_F8s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 8 | 1 | 16.0 | 128,232 | 1,272 | 0.99% | 42 |
+| Standard_F8s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 8 | 1 | 16.0 | 127,871 | 5,109 | 4.00% | 154 |
+| Standard_F8s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 8 | 1 | 16.0 | 122,811 | 5,481 | 4.46% | 126 |
+| Standard_F8s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 8 | 1 | 16.0 | 154,842 | 10,354 | 6.69% | 28 |
+| Standard_F16s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 16 | 2 | 32.0 | 260,883 | 15,853 | 6.08% | 42 |
+| Standard_F16s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 16 | 1 | 32.0 | 255,762 | 4,966 | 1.94% | 182 |
+| Standard_F16s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 16 | 1 | 32.0 | 248,884 | 11,035 | 4.43% | 70 |
+| Standard_F16s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 16 | 1 | 32.0 | 310,303 | 21,942 | 7.07% | 28 |
### F - Compute Optimized (04/28/2021 PBIID:9198755) | VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_F1 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 1 | 1 | 2.0 | 17,356 | 1,151 | 6.63% | 112 |
-| Standard_F1 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 1 | 1 | 2.0 | 16,508 | 1,740 | 10.54% | 154 |
-| Standard_F1 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 1 | 1 | 2.0 | 16,076 | 2,065 | 12.84% | 70 |
-| Standard_F1 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 1 | 1 | 2.0 | 20,074 | 1,612 | 8.03% | 14 |
-| Standard_F2 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 2 | 1 | 4.0 | 32,770 | 1,915 | 5.84% | 126 |
-| Standard_F2 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 2 | 1 | 4.0 | 33,081 | 2,242 | 6.78% | 126 |
-| Standard_F2 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 2 | 1 | 4.0 | 33,310 | 2,532 | 7.60% | 84 |
-| Standard_F2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 4.0 | 40,746 | 2,027 | 4.98% | 14 |
-| Standard_F4 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 4 | 1 | 8.0 | 65,694 | 3,512 | 5.35% | 126 |
-| Standard_F4 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 4 | 1 | 8.0 | 65,054 | 3,457 | 5.31% | 154 |
-| Standard_F4 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 4 | 1 | 8.0 | 61,607 | 3,662 | 5.94% | 56 |
-| Standard_F4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 4 | 1 | 8.0 | 76,884 | 1,763 | 2.29% | 14 |
-| Standard_F8 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 8 | 1 | 16.0 | 130,415 | 5,353 | 4.10% | 98 |
-| Standard_F8 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 8 | 1 | 16.0 | 126,139 | 2,917 | 2.31% | 126 |
-| Standard_F8 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 8 | 1 | 16.0 | 122,443 | 4,391 | 3.59% | 98 |
-| Standard_F8 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 8 | 1 | 16.0 | 144,696 | 2,172 | 1.50% | 14 |
-| Standard_F16 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 16 | 2 | 32.0 | 253,473 | 8,597 | 3.39% | 140 |
-| Standard_F16 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 16 | 1 | 32.0 | 257,457 | 7,596 | 2.95% | 126 |
-| Standard_F16 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 16 | 1 | 32.0 | 244,559 | 8,036 | 3.29% | 70 |
-| Standard_F16 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 1 | 32.0 | 283,565 | 8,683 | 3.06% | 14 |
+| Standard_F1 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 1 | 1 | 2.0 | 17,356 | 1,151 | 6.63% | 112 |
+| Standard_F1 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 1 | 1 | 2.0 | 16,508 | 1,740 | 10.54% | 154 |
+| Standard_F1 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 1 | 1 | 2.0 | 16,076 | 2,065 | 12.84% | 70 |
+| Standard_F1 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 1 | 1 | 2.0 | 20,074 | 1,612 | 8.03% | 14 |
+| Standard_F2 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 2 | 1 | 4.0 | 32,770 | 1,915 | 5.84% | 126 |
+| Standard_F2 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 2 | 1 | 4.0 | 33,081 | 2,242 | 6.78% | 126 |
+| Standard_F2 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 2 | 1 | 4.0 | 33,310 | 2,532 | 7.60% | 84 |
+| Standard_F2 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 2 | 1 | 4.0 | 40,746 | 2,027 | 4.98% | 14 |
+| Standard_F4 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 4 | 1 | 8.0 | 65,694 | 3,512 | 5.35% | 126 |
+| Standard_F4 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 4 | 1 | 8.0 | 65,054 | 3,457 | 5.31% | 154 |
+| Standard_F4 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 4 | 1 | 8.0 | 61,607 | 3,662 | 5.94% | 56 |
+| Standard_F4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 4 | 1 | 8.0 | 76,884 | 1,763 | 2.29% | 14 |
+| Standard_F8 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 8 | 1 | 16.0 | 130,415 | 5,353 | 4.10% | 98 |
+| Standard_F8 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 8 | 1 | 16.0 | 126,139 | 2,917 | 2.31% | 126 |
+| Standard_F8 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 8 | 1 | 16.0 | 122,443 | 4,391 | 3.59% | 98 |
+| Standard_F8 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 8 | 1 | 16.0 | 144,696 | 2,172 | 1.50% | 14 |
+| Standard_F16 | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 16 | 2 | 32.0 | 253,473 | 8,597 | 3.39% | 140 |
+| Standard_F16 | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 16 | 1 | 32.0 | 257,457 | 7,596 | 2.95% | 126 |
+| Standard_F16 | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 16 | 1 | 32.0 | 244,559 | 8,036 | 3.29% | 70 |
+| Standard_F16 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 16 | 1 | 32.0 | 283,565 | 8,683 | 3.06% | 14 |
### GS - Compute Optimized + Premium Storage (05/27/2021 PBIID:9198755) | VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_GS1 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 2 | 1 | 28.0 | 35,593 | 2,888 | 8.11% | 252 |
-| Standard_GS2 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 4 | 1 | 56.0 | 72,188 | 5,949 | 8.24% | 252 |
-| Standard_GS3 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 8 | 1 | 112.0 | 132,665 | 6,910 | 5.21% | 238 |
-| Standard_GS4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 16 | 1 | 224.0 | 261,542 | 3,722 | 1.42% | 252 |
-| Standard_GS4-4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 4 | 1 | 224.0 | 79,352 | 4,935 | 6.22% | 224 |
-| Standard_GS4-8 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 8 | 1 | 224.0 | 137,774 | 6,887 | 5.00% | 238 |
-| Standard_GS5 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 32 | 2 | 448.0 | 507,026 | 6,895 | 1.36% | 252 |
-| Standard_GS5-8 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 8 | 2 | 448.0 | 157,541 | 3,151 | 2.00% | 238 |
-| Standard_GS5-16 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 16 | 2 | 448.0 | 278,656 | 5,235 | 1.88% | 224 |
+| Standard_GS1 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 2 | 1 | 28.0 | 35,593 | 2,888 | 8.11% | 252 |
+| Standard_GS2 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 4 | 1 | 56.0 | 72,188 | 5,949 | 8.24% | 252 |
+| Standard_GS3 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 8 | 1 | 112.0 | 132,665 | 6,910 | 5.21% | 238 |
+| Standard_GS4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 16 | 1 | 224.0 | 261,542 | 3,722 | 1.42% | 252 |
+| Standard_GS4-4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 4 | 1 | 224.0 | 79,352 | 4,935 | 6.22% | 224 |
+| Standard_GS4-8 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 8 | 1 | 224.0 | 137,774 | 6,887 | 5.00% | 238 |
+| Standard_GS5 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 32 | 2 | 448.0 | 507,026 | 6,895 | 1.36% | 252 |
+| Standard_GS5-8 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 8 | 2 | 448.0 | 157,541 | 3,151 | 2.00% | 238 |
+| Standard_GS5-16 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 16 | 2 | 448.0 | 278,656 | 5,235 | 1.88% | 224 |
### G - Compute Optimized (05/27/2021 PBIID:9198755) | VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_G1 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 2 | 1 | 28.0 | 36,386 | 4,100 | 11.27% | 252 |
-| Standard_G2 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 4 | 1 | 56.0 | 72,484 | 5,563 | 7.67% | 252 |
-| Standard_G3 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 8 | 1 | 112.0 | 136,618 | 5,714 | 4.18% | 252 |
-| Standard_G4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 16 | 1 | 224.0 | 261,708 | 3,426 | 1.31% | 238 |
-| Standard_G5 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00GHz | 32 | 2 | 448.0 | 507,423 | 7,261 | 1.43% | 252 |
+| Standard_G1 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 2 | 1 | 28.0 | 36,386 | 4,100 | 11.27% | 252 |
+| Standard_G2 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 4 | 1 | 56.0 | 72,484 | 5,563 | 7.67% | 252 |
+| Standard_G3 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 8 | 1 | 112.0 | 136,618 | 5,714 | 4.18% | 252 |
+| Standard_G4 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 16 | 1 | 224.0 | 261,708 | 3,426 | 1.31% | 238 |
+| Standard_G5 | Intel(R) Xeon(R) CPU E5-2698B v3 @ 2.00 GHz | 32 | 2 | 448.0 | 507,423 | 7,261 | 1.43% | 252 |
## General purpose
The following CoreMark benchmark scores show compute performance for select Azur
| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : |
-| Standard_B1ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 1 | 1 | 2.0 | 18,093 | 679 | 3.75% | 42 |
-| Standard_B1ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 1 | 1 | 2.0 | 18,197 | 1,341 | 7.37% | 168 |
-| Standard_B1ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 1 | 1 | 2.0 | 17,975 | 920 | 5.12% | 112 |
-| Standard_B1ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 1 | 1 | 2.0 | 20,176 | 1,568 | 7.77% | 28 |
-| Standard_B2s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 2 | 1 | 4.0 | 35,546 | 660 | 1.86% | 42 |
-| Standard_B2s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 2 | 1 | 4.0 | 36,569 | 2,172 | 5.94% | 154 |
-| Standard_B2s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 2 | 1 | 4.0 | 36,136 | 924 | 2.56% | 140 |
-| Standard_B2s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 4.0 | 42,546 | 834 | 1.96% | 14 |
-| Standard_B2hms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 2 | 1 | 8.0 | 36,949 | 1,494 | 4.04% | 28 |
-| Standard_B2hms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 2 | 1 | 8.0 | 36,512 | 2,537 | 6.95% | 70 |
-| Standard_B2hms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 2 | 1 | 8.0 | 36,389 | 990 | 2.72% | 56 |
-| Standard_B2ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 2 | 1 | 8.0 | 35,758 | 1,028 | 2.88% | 42 |
-| Standard_B2ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 2 | 1 | 8.0 | 36,028 | 1,605 | 4.45% | 182 |
-| Standard_B2ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 2 | 1 | 8.0 | 36,122 | 2,128 | 5.89% | 112 |
-| Standard_B2ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 8.0 | 42,525 | 672 | 1.58% | 14 |
-| Standard_B4hms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 4 | 1 | 16.0 | 71,028 | 879 | 1.24% | 28 |
-| Standard_B4hms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 4 | 1 | 16.0 | 73,126 | 2,954 | 4.04% | 56 |
-| Standard_B4hms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 4 | 1 | 16.0 | 68,451 | 1,571 | 2.29% | 56 |
-| Standard_B4hms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 4 | 1 | 16.0 | 83,525 | 563 | 0.67% | 14 |
-| Standard_B4ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 4 | 1 | 16.0 | 70,831 | 1,135 | 1.60% | 28 |
-| Standard_B4ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 4 | 1 | 16.0 | 70,987 | 2,287 | 3.22% | 168 |
-| Standard_B4ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 4 | 1 | 16.0 | 68,796 | 1,897 | 2.76% | 84 |
-| Standard_B4ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 4 | 1 | 16.0 | 81,712 | 4,042 | 4.95% | 70 |
-| Standard_B8ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 8 | 1 | 32.0 | 141,620 | 2,256 | 1.59% | 42 |
-| Standard_B8ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 8 | 1 | 32.0 | 139,090 | 3,229 | 2.32% | 182 |
-| Standard_B8ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 8 | 1 | 32.0 | 135,510 | 2,653 | 1.96% | 112 |
-| Standard_B8ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 8 | 1 | 32.0 | 164,510 | 2,254 | 1.37% | 14 |
-| Standard_B12ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 12 | 1 | 48.0 | 206,957 | 5,240 | 2.53% | 56 |
-| Standard_B12ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 12 | 1 | 48.0 | 211,461 | 4,115 | 1.95% | 154 |
-| Standard_B12ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 12 | 1 | 48.0 | 200,729 | 3,475 | 1.73% | 140 |
-| Standard_B16ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 16 | 2 | 64.0 | 273,257 | 3,862 | 1.41% | 42 |
-| Standard_B16ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 16 | 1 | 64.0 | 282,187 | 5,030 | 1.78% | 154 |
-| Standard_B16ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 16 | 1 | 64.0 | 265,834 | 5,545 | 2.09% | 112 |
-| Standard_B16ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 1 | 64.0 | 331,694 | 3,537 | 1.07% | 28 |
-| Standard_B20ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz | 20 | 2 | 80.0 | 334,369 | 8,555 | 2.56% | 42 |
-| Standard_B20ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz | 20 | 1 | 80.0 | 345,686 | 6,702 | 1.94% | 154 |
-| Standard_B20ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60GHz | 20 | 1 | 80.0 | 328,900 | 7,625 | 2.32% | 126 |
-| Standard_B20ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 20 | 1 | 80.0 | 409,515 | 4,792 | 1.17% | 14 |
+| Standard_B1ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 1 | 1 | 2.0 | 18,093 | 679 | 3.75% | 42 |
+| Standard_B1ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 1 | 1 | 2.0 | 18,197 | 1,341 | 7.37% | 168 |
+| Standard_B1ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 1 | 1 | 2.0 | 17,975 | 920 | 5.12% | 112 |
+| Standard_B1ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 1 | 1 | 2.0 | 20,176 | 1,568 | 7.77% | 28 |
+| Standard_B2s | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 2 | 1 | 4.0 | 35,546 | 660 | 1.86% | 42 |
+| Standard_B2s | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 2 | 1 | 4.0 | 36,569 | 2,172 | 5.94% | 154 |
+| Standard_B2s | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 2 | 1 | 4.0 | 36,136 | 924 | 2.56% | 140 |
+| Standard_B2s | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 2 | 1 | 4.0 | 42,546 | 834 | 1.96% | 14 |
+| Standard_B2hms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 2 | 1 | 8.0 | 36,949 | 1,494 | 4.04% | 28 |
+| Standard_B2hms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 2 | 1 | 8.0 | 36,512 | 2,537 | 6.95% | 70 |
+| Standard_B2hms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 2 | 1 | 8.0 | 36,389 | 990 | 2.72% | 56 |
+| Standard_B2ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 2 | 1 | 8.0 | 35,758 | 1,028 | 2.88% | 42 |
+| Standard_B2ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 2 | 1 | 8.0 | 36,028 | 1,605 | 4.45% | 182 |
+| Standard_B2ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 2 | 1 | 8.0 | 36,122 | 2,128 | 5.89% | 112 |
+| Standard_B2ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 2 | 1 | 8.0 | 42,525 | 672 | 1.58% | 14 |
+| Standard_B4hms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 4 | 1 | 16.0 | 71,028 | 879 | 1.24% | 28 |
+| Standard_B4hms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 4 | 1 | 16.0 | 73,126 | 2,954 | 4.04% | 56 |
+| Standard_B4hms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 4 | 1 | 16.0 | 68,451 | 1,571 | 2.29% | 56 |
+| Standard_B4hms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 4 | 1 | 16.0 | 83,525 | 563 | 0.67% | 14 |
+| Standard_B4ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 4 | 1 | 16.0 | 70,831 | 1,135 | 1.60% | 28 |
+| Standard_B4ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 4 | 1 | 16.0 | 70,987 | 2,287 | 3.22% | 168 |
+| Standard_B4ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 4 | 1 | 16.0 | 68,796 | 1,897 | 2.76% | 84 |
+| Standard_B4ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 4 | 1 | 16.0 | 81,712 | 4,042 | 4.95% | 70 |
+| Standard_B8ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 8 | 1 | 32.0 | 141,620 | 2,256 | 1.59% | 42 |
+| Standard_B8ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 8 | 1 | 32.0 | 139,090 | 3,229 | 2.32% | 182 |
+| Standard_B8ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 8 | 1 | 32.0 | 135,510 | 2,653 | 1.96% | 112 |
+| Standard_B8ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 8 | 1 | 32.0 | 164,510 | 2,254 | 1.37% | 14 |
+| Standard_B12ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 12 | 1 | 48.0 | 206,957 | 5,240 | 2.53% | 56 |
+| Standard_B12ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 12 | 1 | 48.0 | 211,461 | 4,115 | 1.95% | 154 |
+| Standard_B12ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 12 | 1 | 48.0 | 200,729 | 3,475 | 1.73% | 140 |
+| Standard_B16ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 16 | 2 | 64.0 | 273,257 | 3,862 | 1.41% | 42 |
+| Standard_B16ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 16 | 1 | 64.0 | 282,187 | 5,030 | 1.78% | 154 |
+| Standard_B16ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 16 | 1 | 64.0 | 265,834 | 5,545 | 2.09% | 112 |
+| Standard_B16ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 16 | 1 | 64.0 | 331,694 | 3,537 | 1.07% | 28 |
+| Standard_B20ms | Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40 GHz | 20 | 2 | 80.0 | 334,369 | 8,555 | 2.56% | 42 |
+| Standard_B20ms | Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30 GHz | 20 | 1 | 80.0 | 345,686 | 6,702 | 1.94% | 154 |
+| Standard_B20ms | Intel(R) Xeon(R) Platinum 8171M CPU @ 2.60 GHz | 20 | 1 | 80.0 | 328,900 | 7,625 | 2.32% | 126 |
+| Standard_B20ms | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60 GHz | 20 | 1 | 80.0 | 409,515 | 4,792 | 1.17% | 14 |
### Dasv4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_D32as_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 128.0 | 566,270 | 8,484 | 1.50% | 140 | | Standard_D48as_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 192.0 | 829,547 | 15,679 | 1.89% | 126 | | Standard_D64as_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 256.0 | 1,088,030 | 16,708 | 1.54% | 28 |-
+| Standard_D96as_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 384.0 | N/A | - | - | - |
### Dav4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_D32a_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 128.0 | 567,019 | 11,019 | 1.94% | 210 | | Standard_D48a_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 192.0 | 835,617 | 13,097 | 1.57% | 140 | | Standard_D64a_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 256.0 | 1,099,165 | 21,962 | 2.00% | 252 |-
+| Standard_D96a_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 384.0 | N/A | - | - | - |
### DDSv4 (03/26/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64as_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 512.0 | 1,097,588 | 26,100 | 2.38% | 280 | | Standard_E64-16as_v4 | AMD EPYC 7452 32-Core Processor | 16 | 8 | 512.0 | 284,934 | 5,065 | 1.78% | 154 | | Standard_E64-32as_v4 | AMD EPYC 7452 32-Core Processor | 32 | 8 | 512.0 | 561,951 | 9,691 | 1.72% | 140 |
+| Standard_E96as_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | N/A | - | - | - |
| Standard_E96-24as_v4 | AMD EPYC 7452 32-Core Processor | 24 | 11 | 672.0 | 423,442 | 8,504 | 2.01% | 182 | | Standard_E96-48as_v4 | AMD EPYC 7452 32-Core Processor | 48 | 11 | 672.0 | 839,993 | 14,218 | 1.69% | 70 |
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E32a_v4 | AMD EPYC 7452 32-Core Processor | 32 | 4 | 256.0 | 565,363 | 10,941 | 1.94% | 126 | | Standard_E48a_v4 | AMD EPYC 7452 32-Core Processor | 48 | 6 | 384.0 | 837,493 | 15,803 | 1.89% | 126 | | Standard_E64a_v4 | AMD EPYC 7452 32-Core Processor | 64 | 8 | 512.0 | 1,097,111 | 30,290 | 2.76% | 336 |
+| Standard_E96a_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | N/A | - | - | - |
### EDSv4 (03/27/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64-16ds_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 2 | 504.0 | 260,677 | 3,340 | 1.28% | 154 | | Standard_E64-32ds_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 32 | 2 | 504.0 | 514,504 | 4,082 | 0.79% | 98 |
+### Edsv4 Isolated Extended
+(04/05/2021 PBIID:9198755)
+
+| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
+| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
+| Standard_E80ids_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 80 | 2 | 504.0 | N/A | - | - | - |
+ ### EDv4 (03/26/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E48d_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 48 | 2 | 384.0 | 761,410 | 21,640 | 2.84% | 336 | | Standard_E64d_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 64 | 2 | 504.0 | 1,030,708 | 9,500 | 0.92% | 322 |
+### EIASv4
+(04/05/2021 PBIID:9198755)
+
+| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
+| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
+| Standard_E96ias_v4 | AMD EPYC 7452 32-Core Processor | 96 | 12 | 672.0 | N/A | - | - | - |
+ ### Esv4 (03/25/2021 PBIID:9198755)
The following CoreMark benchmark scores show compute performance for select Azur
| Standard_E64-16s_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 16 | 2 | 504.0 | 224,499 | 3,955 | 1.76% | 168 | | Standard_E64-32s_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 32 | 2 | 504.0 | 441,521 | 30,939 | 7.01% | 168 |
+### Esv4 Isolated Extended
+| VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs |
+| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
+| Standard_E80is_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 80 | 2 | 504.0 | N/A | - | - | - |
+ ### Ev4 (03/25/2021 PBIID:9198755)- | VM Size | CPU | vCPUs | NUMA Nodes | Memory(GiB) | Avg Score | StdDev | StdDev% | #Runs | | | | : | : | : | : | : | : | : | | Standard_E2_v4 | Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz | 2 | 1 | 16.0 | 30,825 | 2,765 | 8.97% | 406 |
The following CoreMark benchmark scores show compute performance for select Azur
## About CoreMark
-[CoreMark](https://www.eembc.org/coremark/faq.php) is a benchmark that tests the functionality of a microctronoller (MCU) or central processing unit (CPU). CoreMark is not system dependent, so it functions the same regardless of the platform (e.g. big or little endian, high-end or low-end processor).
+[CoreMark](https://www.eembc.org/coremark/faq.php) is a benchmark that tests the functionality of a microcontroller (MCU) or central processing unit (CPU). CoreMark isn't system dependent, so it functions the same regardless of the platform (for example, big or little endian, high-end or low-end processor).
-Windows numbers were computed by running CoreMark on Windows Server 2019. CoreMark was configured with the number of threads set to the number of virtual CPUs, and concurrency set to `PThreads`. The target number of iterations was adjusted based on expected performance to provide a runtime of at least 20 seconds (typically much longer). The final score represents the number of iterations completed divided by the number of seconds it took to run the test. Each test was run at least seven times on each VM. Test run dates shown above. Tests run on multiple VMs across Azure public regions the VM was supported in on the date run.
+Windows numbers were computed by running CoreMark on Windows Server 2019. CoreMark was configured with the number of threads set to the number of virtual CPUs, and concurrency set to `PThreads`. The target number of iterations was adjusted based on expected performance to provide a runtime of at least 20 seconds (typically much longer). The final score represents the number of iterations completed divided by the number of seconds it took to run the test. Each test was run at least seven times on each VM. Test run dates are shown above. Tests were run on multiple VMs across the Azure public regions in which the VM was supported on the date of the run. (CoreMark doesn't properly support more than 64 vCPUs on Windows, so SKUs with more than 64 vCPUs are marked as N/A.)
### Running Coremark on Azure VMs
To build and run the benchmark, type:
`> make` Full results are available in the files ```run1.log``` and ```run2.log```.
-```run1.log``` contains CoreMark results. These are the benchmark results with performance parameters.
+```run1.log``` contains CoreMark results with performance parameters.
```run2.log``` contains benchmark results with validation parameters. **Run Time:**
The above will compile the benchmark for execution on 4 cores.
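To make the build step above concrete, here's a minimal sketch that follows the public CoreMark README; the repository URL, the thread count of 4, and the working directory are illustrative assumptions rather than the exact invocation used for the Azure benchmark runs.

```bash
# Fetch and build CoreMark with 4 threads (one per vCPU) using POSIX threads,
# matching the "4 cores" example above; adjust MULTITHREAD to your vCPU count.
git clone https://github.com/eembc/coremark.git   # assumed upstream repository
cd coremark
make XCFLAGS="-DMULTITHREAD=4 -DUSE_PTHREAD"
# The default make target builds and runs the benchmark; results are written
# to run1.log (performance run) and run2.log (validation run).
```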
- The benchmark needs to run for at least 10 seconds, probably longer on larger systems. - All source files must be compiled with same flags.-- Do not change source files other than ```core_portme*``` (use ```make check``` to validate)
+- Don't change source files other than ```core_portme*``` (use ```make check``` to validate)
- Multiple runs are suggested for best results. ## GPU Series
Performance of GPU based VM series is best understood by using GPU appropriate b
## Next steps
-* For storage capacities, disk details, and additional considerations for choosing among VM sizes, see [Sizes for virtual machines](../sizes.md).
+* For storage capacities, disk details, and other considerations for choosing among VM sizes, see [Sizes for virtual machines](../sizes.md).
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
For related information, see KB article **860340.1** at <https://support.oracle.
If you configure the admin server to automatically assign unique port numbers to its managed servers, then load balancing is not possible because Azure does not support mapping from a single public port to multiple private ports, as would be required for this configuration. - **Multiple instances of Oracle WebLogic Server on a virtual machine.** Depending on your deployment's requirements, you might consider running multiple instances of Oracle WebLogic Server on the same virtual machine, if the virtual machine is large enough. For example, on a midsize virtual machine, which contains two cores, you could choose to run two instances of Oracle WebLogic Server. However, we still recommend that you avoid introducing single points of failure into your architecture, which would be the case if you used just one virtual machine that is running multiple instances of Oracle WebLogic Server. Using at least two virtual machines could be a better approach, and each virtual machine could then run multiple instances of Oracle WebLogic Server. Each instance of Oracle WebLogic Server could still be part of the same cluster. However, it is currently not possible to use Azure to load-balance endpoints that are exposed by such Oracle WebLogic Server deployments within the same virtual machine, because Azure load balancer requires the load-balanced servers to be distributed among unique virtual machines.
-## Oracle JDK virtual machine images
-- **JDK 6 and 7 latest updates.** While we recommend using the latest public, supported version of Java (currently Java 8), Azure also makes JDK 6 and 7 images available. This is intended for legacy applications that are not yet ready to be upgraded to JDK 8. While updates to previous JDK images might no longer be available to the general public, given the Microsoft partnership with Oracle, the JDK 6 and 7 images provided by Azure are intended to contain a more recent non-public update that is normally offered by Oracle to only a select group of Oracle's supported customers. New versions of the JDK images will be made available over time with updated releases of JDK 6 and 7.-
- The JDK available in the JDK 6 and 7 images, and the virtual machines and images derived from them, can only be used within Azure.
-- **64-bit JDK.** The Oracle WebLogic Server virtual machine images and the Oracle JDK virtual machine images provided by Azure contain the 64-bit versions of both Windows Server and the JDK.- ## Next steps You now have an overview of current Oracle solutions based on virtual machine images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure.
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- June 02, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files (SMB)](./high-availability-guide-windows-netapp-files-smb.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to add sizing considerations
- May 11, 2022: Change in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md), [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS](./sap-high-availability-infrastructure-wsfc-shared-disk.md) and [SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md) to update instructions about the usage of Azure shared disk for SAP deployment with PPG. - May 10, 2022: Changes in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to adjust parameters per SAP note 3024346 - April 26, 2022: Changes in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to add Azure Identity python module to installation instructions for Azure Fence Agent
virtual-machines High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files.md
vm-windows Previously updated : 03/25/2022 Last updated : 06/02/2022
When considering Azure NetApp Files for the SAP Netweaver on RHEL High Availabil
- The minimum volume is 100 GiB - Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes will be mounted, must be in the same Azure Virtual Network or in [peered virtual networks](../../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over VNET peering in the same region is supported now. Azure NetApp access over global peering is not yet supported. - The selected virtual network must have a subnet, delegated to Azure NetApp Files.
+- The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service levels for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the SAP Azure NetApp Files volumes, make sure that the resulting throughput meets the application requirements (see the sizing sketch after this list).
- Azure NetApp Files offers [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients, the access type (Read&Write, Read Only, etc.). - Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. - Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP application layer (ASCS/ERS, SAP application servers).
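As a hedged illustration of the throughput consideration added above, the sketch below estimates a volume's throughput limit from its quota and service level. The per-TiB figures (Standard 16, Premium 64, Ultra 128 MiB/s per 1 TiB of quota) are assumptions taken from the linked service-levels article at the time of writing; verify them before sizing.

```bash
# Hypothetical sizing check: estimated volume throughput = quota (TiB) x MiB/s per TiB.
quota_tib=4          # example volume quota in TiB
per_tib_mibps=64     # Premium service level (Standard=16, Ultra=128) -- verify in the docs
echo "Estimated throughput limit: $(( quota_tib * per_tib_mibps )) MiB/s"
# A 4 TiB Premium volume gives roughly 256 MiB/s; compare this against the
# SAP application's throughput requirement before settling on the volume size.
```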
virtual-machines High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files.md
vm-windows Previously updated : 03/25/2022 Last updated : 06/02/2022
In this example, we used Azure NetApp Files for all SAP Netweaver file systems t
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availability architecture, be aware of the following important considerations: -- The minimum capacity pool is 4 TiB. The capacity pool size can be increased be in 1 TiB increments.
+- The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
- The minimum volume is 100 GiB - Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes will be mounted, must be in the same Azure Virtual Network or in [peered virtual networks](../../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over VNET peering in the same region is supported now. Azure NetApp access over global peering is not yet supported. - The selected virtual network must have a subnet, delegated to Azure NetApp Files.
+- The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service levels for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the SAP Azure NetApp Files volumes, make sure that the resulting throughput meets the application requirements.
- Azure NetApp Files offers [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients, the access type (Read&Write, Read Only, etc.). - Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions. - Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP application layer (ASCS/ERS, SAP application servers).
virtual-machines High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-windows-netapp-files-smb.md
vm-windows Previously updated : 02/13/2022 Last updated : 06/02/2022
Perform the following steps, as preparation for using Azure NetApp Files.
> [!TIP] > You can find the instructions on how to mount the Azure NetApp Files volume, if you navigate in [Azure Portal](https://portal.azure.com/#home) to the Azure NetApp Files object, click on the **Volumes** blade, then **Mount Instructions**.
+### Important considerations
+
+When considering Azure NetApp Files for the SAP Netweaver architecture, be aware of the following important considerations:
+
+- The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
+- The minimum volume is 100 GiB
+- The selected virtual network must have a subnet, delegated to Azure NetApp Files.
+- The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service levels for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the SAP Azure NetApp Files volumes, make sure that the resulting throughput meets the application requirements.
+
+ ## Prepare the infrastructure for SAP HA by using a Windows failover cluster 1. [Set the ASCS/SCS load balancing rules for the Azure internal load balancer](./sap-high-availability-infrastructure-wsfc-shared-disk.md#fe0bd8b5-2b43-45e3-8295-80bee5415716).
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
The fully qualified domain name (FQDN) **contoso.westus.cloudapp.azure.com** res
If a custom domain is desired for services that use a Public IP, you can use [Azure DNS](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address) or an external DNS provider for your DNS Record.
+## Availability Zone
+
+Public IP addresses with a Standard SKU can be created as non-zonal, zonal, or zone-redundant in [regions that support availability zones](../../availability-zones/az-region.md). A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "non-zonal" public IP address is placed into a zone for you by Azure and doesn't guarantee redundancy.
+
+In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal.
+
+> [!NOTE]
+> All Basic SKU public IP addresses are created as non-zonal. Any IP that is upgraded from a Basic SKU to Standard SKU remains non-zonal.
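As a sketch of the options described above, the following Azure CLI commands create a zone-redundant and a zonal Standard public IP; the resource group, names, and zone numbers are placeholder assumptions.

```bash
# Zone-redundant Standard public IP: placed in all three zones, survives a single zone failure.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZoneRedundantIP \
  --sku Standard \
  --zone 1 2 3

# Zonal Standard public IP: pinned to zone 1 and shares fate with that zone.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZonalIP \
  --sku Standard \
  --zone 1
```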
+ ## Other public IP address features There are other attributes that can be used for a public IP address.
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
Learn how to assign a public IP address to the following resources:
- A [Windows](../../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) Virtual Machine on creation. Add IP to an [existing virtual machine](./virtual-network-network-interface-addresses.md#add-ip-addresses). - [Virtual Machine Scale Set](../../virtual-machine-scale-sets/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Public load balancer](/configure-public-ip-load-balancer.md)
+- [Public load balancer](/azure/virtual-network/ip-services/configure-public-ip-load-balancer)
- [Cross-region load balancer](../../load-balancer/tutorial-cross-region-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [Application Gateway](/configure-public-ip-application-gateway.md)
+- [Application Gateway](/azure/virtual-network/ip-services/configure-public-ip-application-gateway)
- [Site-to-site connection using a VPN gateway](configure-public-ip-vpn-gateway.md)-- [NAT gateway](/configure-public-ip-nat-gateway.md)-- [Azure Bastion](/configure-public-ip-bastion.md)-- [Azure Firewall](/configure-public-ip-firewall.md)
+- [NAT gateway](/azure/virtual-network/ip-services/configure-public-ip-nat-gateway)
+- [Azure Bastion](/azure/virtual-network/ip-services/configure-public-ip-bastion)
+- [Azure Firewall](/azure/virtual-network/ip-services/configure-public-ip-firewall)
## Region availability
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md
Title: 'Tutorial: Create site-to-site connections using Virtual WAN'+ description: Learn how to use Azure Virtual WAN to create a site-to-site VPN connection to Azure.-
In this tutorial you learn how to:
> [!div class="checklist"] > * Create a virtual WAN
-> * Configure hub Basic settings
+> * Configure virtual hub Basic settings
> * Configure site-to-site VPN gateway settings > * Create a site
-> * Connect a site to a hub
-> * Connect a VPN site to a hub
-> * Connect a VNet to a hub
+> * Connect a site to a virtual hub
+> * Connect a VPN site to a virtual hub
+> * Connect a VNet to a virtual hub
> * Download a configuration file > * View or edit your VPN gateway
Verify that you've met the following criteria before beginning your configuratio
[!INCLUDE [Create a virtual WAN](../../includes/virtual-wan-create-vwan-include.md)]
-## <a name="hub"></a>Configure hub settings
+## <a name="hub"></a>Configure virtual hub settings
-A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. For this tutorial, you begin by filling out the **Basics** tab for the virtual hub and then continue on to fill out the site-to-site tab in the next section. It's also possible to create an empty hub (a hub that doesn't contain any gateways) and then add gateways (S2S, P2S, ExpressRoute, etc.) later. Once a hub is created, you'll be charged for the hub, even if you don't attach any sites or create any gateways within the hub.
+A virtual hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. For this tutorial, you begin by filling out the **Basics** tab for the virtual hub and then continue on to fill out the site-to-site tab in the next section. It's also possible to create an empty virtual hub (a virtual hub that doesn't contain any gateways) and then add gateways (S2S, P2S, ExpressRoute, etc.) later. Once a virtual hub is created, you'll be charged for the virtual hub, even if you don't attach any sites or create any gateways within the virtual hub.
-Don't create the hub yet. Continue on to the next section to configure additional settings.
+Don't create the virtual hub yet. Continue on to the next section to configure additional settings.
## <a name="gateway"></a>Configure a site-to-site gateway
-In this section, you configure site-to-site connectivity settings, and then proceed to create the hub and site-to-site VPN gateway. A hub and gateway can take about 30 minutes to create.
+In this section, you configure site-to-site connectivity settings, and then proceed to create the virtual hub and site-to-site VPN gateway. A virtual hub and gateway can take about 30 minutes to create.
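This tutorial uses the portal, but for orientation here's a rough Azure CLI equivalent of creating the virtual WAN, the virtual hub, and the site-to-site VPN gateway; the names, region, and address prefix are placeholders, and the commands may require the `virtual-wan` CLI extension.

```bash
# Create a virtual WAN, a virtual hub inside it, and a site-to-site VPN gateway
# in that hub. The hub and gateway can take about 30 minutes to deploy.
az network vwan create \
  --resource-group myResourceGroup \
  --name myVirtualWAN \
  --location westus

az network vhub create \
  --resource-group myResourceGroup \
  --name myVirtualHub \
  --vwan myVirtualWAN \
  --address-prefix 10.1.0.0/24 \
  --location westus

az network vpn-gateway create \
  --resource-group myResourceGroup \
  --name myVpnGateway \
  --vhub myVirtualHub \
  --location westus
```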
[!INCLUDE [Create a gateway](../../includes/virtual-wan-tutorial-s2s-gateway-include.md)] ## <a name="site"></a>Create a site
-In this section, you create site. Sites correspond to your physical locations. Create as many sites as you need. For example, if you have a branch office in NY, a branch office in London, and a branch office and LA, you'd create three separate sites. These sites contain your on-premises VPN device endpoints. You can create up to 1000 sites per virtual hub in a virtual WAN. If you had multiple hubs, you can create 1000 per each of those hubs. If you have Virtual WAN partner CPE device, check with them to learn about their automation to Azure. Typically, automation implies a simple click experience to export large-scale branch information into Azure, and setting up connectivity from the CPE to Azure Virtual WAN VPN gateway. For more information, see [Automation guidance from Azure to CPE partners](virtual-wan-configure-automation-providers.md).
+In this section, you create a site. Sites correspond to your physical locations. Create as many sites as you need. For example, if you have a branch office in NY, a branch office in London, and a branch office in LA, you'd create three separate sites. These sites contain your on-premises VPN device endpoints. You can create up to 1000 sites per virtual hub in a virtual WAN. If you have multiple virtual hubs, you can create 1000 in each of those virtual hubs. If you have a Virtual WAN partner CPE device, check with them to learn about their automation to Azure. Typically, automation implies a simple click experience to export large-scale branch information into Azure, and setting up connectivity from the CPE to the Azure Virtual WAN VPN gateway. For more information, see [Automation guidance from Azure to CPE partners](virtual-wan-configure-automation-providers.md).
[!INCLUDE [Create a site](../../includes/virtual-wan-tutorial-s2s-site-include.md)]
-## <a name="connectsites"></a>Connect the VPN site to a hub
+## <a name="connectsites"></a>Connect the VPN site to a virtual hub
-In this section, you connect your VPN site to the hub.
+In this section, you connect your VPN site to the virtual hub.
[!INCLUDE [Connect VPN sites](../../includes/virtual-wan-tutorial-s2s-connect-vpn-site-include.md)]
-## <a name="vnet"></a>Connect a VNet to the hub
+## <a name="vnet"></a>Connect a VNet to the virtual hub
-In this section, you create a connection between the hub and your VNet.
+In this section, you create a connection between the virtual hub and your VNet.
[!INCLUDE [Connect](../../includes/virtual-wan-connect-vnet-hub-include.md)]
The device configuration file contains the settings to use when configuring your
``` "AddressSpace":"10.1.0.0/24" ```
- * **Address space** of the VNets that are connected to the hub.<br>Example:
+ * **Address space** of the VNets that are connected to the virtual hub.<br>Example:
``` "ConnectedSubnets":["10.2.0.0/16","10.3.0.0/16"]
vpn-gateway Point To Site Vpn Client Configuration Radius Other https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius-other.md
You can generate the VPN client configuration files by using the Azure portal, o
### Azure PowerShell
-Use the [Get-AzVpnClientConfiguration](/powershell/module/az.network/get-azvpnclientconfiguration.md) cmdlet to generate the VPN client configuration for EapMSChapv2.
+Use the [Get-AzVpnClientConfiguration](/powershell/module/az.network/get-azvpnclientconfiguration) cmdlet to generate the VPN client configuration for EapMSChapv2.
## View the files and configure the VPN client
vpn-gateway Vpn Gateway Howto Aws Bgp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-aws-bgp.md
Title: 'Tutorial - Configure a BGP-enabled connection between Azure and Amazon Web Services (AWS) using the portal' description: In this tutorial, learn how to connect Azure and AWS using an active-active VPN Gateway and two site-to-site connections on AWS. --++ Last updated 12/2/2021
web-application-firewall Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/manage-policies.md
Previously updated : 06/01/2022 Last updated : 06/02/2022
-# Use Azure Firewall Manager to manage Web Application Firewall policies (preview)
+# Configure WAF policies using Azure Firewall Manager (preview)
> [!IMPORTANT]
-> Managing Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
+> Configuring Web Application Firewall policies using Azure Firewall Manager is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.
+Azure Firewall Manager is a platform to manage and protect your network security resources at scale. You can associate your WAF policies with an Application Gateway or Azure Front Door within Azure Firewall Manager, all in a single place.
-## Create and associate policies
+## View and manage WAF policies
-You can use Azure Firewall Manager to centrally create, associate, and manage Web Application Firewall (WAF) policies for your application delivery platforms, including Azure Front Door and Azure Application Gateway.
+In Azure Firewall Manager, you can create and view all WAF policies in one central place across subscriptions and regions.
+
+To navigate to WAF policies, select the **Web Application Firewall Policies** tab on the left, under **Security**.
++
+## Associate or dissociate WAF policies
+
+In Azure Firewall Manager, you can create and view all WAF policies in your subscriptions. These policies can be associated with or dissociated from an application delivery platform. Select the service and then select **Manage Security**.
++
+## Upgrade Application Gateway WAF configuration to WAF policy
+
+For an Application Gateway with a WAF configuration, you can upgrade the WAF configuration to a WAF policy associated with the Application Gateway.
+
+The WAF policy can be shared with multiple application gateways. Also, a WAF policy allows you to take advantage of advanced and new features like bot protection, newer rule sets, and reduced false positives. New features are only released on WAF policies.
+
+To upgrade a WAF configuration to a WAF policy, select **Upgrade from WAF configuration** from the desired application gateway.
+ ## Next steps