Updates from: 06/17/2021 03:09:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Ropc Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-ropc-policy.md
Previously updated : 01/11/2021 Last updated : 06/16/2021
A successful response looks like the following example:
} ```
+## Troubleshooting
+
+### The provided application is not configured to allow the 'OAuth' Implicit flow
+
+* **Symptom** - You run the ROPC flow, and get the following message: *AADB2C90057: The provided application is not configured to allow the 'OAuth' Implicit flow*.
+* **Possible causes** - The implicit flow is not allowed for your application.
+* **Resolution**: When creating your [app registration](#register-an-application) in Azure AD B2C, you need to manually edit the application manifest and set the value of the `oauth2AllowImplicitFlow` property to `true`. After you configure the `oauth2AllowImplicitFlow` property, it can take a few minutes (typically no more than five) for the change to take effect.
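+For reference, the relevant fragment of the application manifest looks like the following (all other manifest properties are omitted here):
+
+```json
+{
+  "oauth2AllowImplicitFlow": true
+}
+```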
+ ## Use a native SDK or App-Auth Azure AD B2C meets OAuth 2.0 standards for public client resource owner password credentials and should be compatible with most client SDKs. For the latest information, see [Native App SDK for OAuth 2.0 and OpenID Connect implementing modern best practices](https://appauth.io/).
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
You can add and modify redirect URIs in your registered applications at any time
## Next steps * Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
-* Learn how to [Authentication options in your own SPA application using Azure AD B2C](enable-authentication-spa-app-options.md)
+* [Enable authentication in your own SPA application](enable-authentication-spa-app.md)
+* Configure [authentication options in your SPA application](enable-authentication-spa-app-options.md)
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app.md
+
+ Title: Enable authentication in a SPA application using Azure Active Directory B2C building blocks
+description: The building blocks of Azure Active Directory B2C to sign in and sign up users in a SPA application.
+Last updated : 06/16/2021
+# Enable authentication in your own Single Page Application using Azure Active Directory B2C
+
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own Single Page Application (SPA). Learn how to create a SPA application that uses the [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) authentication library. Use this article with [Configure authentication in a sample SPA application](./configure-authentication-sample-spa-app.md), substituting the sample SPA app with your own SPA app.
+
+## Overview
+
+This article uses Node.js and [Express](https://expressjs.com/) to create a basic Node.js web app. Express is a minimal and flexible Node.js web app framework that provides a set of features for web and mobile applications.
+
+The [MSAL.js](https://github.com/AzureAD/microsoft-authentication-library-for-js) authentication library is a Microsoft provided library that simplifies adding authentication and authorization support to SPA apps.
+
+> [!TIP]
+> The entire MSAL.js code runs on the client side. You can substitute the Node.js and Express server-side code with other solutions, such as .NET Core, Java, and PHP.
+
+## Prerequisites
+
+Review the prerequisites and integration steps in the [Configure authentication in a sample SPA application](configure-authentication-sample-spa-app.md) article.
+
+## Create an SPA app project
+
+You can use an existing SPA app project, or create a new one. To create a new project, follow these steps:
+
+1. Open a command shell, and create a new directory, for example *myApp*. This directory will contain your app code, user interface, and configuration files.
+1. Enter the directory you created.
+1. Use the `npm init` command to create a `package.json` file for your app. This command prompts you for information about your app, such as its name and version, and the name of the initial entry point, the `index.js` file. Run the following command, and accept the defaults:
+
+```
+npm init
+```
+
+## Install the dependencies
+
+To install the Express package, in your command shell run the following command:
+
+```
+npm install express
+```
+
+To locate the app's static files, the server-side code uses the [Path](https://www.npmjs.com/package/path) package.
+To install the Path package, in your command shell run the following command:
+
+```
+npm install path
+```
+
+## Configure your web server
+
+In your *myApp* folder, create a file named `index.js` containing the following code:
+
+```javascript
+// Initialize express
+const express = require('express');
+const app = express();
+
+// The port to listen to incoming HTTP requests
+const port = 6420;
+
+// Initialize path
+const path = require('path');
+
+// Set the front-end folder to serve public assets.
+app.use(express.static('App'));
+
+// Set up a route for the https://docsupdatetracker.net/index.html
+app.get('*', (req, res) => {
+ res.sendFile(path.join(__dirname, '/https://docsupdatetracker.net/index.html'));
+});
+
+// Start the server, and listen for HTTP requests
+app.listen(port, () => {
+ console.log(`Listening on http://localhost:${port}`);
+});
+```
+
+## Create the SPA user interface
+
+In this step, add the SPA app's `https://docsupdatetracker.net/index.html` file. This file implements the user interface, which is built with the Bootstrap framework, and imports the script files for configuration, authentication, and web API calls.
+
+The table below details the resources referenced by the *https://docsupdatetracker.net/index.html* file.
+
+|Reference |Definition|
+|---|---|
+|MSAL.js library| MSAL.js authentication JavaScript library [CDN path](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/cdn-usage.md).|
+|[Bootstrap stylesheet](https://getbootstrap.com/) | A free front-end framework for faster and easier web development. The framework includes HTML and CSS based design templates. |
+|[policies.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/policies.js) | Contains the Azure AD B2C custom policies and user-flows. |
+|[authConfig.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/authConfig.js) | Contains authentication configuration parameters.|
+|[authRedirect.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/authRedirect.js) | Contains the authentication logic. |
+|[apiConfig.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/apiConfig.js) | Contains web API scopes and the API endpoint location. |
+|[api.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/api.js) | Defines the method to call your API and handle its response. |
+|[ui.js](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/blob/main/App/ui.js) | Controls the UI elements. |
+
+To render the SPA index file, in the *myApp* folder, create a file named *https://docsupdatetracker.net/index.html* containing the following HTML snippet.
+
+```html
+<!DOCTYPE html>
+<html>
+ <head>
+ <title>My AAD B2C test app</title>
+ </head>
+ <body>
+ <h2>My AAD B2C test app</h2>
+ <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous" />
+ <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button>
+ <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button>
+ <h5 id="welcome-div" class="card-header text-center d-none"></h5>
+ <br />
+ <!-- Content -->
+ <div class="card">
+ <div class="card-body text-center">
+ <pre id="response" class="card-text"></pre>
+ <button type="button" id="callApiButton" class="btn btn-primary d-none" onclick="passTokenToApi()">Call API</button>
+ </div>
+ </div>
+ <script src="https://alcdn.msauth.net/browser/2.14.2/js/msal-browser.min.js" integrity="sha384-ggh+EF1aSqm+Y4yvv2n17KpurNcZTeYtUZUvhPziElsstmIEubyEB6AIVpKLuZgr" crossorigin="anonymous"></script>
+
+ <!-- Importing app scripts (load order is important) -->
+ <script type="text/javascript" src="./apiConfig.js"></script>
+ <script type="text/javascript" src="./policies.js"></script>
+ <script type="text/javascript" src="./authConfig.js"></script>
+ <script type="text/javascript" src="./ui.js"></script>
+
+ <script type="text/javascript" src="./authRedirect.js"></script>
+ <script type="text/javascript" src="./api.js"></script>
+ </body>
+</html>
+```
+
+## Configure the authentication library
+
+In this section, configure how the MSAL.js library integrates with Azure AD B2C. The MSAL.js library uses a common configuration object to connect to your Azure AD B2C tenant's authentication endpoints.
+
+To configure the authentication library, follow these steps:
+
+1. In the *myApp* folder, create a new folder called *App*.
+1. Inside the *App* folder, create a new file named *authConfig.js*.
+1. Add the following JavaScript code to the *authConfig.js* file:
+
+ ```javascript
+ const msalConfig = {
+ auth: {
+ clientId: "<Application-ID>",
+ authority: b2cPolicies.authorities.signUpSignIn.authority,
+ knownAuthorities: [b2cPolicies.authorityDomain],
+ redirectUri: "http://localhost:6420",
+ },
+ cache: {
+ cacheLocation: "localStorage",
+ storeAuthStateInCookie: false,
+ }
+ };
+
+ const loginRequest = {
+ scopes: ["openid", ...apiConfig.b2cScopes],
+ };
+
+ const tokenRequest = {
+ scopes: [...apiConfig.b2cScopes],
+ forceRefresh: false
+ };
+ ```
+
+1. Replace `<Application-ID>` with your app registration application ID. For more information, see [Configure authentication in a sample SPA application article](./configure-authentication-sample-spa-app.md#23-register-the-client-app).
+
+> [!TIP]
+> For more MSAL object configuration options, see the [Authentication options](./enable-authentication-spa-app-options.md) article.
+
+### Specify your Azure AD B2C user flows
+
+In this step, create the *policies.js* file, which provides information about your Azure AD B2C environment. The MSAL.js library uses this information to create authentication requests to Azure AD B2C.
+
+To specify your Azure AD B2C user flows, follow these steps:
+
+1. Inside the *App* folder, create a new file named *policies.js*.
+1. Add the following code to the *policies.js* file:
+
+ ```javascript
+ const b2cPolicies = {
+ names: {
+ signUpSignIn: "B2C_1_SUSI",
+ editProfile: "B2C_1_EditProfile"
+ },
+ authorities: {
+ signUpSignIn: {
+ authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/Your-B2C-SignInOrSignUp-Policy-Id",
+ },
+ editProfile: {
+ authority: "https://contoso.b2clogin.com/contoso.onmicrosoft.com/Your-B2C-EditProfile-Policy-Id"
+ }
+ },
+ authorityDomain: "contoso.b2clogin.com"
+ }
+ ```
+
+1. Replace `B2C_1_SUSI` with your sign-up/sign-in Azure AD B2C policy name.
+1. Replace `B2C_1_EditProfile` with your edit profile Azure AD B2C policy name.
+1. Replace all instances of `contoso` with your [Azure AD B2C tenant name](./tenant-management.md#get-your-tenant-name).
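+The authority values above all follow the same pattern. As a quick sanity check, this hypothetical snippet (placeholder tenant and policy names) shows how such a URL is composed:
+
+```javascript
+// Sketch only: compose a B2C authority URL from placeholder values.
+const tenantName = "contoso";   // your Azure AD B2C tenant name
+const policyId = "B2C_1_SUSI";  // your user flow (policy) ID
+
+const authority = `https://${tenantName}.b2clogin.com/${tenantName}.onmicrosoft.com/${policyId}`;
+console.log(authority);
+// → https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_SUSI
+```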
+
+## Use MSAL.js to sign in the user
+
+In this step, implement the sign-in, API access token acquisition, and sign-out methods.
+
+For more information, see the [MSAL PublicClientApplication class reference](https://azuread.github.io/microsoft-authentication-library-for-js/ref/classes/_azure_msal_browser.publicclientapplication.html), and [Use the Microsoft Authentication Library (MSAL) to sign in the user](../active-directory/develop/tutorial-v2-javascript-spa.md#use-the-microsoft-authentication-library-msal-to-sign-in-the-user) articles.
+
+To sign in the user, follow these steps:
+
+1. Inside the *App* folder, create a new file named *authRedirect.js*.
+1. In your *authRedirect.js*, copy and paste the following code:
+
+ ```javascript
+ // Create the main myMSALObj instance
+ // configuration parameters are located at authConfig.js
+ const myMSALObj = new msal.PublicClientApplication(msalConfig);
+
+ let accountId = "";
+ let idTokenObject = "";
+ let accessToken = null;
+
+ myMSALObj.handleRedirectPromise()
+ .then(response => {
+ if (response) {
+ /**
+ * For the purpose of setting an active account for UI update, we want to consider only the auth response resulting
+ * from SUSI flow. "tfp" claim in the id token tells us the policy (NOTE: legacy policies may use "acr" instead of "tfp").
+ * To learn more about B2C tokens, visit https://docs.microsoft.com/en-us/azure/active-directory-b2c/tokens-overview
+ */
+ if (response.idTokenClaims['tfp'].toUpperCase() === b2cPolicies.names.signUpSignIn.toUpperCase()) {
+ handleResponse(response);
+ }
+ }
+ })
+ .catch(error => {
+ console.log(error);
+ });
+
+ function setAccount(account) {
+ accountId = account.homeAccountId;
+ idTokenObject = account.idTokenClaims;
+ const myClaims = JSON.stringify(idTokenObject);
+ welcomeUser(myClaims);
+ }
+
+ function selectAccount() {
+
+ /**
+ * See here for more information on account retrieval:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md
+ */
+
+ const currentAccounts = myMSALObj.getAllAccounts();
+
+ if (currentAccounts.length < 1) {
+ return;
+ } else if (currentAccounts.length > 1) {
+
+ /**
+ * Due to the way MSAL caches account objects, the auth response from initiating a user-flow
+ * is cached as a new account, which results in more than one account in the cache. Here we make
+ * sure we are selecting the account with homeAccountId that contains the sign-up/sign-in user-flow,
+ * as this is the default flow the user initially signed-in with.
+ */
+ const accounts = currentAccounts.filter(account =>
+ account.homeAccountId.toUpperCase().includes(b2cPolicies.names.signUpSignIn.toUpperCase())
+ &&
+ account.idTokenClaims.iss.toUpperCase().includes(b2cPolicies.authorityDomain.toUpperCase())
+ &&
+ account.idTokenClaims.aud === msalConfig.auth.clientId
+ );
+
+ if (accounts.length > 1) {
+ // localAccountId identifies the entity for which the token asserts information.
+ if (accounts.every(account => account.localAccountId === accounts[0].localAccountId)) {
+ // All accounts belong to the same user
+ setAccount(accounts[0]);
+ } else {
+ // Multiple users detected. Logout all to be safe.
+ signOut();
+ };
+ } else if (accounts.length === 1) {
+ setAccount(accounts[0]);
+ }
+
+ } else if (currentAccounts.length === 1) {
+ setAccount(currentAccounts[0]);
+ }
+ }
+
+ // in case of page refresh
+ selectAccount();
+
+ async function handleResponse(response) {
+
+ if (response !== null) {
+ setAccount(response.account);
+ } else {
+ selectAccount();
+ }
+ }
+
+ function signIn() {
+ myMSALObj.loginRedirect(loginRequest);
+ }
+
+ function signOut() {
+ const logoutRequest = {
+ postLogoutRedirectUri: msalConfig.auth.redirectUri,
+ };
+
+ myMSALObj.logoutRedirect(logoutRequest);
+ }
+
+ function getTokenRedirect(request) {
+ request.account = myMSALObj.getAccountByHomeId(accountId);
+
+ return myMSALObj.acquireTokenSilent(request)
+ .then((response) => {
+ // In case the response from B2C server has an empty accessToken field
+ // throw an error to initiate token acquisition
+ if (!response.accessToken || response.accessToken === "") {
+ throw new msal.InteractionRequiredAuthError();
+ } else {
+ console.log("access_token acquired at: " + new Date().toString());
+ accessToken = response.accessToken;
+ passTokenToApi();
+ }
+ }).catch(error => {
+ console.log("Silent token acquisition fails. Acquiring token using redirect. \n", error);
+ if (error instanceof msal.InteractionRequiredAuthError) {
+ // fallback to interaction when silent call fails
+ return myMSALObj.acquireTokenRedirect(request);
+ } else {
+ console.log(error);
+ }
+ });
+ }
+
+ // Acquires an access token and then passes it to the API call
+ function passTokenToApi() {
+ if (!accessToken) {
+ getTokenRedirect(tokenRequest);
+ } else {
+ try {
+ callApi(apiConfig.webApi, accessToken);
+ } catch(error) {
+ console.log(error);
+ }
+ }
+ }
+
+ function editProfile() {
+
+ const editProfileRequest = b2cPolicies.authorities.editProfile;
+ editProfileRequest.loginHint = myMSALObj.getAccountByHomeId(accountId).username;
+
+ myMSALObj.loginRedirect(editProfileRequest);
+ }
+ ```
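+The policy check inside `handleRedirectPromise` above can be illustrated in isolation. This sketch uses a hypothetical decoded claims object:
+
+```javascript
+// Sketch (hypothetical claims): the "tfp" claim identifies the user flow that
+// issued the token, so comparing it case-insensitively to the configured
+// sign-up/sign-in policy name filters out responses from other user flows.
+const idTokenClaims = { tfp: "b2c_1_susi" };  // hypothetical decoded claims
+const signUpSignIn = "B2C_1_SUSI";            // b2cPolicies.names.signUpSignIn
+
+const isSignUpSignInResponse =
+  idTokenClaims.tfp.toUpperCase() === signUpSignIn.toUpperCase();
+
+console.log(isSignUpSignInResponse); // true
+```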
+
+## Configure the web API location and scope
+
+To allow your SPA app to call a web API, provide the web API endpoint location, and the [scopes](./configure-authentication-sample-spa-app.md#app-registration-overview) used to authorize access to the web API.
+
+To configure the web API location and scopes, follow these steps:
+
+1. Inside the *App* folder, create a new file named *apiConfig.js*.
+1. In your *apiConfig.js*, copy and paste the following code:
+
+ ```javascript
+ // The current application coordinates were pre-registered in a B2C tenant.
+ const apiConfig = {
+ b2cScopes: ["https://contoso.onmicrosoft.com/tasks/tasks.read"],
+ webApi: "https://mydomain.azurewebsites.net/tasks"
+ };
+ ```
+
+1. Replace `contoso` with your tenant name. The required scope name can be found as described in the [Configure scopes](./configure-authentication-sample-spa-app.md#22-configure-scopes) article.
+1. Replace the value for `webApi` with your web API endpoint location.
+
+## Call your web API
+
+In this step, define the HTTP request to your API endpoint. The HTTP request is configured to pass the Access Token acquired with MSAL.js into the `Authorization` HTTP header in the request.
+
+The code below defines the HTTP `GET` request to the API endpoint, passing the access token within the `Authorization` HTTP header. The API location is defined by the `webApi` key in *apiConfig.js*.
+
+To call your web API by using the token you acquired, follow these steps:
+
+1. Inside the *App* folder, create a new file named *api.js*.
+1. Add the following code to the *api.js* file:
+
+ ```javascript
+ function callApi(endpoint, token) {
+
+ const headers = new Headers();
+ const bearer = `Bearer ${token}`;
+
+ headers.append("Authorization", bearer);
+
+ const options = {
+ method: "GET",
+ headers: headers
+ };
+
+ logMessage('Calling web API...');
+
+ fetch(endpoint, options)
+ .then(response => response.json())
+ .then(response => {
+
+ if (response) {
+ logMessage('Web API responded: ' + response.name);
+ }
+
+ return response;
+ }).catch(error => {
+ console.error(error);
+ });
+ }
+ ```
+
+## Add the UI elements reference
+
+The SPA app uses JavaScript to control the UI elements. For example, it displays the sign-in and sign-out buttons and renders the user's ID token claims on the screen.
+
+To add the UI elements reference, follow these steps:
+
+1. Inside the *App* folder, create a new file named *ui.js*.
+1. Add the following code to the *ui.js* file:
+
+ ```javascript
+ // Select DOM elements to work with
+ const signInButton = document.getElementById('signIn');
+ const signOutButton = document.getElementById('signOut')
+ const titleDiv = document.getElementById('title-div');
+ const welcomeDiv = document.getElementById('welcome-div');
+ const tableDiv = document.getElementById('table-div');
+ const tableBody = document.getElementById('table-body-div');
+ const editProfileButton = document.getElementById('editProfileButton');
+ const callApiButton = document.getElementById('callApiButton');
+ const response = document.getElementById("response");
+ const label = document.getElementById('label');
+
+ function welcomeUser(claims) {
+ welcomeDiv.innerHTML = `Token claims: </br></br> ${claims}!`
+
+ signInButton.classList.add('d-none');
+ signOutButton.classList.remove('d-none');
+ welcomeDiv.classList.remove('d-none');
+ callApiButton.classList.remove('d-none');
+ }
+
+ function logMessage(s) {
+ response.appendChild(document.createTextNode('\n' + s + '\n'));
+ }
+ ```
+
+## Run your SPA application
+
+In your command shell, run the following commands:
+
```powershell
+npm install
+node index.js
+```
+
+1. Browse to http://localhost:6420.
+1. Select **Sign-in**.
+1. Complete the sign-up or sign-in process.
+
+After you successfully authenticate, the parsed ID token appears on the screen. Select **Call API** to call your API endpoint.
+
+## Next steps
+
+* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
+* Configure [authentication options in your SPA application](enable-authentication-spa-app-options.md)
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/error-codes.md
Previously updated : 10/02/2020 Last updated : 06/16/2021
The following errors can be returned by the Azure Active Directory B2C service.
| `AADB2C90284` | The application with identifier '{0}' has not been granted consent and is unable to be used for local accounts. | | `AADB2C90285` | The application with identifier '{0}' was not found. | | `AADB2C90288` | UserJourney with id '{0}' referenced in TechnicalProfile '{1}' for refresh token redemption for tenant '{2}' does not exist in policy '{3}' or any of its base policies. |
+| `AADB2C90287` | The request contains invalid redirect URI '{0}'.|
| `AADB2C90289` | We encountered an error connecting to the identity provider. Please try again later. | | `AADB2C90296` | Application has not been configured correctly. Please contact administrator of the site you are trying to access. | | `AADB2C99005` | The request contains an invalid scope parameter which includes an illegal character '{0}'. |
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-profile-attributes.md
Previously updated : 04/27/2021 Last updated : 06/16/2021
The table below lists the [user resource type](/graph/api/resources/user) attrib
<sup>1 </sup>Not supported by Microsoft Graph<br><sup>2 </sup>For more information, see [MFA phone number attribute](#mfa-phone-number-attribute)<br><sup>3 </sup>Should not be used with Azure AD B2C
+## Required attributes
+
+To create a user account in the Azure AD B2C directory, provide the following required attributes:
+
+- [Display name](#display-name-attribute)
+
+- [Identities](#identities-attribute) - With at least one identity (a local or a federated account).
+
+- [Password profile](#password-policy-attribute) - If you create a local account, provide the password profile.
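+For example, a Microsoft Graph request body that creates a local account with these required attributes might look like the following (names and values are placeholders):
+
+```json
+{
+  "displayName": "Casey Jensen",
+  "identities": [
+    {
+      "signInType": "emailAddress",
+      "issuer": "contoso.onmicrosoft.com",
+      "issuerAssignedId": "casey.jensen@contoso.com"
+    }
+  ],
+  "passwordProfile": {
+    "password": "<strong-password>",
+    "forceChangePasswordNextSignIn": false
+  },
+  "passwordPolicies": "DisablePasswordExpiration"
+}
+```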
+ ## Display name attribute The `displayName` is the name to display in Azure portal user management for the user, and in the access token Azure AD B2C returns to the application. This property is required.
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/userjourneys.md
Previously updated : 03/04/2021 Last updated : 06/16/2021
The **UserJourney** element contains the following attribute:
| Attribute | Required | Description | | | -- | -- | | Id | Yes | An identifier of a user journey that can be used to reference it from other elements in the policy. The **DefaultUserJourney** element of the [relying party policy](relyingparty.md) points to this attribute. |
+| DefaultCpimIssuerTechnicalProfileReferenceId| No | The default token issuer technical profile reference ID. For example, [JWT token issuer](jwt-issuer-technical-profile.md), [SAML token issuer](saml-issuer-technical-profile.md), or [OAuth2 custom error](oauth2-error-technical-profile.md). If your user journey or sub journey already has another `SendClaims` orchestration step, set the `DefaultCpimIssuerTechnicalProfileReferenceId` attribute to the user journey's token issuer technical profile. |
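+For example, a user journey that sets this attribute to the JWT token issuer might begin like the following (the policy and technical profile names are placeholders):
+
+```xml
+<UserJourney Id="SignUpOrSignIn" DefaultCpimIssuerTechnicalProfileReferenceId="JwtIssuer">
+  <OrchestrationSteps>
+    <!-- The orchestration steps for the journey go here. -->
+  </OrchestrationSteps>
+</UserJourney>
+```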
The **UserJourney** element contains the following elements:
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 03/15/2021 Last updated : 06/16/2021
# What authentication and verification methods are available in Azure Active Directory?
-As part of the sign-in experience for accounts in Azure Active Directory (Azure AD), there are different ways that a user can authenticate themselves. A username and password is the most common way a user would historically provide credentials. With modern authentication and security features in Azure AD, that basic password should be supplemented or replaced with more secure authentication methods.
+Microsoft recommends passwordless authentication methods such as Windows Hello, FIDO2 security keys, and the Microsoft Authenticator app because they provide the most secure sign-in experience. Although a user can sign in by using other common methods such as a username and password, passwords should be replaced with more secure authentication methods.
![Table of the strengths and preferred authentication methods in Azure AD](media/concept-authentication-methods/authentication-methods.png)
-Passwordless authentication methods such as Windows Hello, FIDO2 security keys, and the Microsoft Authenticator app provide the most secure sign-in events.
- Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to an SMS or phone call. To simplify the user on-boarding experience and register for both MFA and self-service password reset (SSPR), we recommend you [enable combined security information registration](howto-registration-mfa-sspr-combined.md). For resiliency, we recommend that you require users to register multiple authentication methods. When one method isn't available for a user during sign-in or SSPR, they can choose to authenticate with another method. For more information, see [Create a resilient access control management strategy in Azure AD](concept-resilient-controls.md).
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Previously updated : 06/08/2021 Last updated : 06/15/2021
-# Conditional Access: Cloud apps, actions, and authentication context
+# Conditional Access: Cloud apps, actions, and authentication context
Cloud apps, actions, and authentication context are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context.
The Microsoft Azure Management application includes multiple services.
### Other applications
-In addition to the Microsoft apps, administrators can add any Azure AD registered application to Conditional Access policies. These applications may include:
+Administrators can add any Azure AD registered application to Conditional Access policies. These applications may include:
- Applications published through [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md) - [Applications added from the gallery](../manage-apps/add-application-portal.md)
User actions are tasks that can be performed by a user. Currently, Conditional A
- **Register or join devices (preview)**: This user action enables administrators to enforce Conditional Access policy when users [register](../devices/concept-azure-ad-register.md) or [join](../devices/concept-azure-ad-join.md) devices to Azure AD. It provides granularity in configuring multi-factor authentication for registering or joining devices instead of a tenant-wide policy that currently exists. There are three key considerations with this user action: - `Require multi-factor authentication` is the only access control available with this user action and all others are disabled. This restriction prevents conflicts with access controls that are either dependent on Azure AD device registration or not applicable to Azure AD device registration.
- - `Client apps` and `Device state` conditions are not available with this user action since they are dependent on Azure AD device registration to enforce Conditional Access policies.
- - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action is not properly enforced. More information regarding this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
+ - `Client apps` and `Device state` conditions aren't available with this user action since they're dependent on Azure AD device registration to enforce Conditional Access policies.
+ - When a Conditional Access policy is enabled with this user action, you must set **Azure Active Directory** > **Devices** > **Device Settings** - `Devices to be Azure AD joined or Azure AD registered require Multi-Factor Authentication` to **No**. Otherwise, the Conditional Access policy with this user action isn't properly enforced. More information about this device setting can found in [Configure device settings](../devices/device-management-azure-portal.md#configure-device-settings).
## Authentication context (Preview)
Create new authentication context definitions by selecting **New authentication
- **Publish to apps** checkbox when checked, advertises the authentication context to apps and makes them available to be assigned. If not checked the authentication context will be unavailable to downstream resources. - **ID** is read-only and used in tokens and apps for request-specific authentication context definitions. It is listed here for troubleshooting and development use cases.
-Administrators can then select published authentication contexts in their Conditional Access policies under **Assignments** > **Cloud apps or actions** > **Authentication context**.
+#### Add to Conditional Access policy
+
+Administrators can select published authentication contexts in their Conditional Access policies under **Assignments** > **Cloud apps or actions** and selecting **Authentication context** from the **Select what this policy applies to** menu.
+ ### Tag resources with authentication contexts For more information about authentication context use in applications, see the following articles. -- [SharePoint Online](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide#more-information-about-the-dependencies-for-the-authentication-context-option)
+- [Microsoft Information Protection sensitivity labels to protect SharePoint sites](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites?view=o365-worldwide#more-information-about-the-dependencies-for-the-authentication-context-option)
- [Microsoft Cloud App Security](/cloud-app-security/session-policy-aad?branch=pr-en-us-2082#require-step-up-authentication-authentication-context)-- Custom applications
+- [Custom applications](../develop/developer-guide-conditional-access-authentication-context.md)
## Next steps
active-directory Quickstart Configure App Access Web Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md
You can add or remove the permissions that appear in this table by using the ste
### Other permissions granted
-You might also see a table entitled **Other permissions granted for {your tenant}** on the **API permissions** pane. The **Other permissions granted for {your tenant}** table shows permissions granted for the tenant that haven't been explicitly configured on the application object. These permissions were dynamically requested and consented to. This section appears only if there is at least one permission that applies.
+You might also see a table titled **Other permissions granted for {your tenant}** on the **API permissions** pane. The **Other permissions granted for {your tenant}** table shows permissions granted for the tenant that haven't been explicitly configured on the application object. These permissions were dynamically requested and consented to by an admin on behalf of all users. This section appears only if there is at least one permission that applies.
You can add the full set of an API's permissions or individual permissions appearing in this table to the **Configured permissions** table. As an admin, you can revoke admin consent for APIs or individual permissions in this section.
The **Grant admin consent** button is *disabled* if you aren't an admin or if no
Advance to the next quickstart in the series to learn how to configure which account types can access your application. For example, you might want to limit access only to those users in your organization (single-tenant) or allow users in other Azure AD tenants (multi-tenant) and those with personal Microsoft accounts (MSA). > [!div class="nextstepaction"]
-> [Modify the accounts supported by an application](./howto-modify-supported-accounts.md)
+> [Modify the accounts supported by an application](./howto-modify-supported-accounts.md)
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
This article assumes that you already understand the different device states ava
The following Windows components play a key role in requesting and using a PRT:

* **Cloud Authentication Provider** (CloudAP): CloudAP is the modern authentication provider for Windows sign-in that verifies users logging on to a Windows 10 device. CloudAP provides a plugin framework that identity providers can build on to enable authentication to Windows using that identity provider's credentials.
-* **Web Account Manager** (WAM): WAM is the default token broker on Windows 10 devices. WAM also provides a plugin framework that identity providers can build on and enable SSO to their applications relying on that identity provider.
+* **Web Account Manager** (WAM): WAM is the default token broker on Windows 10 devices. WAM also provides a plugin framework that identity providers can build on and enable SSO to their applications relying on that identity provider. (Not included in Windows Server 2016 LTSC builds)
* **Azure AD CloudAP plugin**: An Azure AD specific plugin built on the CloudAP framework that verifies user credentials with Azure AD during Windows sign-in.
* **Azure AD WAM plugin**: An Azure AD specific plugin built on the WAM framework that enables SSO to applications that rely on Azure AD for authentication.
* **Dsreg**: An Azure AD specific component on Windows 10 that handles the device registration process for all device states.
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
To use Azure AD entitlement management and assign users to access packages, you
### Viewing assignments programmatically
-You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/accesspackageassignment-list?view=graph-rest-beta&preserve-view=true).
+You can also retrieve assignments in an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the API to [list accessPackageAssignments](/graph/api/accesspackageassignment-list?view=graph-rest-beta&preserve-view=true). While an identity governance administrator can retrieve access packages from multiple catalogs, if the user is assigned only to catalog-specific delegated administrative roles, the request must supply a filter to indicate a specific access package, such as: `$filter=accessPackage/id eq 'a914b616-e04e-476b-aa37-91038f0b165b'`. An application that has the `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` application permission can also use this API.
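As a rough sketch of the filtered list call described above, the snippet below builds the beta endpoint URL with the `$filter` expression URL-encoded. The access package ID is the one from the article's example; the endpoint path is the standard beta entitlement management route, and no request is actually sent here.

```python
from urllib.parse import quote

# Access package ID taken from the article's filter example.
ACCESS_PACKAGE_ID = "a914b616-e04e-476b-aa37-91038f0b165b"

def assignments_url(access_package_id: str) -> str:
    """Build the beta endpoint URL for listing accessPackageAssignments,
    filtered to one access package (needed when the caller holds only
    catalog-specific delegated administrative roles)."""
    base = ("https://graph.microsoft.com/beta/identityGovernance/"
            "entitlementManagement/accessPackageAssignments")
    filter_expr = f"accessPackage/id eq '{access_package_id}'"
    # quote() percent-encodes spaces and quotes in the OData expression.
    return f"{base}?$filter={quote(filter_expr)}"

url = assignments_url(ACCESS_PACKAGE_ID)
```

The resulting URL can be issued as a GET with a bearer token carrying one of the permissions listed above.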
## Directly assign a user
In some cases, you might want to directly assign specific users to an access pac
### Directly assigning users programmatically
-You can also directly assign a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true).
+You can also directly assign a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminAdd`, and the `accessPackageAssignment` property is a structure that contains the `targetId` of the user being assigned.
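The direct-assignment request body described above can be sketched as follows. This is a minimal illustration, assuming the `accessPackageAssignment` structure carries the target user's object ID along with the access package and policy IDs; the argument values are hypothetical placeholders.

```python
import json

def admin_add_body(access_package_id: str, policy_id: str,
                   user_object_id: str) -> str:
    """JSON body for an accessPackageAssignmentRequest that directly
    assigns a user: requestType is AdminAdd, and accessPackageAssignment
    contains the targetId of the user being assigned."""
    return json.dumps({
        "requestType": "AdminAdd",
        "accessPackageAssignment": {
            "targetId": user_object_id,
            "accessPackageId": access_package_id,
            "assignmentPolicyId": policy_id,
        },
    })

# Hypothetical IDs for illustration only.
body = json.loads(admin_add_body("pkg-0001", "policy-0001", "user-0001"))
```

The serialized body would be POSTed to the `accessPackageAssignmentRequests` beta endpoint with an authorized bearer token.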
## Remove an assignment
You can also directly assign a user to an access package using Microsoft Graph.
A notification will appear informing you that the assignment has been removed.
+### Removing an assignment programmatically
+
+You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
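The removal request body described above is simpler than the add case, since only the assignment's `id` is needed. A minimal sketch, with a placeholder GUID:

```python
def admin_remove_body(assignment_id: str) -> dict:
    """Body for an accessPackageAssignmentRequest that removes an
    existing assignment: requestType is AdminRemove, and
    accessPackageAssignment holds the id of the assignment to remove."""
    return {
        "requestType": "AdminRemove",
        "accessPackageAssignment": {"id": assignment_id},
    }

# Placeholder assignment ID for illustration only.
removal = admin_remove_body("00000000-0000-0000-0000-000000000000")
```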
+ ## Next steps

- [Change request and settings for an access package](entitlement-management-access-package-request-policy.md)
-- [View reports and logs](entitlement-management-reports.md)
+- [View reports and logs](entitlement-management-reports.md)
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
A catalog is a container of resources and access packages. You create a catalog
### Creating a catalog programmatically
-You can also create a catalog using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
+You can also create a catalog using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageCatalog](/graph/api/accesspackagecatalog-post?view=graph-rest-beta&preserve-view=true).
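As an illustration of the catalog-creation call mentioned above, the sketch below prepares (but does not send) a POST to the beta `accessPackageCatalogs` endpoint. The description text and token value are hypothetical placeholders.

```python
import json
import urllib.request

GRAPH_CATALOGS = ("https://graph.microsoft.com/beta/identityGovernance/"
                  "entitlementManagement/accessPackageCatalogs")

def create_catalog_request(token: str,
                           display_name: str) -> urllib.request.Request:
    """Prepared (unsent) POST that would create an accessPackageCatalog."""
    payload = {
        "displayName": display_name,
        "description": "Catalog created via Microsoft Graph",  # hypothetical
        "isExternallyVisible": False,
    }
    return urllib.request.Request(
        GRAPH_CATALOGS,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = create_catalog_request("<access-token>", "Example catalog")
```

Calling `urllib.request.urlopen(req)` with a real token would submit the request; it is left unsent here.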
## Add resources to a catalog
To include resources in an access package, the resources must exist in a catalog
### Adding a resource to a catalog programmatically
-You can also add a resource to a catalog using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?view=graph-rest-beta&preserve-view=true).
+You can also add a resource to a catalog using Microsoft Graph. A user in an appropriate role, or a catalog and resource owner, with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the API to [create an accessPackageResourceRequest](/graph/api/accesspackageresourcerequest-post?view=graph-rest-beta&preserve-view=true). However, an application with application permissions cannot yet add a resource to a catalog programmatically without a user context at the time of the request.
## Remove resources from a catalog
active-directory Entitlement Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-scenarios.md
There are several ways that you can configure entitlement management for your or
## Programmatic administration
-You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api).
+You can also manage access packages, catalogs, policies, requests and assignments using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.Read.All` or `EntitlementManagement.ReadWrite.All` permission can call the [entitlement management API](/graph/tutorial-access-package-api). An application with those application permissions can also use many of these API functions, with the exception of managing resources in catalogs and access packages.
## Next steps
active-directory Azure Ad Custom Roles Activate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-custom-roles-activate.md
- Title: Activate Azure AD custom role - Privileged Identity Management (PIM)
-description: How to activate an Azure AD custom role for assignment Privileged Identity Management (PIM)
-------- Previously updated : 08/06/2019-----
-#Customer intent: As a dev, devops, or it admin, I want to learn how to activate Azure AD custom roles, so that I can grant access to resources using this new capability.
--
-# Activate an Azure AD custom role in Privileged Identity Management
-
-Privileged Identity Management in Azure Active Directory (Azure AD) now supports just-in-time and time-bound assignment to custom roles created for Application Management in the Identity and Access Management administrative experience. For more information about creating custom roles to delegate application management in Azure AD, see [Custom administrator roles in Azure Active Directory (preview)](../roles/custom-overview.md).
-
-> [!NOTE]
-> Azure AD custom roles are not integrated with the built-in directory roles during preview. Once the capability is generally available, role management will take place in the built-in roles experience. If you see the following banner, these roles should be managed [in the built-in roles experience](pim-how-to-activate-role.md) and this article does not apply:
->
-> :::image type="content" source="media/pim-how-to-add-role-to-user/pim-new-version.png" alt-text="Select Privileged Identity Management in Azure AD." lightbox="media/pim-how-to-add-role-to-user/pim-new-version.png":::
-
-## Activate a role
-
-When you need to activate an Azure AD custom role, request activation by selecting the My roles navigation option in Privileged Identity Management.
-
-1. Sign in to [the Azure portal](https://portal.azure.com).
-1. Open Azure AD [Privileged Identity Management](https://portal.azure.com/?Microsoft_AAD_IAM_enableCustomRoleManagement=true&Microsoft_AAD_IAM_enableCustomRoleAssignment=true&feature.rbacv2roles=true&feature.rbacv2=true&Microsoft_AAD_RegisteredApps=demo#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart).
-
-1. Select **Azure AD custom roles** to see a list of your eligible Azure AD custom role assignments.
-
- ![See the list of eligible Azure AD custom role assignments](./media/azure-ad-custom-roles-activate/view-preview-roles.png)
-
-> [!Note]
-> Before assigning a role, you must create/configure a role. For further information regarding configuring AAD Custom Roles, see [Configure Azure AD custom roles in Privileged Identity Management](azure-ad-custom-roles-configure.md).
-
-1. On the **Azure AD custom roles (Preview)** page, find the assignment you need.
-1. Select **Activate your role** to open the **Activate** page.
-1. If your role requires multi-factor authentication, select **Verify your identity before proceeding**. You are required to authenticate only once per session.
-1. Select **Verify my identity** and follow the instructions to provide any additional security verification.
-1. To specify a custom application scope, select **Scope** to open the filter pane. You should request access to a role at the minimum scope needed. If your assignment is at an application scope, you can activate only at that scope.
-
- ![Assign an Azure AD resource scope to the role assignment](./media/azure-ad-custom-roles-activate/assign-scope.png)
-
-1. If needed, specify a custom activation start time. When used, the role member is activated at the specified time.
-1. In the **Reason** box, enter the reason for the activation request. A reason can be made required or optional in the role settings.
-1. Select **Activate**.
-
-If the role doesn't require approval, it's activated according to your settings and is added to the list of active roles. If you want to use the activated role, start with the steps in [Assign an Azure AD custom role in Privileged Identity Management](azure-ad-custom-roles-assign.md).
-
-If the role requires approval to activate, you will receive an Azure notification informing you that the request is pending approval.
-
-## Next steps
--- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md)-- [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md)-- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Azure Ad Custom Roles Assign https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-custom-roles-assign.md
- Title: Assign Azure AD custom role - Privileged Identity Management (PIM)
-description: How to assign an Azure AD custom role for assignment Privileged Identity Management (PIM)
-------- Previously updated : 08/06/2019-----
-#Customer intent: As a dev, devops, or it admin, I want to learn how to activate Azure AD custom roles, so that I can grant access to resources using this new capability.
--
-# Assign an Azure AD custom role in Privileged Identity Management
-
-This article tells you how to use Privileged Identity Management (PIM) to create just-in-time and time-bound assignment to custom roles created for managing applications in the Azure Active Directory (Azure AD) administrative experience.
--- For more information about creating custom roles to delegate application management in Azure AD, see [Custom administrator roles in Azure Active Directory (preview)](../roles/custom-overview.md).-- If you haven't used Privileged Identity Management yet, get more information at [Start using Privileged Identity Management](pim-getting-started.md).-- For information about how to grant another administrator access to manage Privileged Identity Management, see [Grant access to other administrators to manage Privileged Identity Management](pim-how-to-give-access-to-pim.md).-
-> [!NOTE]
-> Azure AD custom roles are not integrated with the built-in directory roles during preview. Once the capability is generally available, role management will take place in the built-in roles experience. If you see the following banner, these roles should be managed [in the built-in roles experience](pim-how-to-activate-role.md) and this article does not apply:
->
-> [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-## Assign a role
-
-Privileged Identity Management can manage custom roles you can create in Azure Active Directory (Azure AD) application management. The following steps make an eligible assignment to a custom directory role.
-
-1. Sign in to [Privileged Identity Management](https://portal.azure.com/?Microsoft_AAD_IAM_enableCustomRoleManagement=true&Microsoft_AAD_IAM_enableCustomRoleAssignment=true&feature.rbacv2roles=true&feature.rbacv2=true&Microsoft_AAD_RegisteredApps=demo#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart) in the Azure portal with a user account that is assigned to the Privileged role administrator role.
-1. Select **Azure AD custom roles (Preview)**.
-
- ![Select Azure AD custom roles preview to see eligible role assignments](./media/azure-ad-custom-roles-assign/view-custom.png)
-
-1. Select **Roles** to see a list of custom roles for Azure AD applications.
-
- ![Select Roles see the list of eligible role assignments](./media/azure-ad-custom-roles-assign/view-roles.png)
-
-1. Select **Add member** to open the assignment page.
-1. To restrict the scope of the role assignment to a single application, select **Scope** to specify an application scope.
-
- ![restrict the scope of eligible role assignments in Azure AD](./media/azure-ad-custom-roles-assign/set-scope.png)
-
-1. Select **Select a role** to open the **Select a role** list.
-
- ![select the eligible role to assign to a user](./media/azure-ad-custom-roles-assign/select-role.png)
-
-1. Select a role you want to assign and then click **Select**. The **Select a member** list opens.
-
- ![select the user to whom you're assigning the role](./media/azure-ad-custom-roles-assign/select-member.png)
-
-1. Select a user you want to assign to the role and then click **Select**. The **Membership settings** list opens.
-
- ![Set the role assignment type to eligible or active](./media/azure-ad-custom-roles-assign/membership-settings.png)
-
-1. On the **Membership settings** page, select **Eligible** or **Active**:
-
- - **Eligible** assignments require the user assigned to the role to perform an action before they can use the role. Actions might include passing a multi-factor authentication check, providing a business justification, or requesting approval from designated approvers.
- - **Active** assignments don't require the assigned user to perform any action to use the role. Active users have the privileges assigned to the role at all times.
-
-1. If the **Permanent** check box is present and available (depending on role settings), you can specify whether the assignment is permanent. Select the check box to make the assignment permanently eligible or permanently assigned. Clear the check box to specify an assignment duration.
-1. To create the new role assignment, click **Save** and then **Add**. A notification of the assignment process status is displayed.
-
-To verify the role assignment, in an open role, select **Assignments** > **Assign** and verify that your role assignment is properly identified as eligible or active.
-
- ![Check to see if the role assignment is visible as eligible or active](./media/azure-ad-custom-roles-assign/verify-assignments.png)
-
-## Next steps
--- [Activate an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md)-- [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md)-- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Azure Ad Custom Roles Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-custom-roles-configure.md
- Title: Configure Azure AD custom role - Privileged Identity Management (PIM)
-description: How to configure Azure AD custom roles in Privileged Identity Management (PIM)
-------- Previously updated : 08/06/2019-----
-#Customer intent: As a dev, devops, or it admin, I want to learn how to activate Azure AD custom roles, so that I can grant access to resources using this new capability.
--
-# Configure Azure AD custom roles in Privileged Identity Management
-
-A privileged role administrator can change the role settings that apply to a user when they activate their assignment to a custom role and for other application administrators that are assigning custom roles.
-
-> [!NOTE]
-> Azure AD custom roles are not integrated with the built-in directory roles during preview. Once the capability is generally available, role management will take place in the built-in roles experience. If you see the following banner, these roles should be managed [in the built-in roles experience](pim-how-to-activate-role.md) and this article does not apply:
->
-> [![Select Azure AD > Privileged Identity Management](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-## Open role settings
-
-Follow these steps to open the settings for an Azure AD role.
-
-1. Sign in to [Privileged Identity Management](https://portal.azure.com/?Microsoft_AAD_IAM_enableCustomRoleManagement=true&Microsoft_AAD_IAM_enableCustomRoleAssignment=true&feature.rbacv2roles=true&feature.rbacv2=true&Microsoft_AAD_RegisteredApps=demo#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart) in the Azure portal with a user account that is assigned to the Privileged role administrator role.
-1. Select **Azure AD custom roles (Preview)**.
-
- ![Select Azure AD custom roles preview to see eligible role assignments](./media/azure-ad-custom-roles-configure/settings-list.png)
-
-1. Select **Setting** to open the **Settings** page. Select the role for the settings you want to configure.
-1. Select **Edit** to open the **Role settings** page.
-
- ![Screenshot that shows the "Role setting details" page with the "Edit" action selected.](./media/azure-ad-custom-roles-configure/edit-settings.png)
-
-## Role settings
-
-There are several settings you can configure.
-
-### Assignment duration
-
-You can choose from two assignment duration options for each assignment type (eligible or active) when you configure settings for a role. These options become the default maximum duration when a member is assigned to the role in Privileged Identity Management.
-
-You can choose one of these *eligible* assignment duration options.
--- **Allow permanent eligible assignment**: Administrators can assign permanent eligible membership.-- **Expire eligible assignment after**: Administrators can require that all eligible assignments have a specified start and end date.-
-Also, you can choose one of these *active* assignment duration options:
--- **Allow permanent active assignment**: Administrators can assign permanent active membership.-- **Expire active assignment after**: Administrators can require that all active assignments have a specified start and end date.-
-### Require Azure AD Multi-Factor Authentication
-
-Privileged Identity Management provides optional enforcement of Azure AD Multi-Factor Authentication for two distinct scenarios.
--- **Require Multi-Factor Authentication on active assignment**-
- If you only want to assign a member to a role for a short duration (one day, for example), it might be too slow to require the assigned members to request activation. In this scenario, Privileged Identity Management can't enforce multi-factor authentication when the user activates their role assignment, because they are already active in the role from the moment they are assigned. To ensure that the administrator fulfilling the assignment is who they say they are, select the **Require Multi-Factor Authentication on active assignment** box.
--- **Require Multi-Factor Authentication on activation**-
- You can require eligible users assigned to a role to enroll in Azure AD Multi-Factor Authentication before they can activate. This process ensures that the user who is requesting activation is who they say they are with reasonable certainty. Enforcing this option protects critical roles in situations when the user account might have been compromised. To require an eligible member to run Azure AD Multi-Factor Authentication before activation, select the **Require Multi-Factor Authentication on activation** box.
-
-For more information, see [Multi-factor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
-
-### Activation maximum duration
-
-Use the **Activation maximum duration** slider to set the maximum time, in hours, that a role stays active before it expires. This value can be from 1 to 24 hours.
-
-### Require justification
-
-You can require that members enter a justification on active assignment or when they activate. To require justification, select the **Require justification on active assignment** check box or the **Require justification on activation** box.
-
-### Require approval to activate
-
-If you want to require approval to activate a role, follow these steps.
-
-1. Select the **Require approval to activate** check box.
-1. Select **Select approvers** to open the **Select a member or group** list.
-
- ![Open the Azure AD custom role to edit settings](./media/azure-ad-custom-roles-configure/select-approvers.png)
-
-1. Select at least one member or group and then click **Select**. You must select at least one approver. There are no default approvers. Your selections will appear in the list of selected approvers.
-1. Once you have specified the role settings, select **Update** to save your changes.
-
-## Next steps
--- [Activate an Azure AD custom role](azure-ad-custom-roles-activate.md)-- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md)-- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Azure Ad Custom Roles Update Remove https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-custom-roles-update-remove.md
- Title: Update or remove Azure AD custom role - Privileged Identity Management (PIM)
-description: How to update or remove an Azure AD custom role assignment Privileged Identity Management (PIM)
-------- Previously updated : 08/06/2019-----
-#Customer intent: As a dev, devops, or it admin, I want to learn how to activate Azure AD custom roles, so that I can grant access to resources using this new capability.
--
-# Update or remove an assigned Azure AD custom role in Privileged Identity Management
-
-This article tells you how to use Privileged Identity Management (PIM) to update or remove just-in-time and time-bound assignment to custom roles created for application management in the Azure Active Directory (Azure AD) administrative experience.
--- For more information about creating custom roles to delegate application management in Azure AD, see [Custom administrator roles in Azure Active Directory (preview)](../roles/custom-overview.md). -- If you haven't used Privileged Identity Management yet, get more information at [Start using Privileged Identity Management](pim-getting-started.md).-
-> [!NOTE]
-> Azure AD custom roles are not integrated with the built-in directory roles during preview. Once the capability is generally available, role management will take place in the built-in roles experience. If you see the following banner, these roles should be managed [in the built-in roles experience](pim-how-to-add-role-to-user.md) and this article does not apply:
->
-> [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-## Update or remove an assignment
-
-Follow these steps to update or remove an existing custom role assignment.
-
-1. Sign in to [Privileged Identity Management](https://portal.azure.com/?Microsoft_AAD_IAM_enableCustomRoleManagement=true&Microsoft_AAD_IAM_enableCustomRoleAssignment=true&feature.rbacv2roles=true&feature.rbacv2=true&Microsoft_AAD_RegisteredApps=demo#blade/Microsoft_Azure_PIMCommon/CommonMenuBlade/quickStart) in the Azure portal with a user account that is assigned to the Privileged role administrator role.
-1. Select **Azure AD custom roles (Preview)**.
-
- ![Select Azure AD custom roles preview to see eligible role assignments](./media/azure-ad-custom-roles-assign/view-custom.png)
-
-1. Select **Roles** to see the **Assignments** list of custom roles for Azure AD applications.
-
- ![Select Roles see the list of eligible role assignments](./media/azure-ad-custom-roles-update-remove/assignments-list.png)
-
-1. Select the role that you want to update or remove.
-1. Find the role assignment on the **Eligible roles** or **Active roles** tabs.
-1. Select **Update** or **Remove** to update or remove the role assignment.
-
- ![Select remove or update in the eligible role assignment](./media/azure-ad-custom-roles-update-remove/remove-update.png)
-
-## Next steps
--- [Activate an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md)
active-directory Azure Ad Pim Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-pim-approval-workflow.md
ms.devlang: na
na Previously updated : 02/07/2020 Last updated : 06/03/2021
With Azure Active Directory (Azure AD) Privileged Identity Management (PIM), you can configure roles to require approval for activation, and choose one or multiple users or groups as delegated approvers. Delegated approvers have 24 hours to approve requests. If a request is not approved within 24 hours, then the eligible user must re-submit a new request. The 24 hour approval time window is not configurable.
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure roles. This adds features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, the procedures you follow in this article depend on the version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version of Privileged Identity Management you have. After you know your version of Privileged Identity Management, you can select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user who is in the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-Follow the steps in this article to approve or deny requests for Azure AD roles.
-
-## [New version](#tab/new)
-
-### View pending requests
+## View pending requests
As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management.
As a delegated approver, you'll receive an email notification when an Azure AD r
In the **Requests for role activations** section, you'll see a list of requests pending your approval.
-### Approve requests
+## Approve requests
1. Find and select the request that you want to approve. An approve or deny page appears.
As a delegated approver, you'll receive an email notification when an Azure AD r
 ![Approve notification showing request was approved](./media/pim-resource-roles-approval-workflow/resources-approve-pane.png)
-### Deny requests
+## Deny requests
1. Find and select the request that you want to deny. An approve or deny page appears.
As a delegated approver, you'll receive an email notification when an Azure AD r
1. Select **Deny**. A notification appears with your denial.
-### Workflow notifications
+## Workflow notifications
Here's some information about workflow notifications:
Here's some information about workflow notifications:
>[!NOTE]
>A Global admin or Privileged role admin who believes that an approved user should not be active can remove the active role assignment in Privileged Identity Management. Although administrators are not notified of pending requests unless they are an approver, they can view and cancel any pending requests for all users by viewing pending requests in Privileged Identity Management.
-## [Previous version](#tab/previous)
-
-### View pending requests
-
-As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Click **Azure AD roles**.
-
-1. Click **Approve requests**.
-
- ![Azure AD roles - Approve requests](./media/azure-ad-pim-approval-workflow/approve-requests.png)
-
- You'll see a list of requests pending your approval.
-
-### Approve requests
-
-1. Select the requests you want to approve and then click **Approve** to open the Approve selected requests pane.
-
- ![Approve requests list with Approve option highlighted](./media/azure-ad-pim-approval-workflow/pim-approve-requests-list.png)
-
-1. In the **Approve reason** box, type a reason.
-
- ![Approve selected requests pane with a approve reason](./media/azure-ad-pim-approval-workflow/pim-approve-selected-requests.png)
-
-1. Click **Approve**.
-
- The Status symbol will be updated with your approval.
-
- ![Approve selected requests pane after Approve button clicked](./media/azure-ad-pim-approval-workflow/pim-approve-status.png)
-
-### Deny requests
-
-1. Select the requests you want to deny and then click **Deny** to open the Deny selected requests pane.
-
- ![Approve requests list with Deny option highlighted](./media/azure-ad-pim-approval-workflow/pim-deny-requests-list.png)
-
-1. In the **Deny reason** box, type a reason.
-
- ![Deny selected requests pane with a deny reason](./media/azure-ad-pim-approval-workflow/pim-deny-selected-requests.png)
-
-1. Select **Deny**.
-
- The Status symbol will be updated with your denial.
---

## Next steps

- [Email notifications in Privileged Identity Management](pim-email-notifications.md)
active-directory Azure Ad Roles Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/azure-ad-roles-features.md
- Title: Azure AD role features in Privileged Identity Management | Microsoft Docs
-description: How to manage Azure AD roles for assignment Privileged Identity Management (PIM)
-------- Previously updated : 07/10/2020-----
-#Customer intent: As a dev, devops, or it admin, I want to learn how to manage Azure AD roles, so that I can grant access to resources using new capabilities.
--
-# Management capabilities for Azure AD roles in Privileged Identity Management
-
-The management experience for Azure AD roles in Privileged Identity Management has been updated to unify how Azure AD roles and Azure resource roles are managed. Previously, Privileged Identity Management for Azure resource roles has had a couple of key features that were not available for Azure AD roles.
-
-With the update being currently rolled out, we are merging the two into a single management experience, and in it you get the same functionality for Azure AD roles as for Azure resource roles. This article informs you of the updated features and any requirements.
-
-## Time-bound assignments
-
-Previously, there were two possible states for role assignments: *eligible* and *permanent*. Now you can also set a start and end time for each type of assignment. This addition gives you four possible states into which you can place an assignment:
-
-- Eligible permanently
-- Active permanently
-- Eligible, with specified start and end dates for assignment
-- Active, with specified start and end dates for assignment
-
-In many cases, even if you don't want users to have eligible assignments and activate roles every time, you can still protect your Azure AD organization by setting an expiration time for assignments. For example, if you have temporary users who are eligible, consider setting an expiration so that they're automatically removed from the role assignment when their work is complete.
-
-## New role settings
-
-We are also adding new settings for Azure AD roles.
-
-- **Previously**, you could only configure activation settings on a per-role basis. That is, activation settings such as multi-factor authentication requirements and incident/request ticket requirements were applied to all users eligible for a specified role.
-- **Now**, you can configure whether an individual user needs to perform multi-factor authentication before they can activate a role. Also, you can have advanced control over your Privileged Identity Management emails related to specific roles.
-
-## Extend and renew assignments
-
-As soon as you figure out time-bound assignment, the first question you might ask is what happens if a role is expired? In this new version, we provide two options for this scenario:
-
-- **Extend**: When a role assignment nears its expiration, the user can use Privileged Identity Management to request an extension for that role assignment
-- **Renew**: When a role assignment has expired, the user can use Privileged Identity Management to request a renewal for that role assignment
-
-Both user-initiated actions require an approval from a Global administrator or Privileged role administrator. Admins will no longer need to be in the business of managing these expirations. They just need to wait for the extension or renewal requests and approve them if the request is valid.
-
-## API changes
-
-When customers have the updated version rolled out to their Azure AD organization, the existing graph API will stop working. You must transition to use the [Graph API for Azure resource roles](/graph/api/resources/privilegedidentitymanagement-resources?view=graph-rest-beta&preserve-view=true). To manage Azure AD roles using that API, swap `/azureResources` with `/aadroles` in the signature and use the Directory ID for the `resourceId`.
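To illustrate the endpoint swap described above, here is a hedged sketch of the request shapes (the path segments follow the beta `privilegedAccess` API pattern; the placeholder IDs are hypothetical and would need to be replaced with real values from your tenant):

```http
# Previous pattern, used for Azure resource roles:
GET https://graph.microsoft.com/beta/privilegedAccess/azureResources/resources/{resource-id}/roleAssignments

# For Azure AD roles, swap azureResources for aadRoles and use your
# directory (tenant) ID as the resourceId:
GET https://graph.microsoft.com/beta/privilegedAccess/aadRoles/resources/{directory-id}/roleAssignments
```

Both requests require an authorized bearer token; consult the linked Graph API reference for the exact resource types and permissions.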
-
-We have tried our best to reach out to all customers who are using the previous API to let them know about this change ahead of time. If your Azure AD organization was moved on to the new version and you still depend on the old API, reach out to the team at pim_preview@microsoft.com.
-
-## PowerShell change
-
-For customers who are using the Privileged Identity Management PowerShell module for Azure AD roles, the PowerShell will stop working with the update. In place of the previous cmdlets you must use the Privileged Identity Management cmdlets inside the Azure AD Preview PowerShell module. Install the Azure AD PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/AzureADPreview/2.0.0.17). You can now [read the documentation and samples for PIM operations in this PowerShell module](powershell-for-azure-ad-roles.md).
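As a minimal sketch of the migration, the AzureADPreview module exposes PIM cmdlets such as `Get-AzureADMSPrivilegedRoleAssignment`; the example below assumes you have Gallery access and sign-in rights, and the tenant ID is a placeholder:

```powershell
# Install and load the Azure AD Preview module, then sign in
Install-Module -Name AzureADPreview -Scope CurrentUser
Import-Module AzureADPreview
Connect-AzureAD

# List Azure AD role assignments managed by PIM. "aadRoles" is the provider,
# and the resource ID is your directory (tenant) ID.
Get-AzureADMSPrivilegedRoleAssignment -ProviderId "aadRoles" -ResourceId "<your-tenant-id>"
```

Exact cmdlet behavior depends on the installed module version; see the linked documentation and samples for the supported operations.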
-
-## Next steps
-
-- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md)
-- [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md)
-- [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md)
-- [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-configure.md
Previously updated : 03/19/2021 Last updated : 06/15/2021 -+
## Reasons to use
-Organizations want to minimize the number of people who have access to secure information or resources, because that reduces the chance of a malicious actor getting that access, or an authorized user inadvertently impacting a sensitive resource. However, users still need to carry out privileged operations in Azure AD, Azure, Microsoft 365, or SaaS apps. Organizations can give users just-in-time privileged access to Azure resources and Azure AD. There is a need for oversight for what those users are doing with their administrator privileges.
+Organizations want to minimize the number of people who have access to secure information or resources, because that reduces the chance of
+
+- a malicious actor getting access
+- an authorized user inadvertently impacting a sensitive resource
+
+However, users still need to carry out privileged operations in Azure AD, Azure, Microsoft 365, or SaaS apps. Organizations can give users just-in-time privileged access to Azure and Azure AD resources and can oversee what those users are doing with their privileged access.
+
+## License requirements
++
+For information about licenses for users, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
## What does it do?
Once you set up Privileged Identity Management, you'll see **Tasks**, **Manage**
## Who can do what?
-For Azure AD roles in Privileged Identity Management, only a user who is in the Privileged role administrator or Global administrator role can manage assignments for other administrators. You can [grant access to other administrators to manage Privileged Identity Management](pim-how-to-give-access-to-pim.md). Global Administrators, Security Administrators, Global readers, and Security Readers can also view assignments to Azure AD roles in Privileged Identity Management.
+For Azure AD roles in Privileged Identity Management, only a user who is in the Privileged Role Administrator or Global Administrator role can manage assignments for other administrators. Global Administrators, Security Administrators, Global Readers, and Security Readers can also view assignments to Azure AD roles in Privileged Identity Management.
For Azure resource roles in Privileged Identity Management, only a subscription administrator, a resource Owner, or a resource User Access administrator can manage assignments for other administrators. Users who are Privileged Role Administrators, Security Administrators, or Security Readers do not by default have access to view assignments to Azure resource roles in Privileged Identity Management.
+## Extend and renew assignments
+
+After you set up your time-bound owner or member assignments, the first question you might ask is: what happens when an assignment expires? In this new version, we provide two options for this scenario:
+
+- **Extend**: When a role assignment nears expiration, the user can use Privileged Identity Management to request an extension for the role assignment.
+- **Renew**: When a role assignment has already expired, the user can use Privileged Identity Management to request a renewal for the role assignment.
+
+Both user-initiated actions require approval from a Global Administrator or Privileged Role Administrator. Admins no longer need to manage assignment expirations; they can simply wait for the extension or renewal requests to arrive and approve or deny them.
+ ## Scenarios Privileged Identity Management supports the following scenarios:
-### Privileged Role administrator permissions
+### Privileged Role Administrator permissions
- Enable approval for specific roles - Specify approver users or groups to approve requests
Privileged Identity Management supports the following scenarios:
- View the status of your request to activate - Complete your task in Azure AD if activation was approved
-## Terminology
+## Managing privileged access Azure AD groups (preview)
-To better understand Privileged Identity Management and its documentation, you should review the following terms.
+In Privileged Identity Management (PIM), you can now assign eligibility for membership or ownership of privileged access groups. Starting with this preview, you can assign Azure Active Directory (Azure AD) built-in roles to cloud groups and use PIM to manage group member and owner eligibility and activation. For more information about role-assignable groups in Azure AD, see [Use cloud groups to manage role assignments in Azure Active Directory (preview)](../roles/groups-concept.md).
-| Term or concept | Role assignment category | Description |
-| | | |
-| eligible | Type | A role assignment that requires a user to perform one or more actions to use the role. If a user has been made eligible for a role, that means they can activate the role when they need to perform privileged tasks. There's no difference in the access given to someone with a permanent versus an eligible role assignment. The only difference is that some people don't need that access all the time. |
-| active | Type | A role assignment that doesn't require a user to perform any action to use the role. Users assigned as active have the privileges assigned to the role. |
-| activate | | The process of performing one or more actions to use a role that a user is eligible for. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. |
-| assigned | State | A user that has an active role assignment. |
-| activated | State | A user that has an eligible role assignment, performed the actions to activate the role, and is now active. Once activated, the user can use the role for a preconfigured period-of-time before they need to activate again. |
-| permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. |
-| permanent active | Duration | A role assignment where a user can always use the role without performing any actions. |
-| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. |
-| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. |
-| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. |
-| principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they are authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. |
+>[!Important]
+> To assign a privileged access group to a role for administrative access to Exchange, the Security and Compliance center, or SharePoint, use the Azure AD portal **Roles and Administrators** experience, not the Privileged Access Groups experience, to make the user or group eligible for activation into the group.
-## License requirements
+### Different just-in-time policies for each group
+Some organizations use tools like Azure AD business-to-business (B2B) collaboration to invite their partners as guests to their Azure AD organization. Instead of a single just-in-time policy for all assignments to a privileged role, you can create two different privileged access groups with their own policies. You can enforce less strict requirements for your trusted employees, and stricter requirements like approval workflow for your partners when they request activation into their assigned group.
-For information about licenses for users, see [License requirements to use Privileged Identity Management](subscription-requirements.md).
+### Activate multiple role assignments in one request
+
+With the privileged access groups preview, you can give workload-specific administrators quick access to multiple roles with a single just-in-time request. For example, your Tier 3 Office Admins might need just-in-time access to the Exchange Admin, Office Apps Admin, Teams Admin, and Search Admin roles to thoroughly investigate incidents daily. Previously, this would require four consecutive requests, a process that takes some time. Instead, you can create a role-assignable group called "Tier 3 Office Admins", assign it to each of the four roles previously mentioned (or any Azure AD built-in roles), and enable it for privileged access in the group's **Activity** section. Once enabled for privileged access, you can configure the just-in-time settings for members of the group and assign your admins and owners as eligible. When the admins elevate into the group, they'll become members of all four Azure AD roles.
+
+## Invite guest users and assign Azure resource roles in Privileged Identity Management
+
+Azure Active Directory (Azure AD) guest users are part of the business-to-business (B2B) collaboration capabilities within Azure AD, so you can manage external guest users and vendors as guests in Azure AD. For example, you can use Privileged Identity Management features for Azure identity tasks with guests, such as assigning access to specific Azure resources, specifying assignment duration and end date, or requiring two-step verification on active assignment or activation. For more information on how to invite a guest to your organization and manage their access, see [Add B2B collaboration users in the Azure AD portal](../external-identities/add-users-administrator.md).
+
+### When would you invite guests?
+
+Here are a couple of examples of when you might invite guests to your organization:
+
+- Allow an external self-employed vendor that only has an email account to access your Azure resources for a project.
+- Allow an external partner in a large organization that uses on-premises Active Directory Federation Services to access your expense application.
+- Allow support engineers not in your organization (such as Microsoft support) to temporarily access your Azure resource to troubleshoot issues.
+
+### How does collaboration using B2B guests work?
+
+When you use B2B collaboration, you can invite an external user to your organization as a guest. The guest can be managed as a user in your organization, but a guest has to be authenticated in their home organization and not in your Azure AD organization. This means that if the guest no longer has access to their home organization, they also lose access to your organization. For example, if the guest leaves their organization, they automatically lose access to any resources you shared with them in Azure AD without you having to do anything. For more information about B2B collaboration, see [What is guest user access in Azure Active Directory B2B?](../external-identities/what-is-b2b.md).
+
+![Diagram showing how a guest user is authenticated in their home directory](./media/pim-resource-roles-external-users/b2b-external-user.png)
## Next steps
active-directory Pim Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-deployment-plan.md
Previously updated : 08/27/2020 Last updated : 06/03/2021
Now that you have identified the test users, use this step to configure Privileg
1. Navigate to **Azure AD roles**, select **Roles**, and then select the role you configured.
-1. For the group of test users, if they are already a permanent administrator, you can make them eligible by searching for them and converting them from permanent to eligible by selecting the three dots on their row. If they don't have the role assignments yet, you can [make a new eligible assignment](pim-how-to-add-role-to-user.md#make-a-user-eligible-for-a-role).
+1. For the group of test users, if they are already permanent administrators, you can make them eligible by searching for them and selecting the three dots on their row to convert them from permanent to eligible. If they don't have the role assignments yet, you can [make a new eligible assignment](pim-how-to-add-role-to-user.md).
-1. Repeat steps 1-3 for all the roles you want to test.
+1. Repeat steps 1-3 for each role that you want to test.
1. Once you have set up the test users, you should send them the link for how to [activate their Azure AD role](pim-how-to-activate-role.md).
active-directory Pim Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-email-notifications.md
Who receives these emails for Azure AD roles depends on your role, the event, an
| Security Administrator</br>(Activated/Eligible) | No | Yes* | Yes |
| Global Administrator</br>(Activated/Eligible) | No | Yes* | Yes |
-\* If the [**Notifications** setting](pim-how-to-change-default-settings.md#notifications) is set to **Enable**.
+\* If the [**Notifications** setting](pim-how-to-change-default-settings.md) is set to **Enable**.
The following shows an example email that is sent when a user activates an Azure AD role for the fictional Contoso organization.
A weekly Privileged Identity Management summary email for Azure AD roles is sent
![Weekly Privileged Identity Management digest email for Azure AD roles](./media/pim-email-notifications/email-directory-weekly.png)
-The email includes four tiles:
+The email includes:
| Tile | Description | | | |
The email includes four tiles:
| **Role assignments in Privileged Identity Management** | Number of times users are assigned an eligible role inside Privileged Identity Management. | | **Role assignments outside of PIM** | Number of times users are assigned a permanent role outside of Privileged Identity Management (inside Azure AD). |
-The **Overview of your top roles** section lists the top five roles in your organization based on total number of permanent and eligible administrators for each role. The **Take action** link opens the [PIM wizard](pim-security-wizard.md) where you can convert permanent administrators to eligible administrators in batches.
+The **Overview of your top roles** section lists the top five roles in your organization based on total number of permanent and eligible administrators for each role. The **Take action** link opens [Discovery & Insights](pim-security-wizard.md) where you can convert permanent administrators to eligible administrators in batches.
## Email timing for activation approvals
When users activate their role and the role setting requires approval, approvers
- Request to approve or deny the user's activation request (sent by the request approval engine)
- The user's request is approved (sent by the request approval engine)
-Also, Global administrators and Privileged Role administrators receive an email for each approval:
+Also, Global Administrators and Privileged Role Administrators receive an email for each approval:
- The user's role is activated (sent by Privileged Identity Management)
active-directory Pim How To Activate Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md
Previously updated : 05/28/2021 Last updated : 06/03/2021
If you have been made *eligible* for an administrative role, then you must *acti
This article is for administrators who need to activate their Azure AD role in Privileged Identity Management.
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure resource roles. This creates additional features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, which procedures that you follow in this article depend on version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version of Privileged Identity Management you have. After you know your version of Privileged Identity Management, you can select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
# [New version](#tab/new)

## Activate a role for new version
If you do not require activation of a role that requires approval, you can cance
When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, sign out of the portal you are trying to perform the action and then sign back in. In the Azure portal, PIM signs you out and back in automatically.
-# [Previous version](#tab/previous)
-
-## Activate a role (previous version)
-
-When you need to take on an Azure AD role, you can request activation by using the **My roles** navigation option in Privileged Identity Management.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md).
-
-1. Select **Azure AD roles**.
-
-1. Select **My roles** to see a list of your eligible Azure AD roles.
-
- ![Azure AD roles - My roles showing eligible or active roles list](./media/pim-how-to-activate-role/directory-roles-my-roles.png)
-
-1. Find a role that you want to activate.
-
- ![Azure AD roles - My eligible roles list showing Activate link](./media/pim-how-to-activate-role/directory-roles-my-roles-activate.png)
-
-1. Select **Activate** to open the Role activation details pane.
-
-1. If your role requires multi-factor authentication (MFA), select **Verify your identity before proceeding**. You only have to authenticate once per session.
-
- ![Verify my identity pane with MFA before role activation](./media/pim-how-to-activate-role/directory-roles-my-roles-mfa.png)
-
-1. Select **Verify my identity** and follow the instructions to provide additional security verification.
-
- ![Additional security verification page asking how to contact you](./media/pim-how-to-activate-role/additional-security-verification.png)
-
-1. Select **Activate** to open the Activation pane.
-
- ![Activation pane to specify start time, duration, ticket, and reason](./media/pim-how-to-activate-role/directory-roles-activate.png)
-
-1. If necessary, specify a custom activation start time.
-
-1. Specify the activation duration.
-
-1. In the **Activation reason** box, enter the reason for the activation request. Some roles require you to supply a trouble ticket number.
-
- ![Completed Activation pane with a custom start time, duration, ticket, and reason](./media/pim-how-to-activate-role/directory-roles-activation-pane.png)
-
-1. Select **Activate**.
-
- If the role does not require approval, an **Activation status** pane appears that displays the status of the activation.
-
- ![Activation status page showing the three stages of activation](./media/pim-how-to-activate-role/activation-status.png)
-
- Once all the stages are complete, select the **Sign out** link to sign out of the Azure portal. When you sign back in to the portal, you can now use the role.
-
- If the [role requires approval](./azure-ad-pim-approval-workflow.md) to activate, an Azure notification will appear in the upper right corner of your browser informing you the request is pending approval.
-
-## View the status of your requests (previous version)
-
-You can view the status of your pending requests to activate.
-
-1. Open Azure AD Privileged Identity Management.
-
-1. Select **Azure AD roles**.
-
-1. Select **My requests** to see a list of your requests.
-
- ![Azure AD roles - My requests list](./media/pim-how-to-activate-role/directory-roles-my-requests.png)
-
-## Deactivate a role (previous version)
-
-Once a role has been activated, it automatically deactivates when its time limit (eligible duration) is reached.
-
-If you complete your administrator tasks early, you can also deactivate a role manually in Azure AD Privileged Identity Management.
-
-1. Open Azure AD Privileged Identity Management.
-
-1. Select **Azure AD roles**.
-
-1. Select **My roles**.
-
-1. Select **Active roles** to see your list of active roles.
-
-1. Find the role you're done using and then select **Deactivate**.
-
-## Cancel a pending request (previous version)
-
-If you do not require activation of a role that requires approval, you can cancel a pending request at any time.
-
-1. Open Azure AD Privileged Identity Management.
-
-1. Select **Azure AD roles**.
-
-1. Select **My requests**.
-
-1. For the role that you want to cancel, select the **Cancel** button.
-
- When you select **Cancel**, the request will be canceled. To activate the role again, you will have to submit a new request for activation.
-
- ![My requests list with the Cancel button highlighted](./media/pim-how-to-activate-role/directory-role-cancel.png)
-
-## Troubleshoot (previous version)
-
-### Permissions are not granted after activating a role
-
-When you activate a role in Privileged Identity Management, your activation might be delayed in admin portals other than the Azure portal, such as the Office 365 portal. If your activation is delayed, sign out of the portal you're in and then sign back in. Then, use Privileged Identity Management to verify that you are listed as the member of the role.
-
-
- ## Next steps - [View audit history for Azure AD roles](pim-how-to-use-audit-log.md)
active-directory Pim How To Add Role To User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md
Previously updated : 02/16/2021 Last updated : 06/03/2021 + # Assign Azure AD roles in Privileged Identity Management
With Azure Active Directory (Azure AD), a Global administrator can make **perman
The Azure AD Privileged Identity Management (PIM) service also allows Privileged role administrators to make permanent admin role assignments. Additionally, Privileged role administrators can make users **eligible** for Azure AD admin roles. An eligible administrator can activate the role when they need it, and then their permissions expire once they're done.
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure resource roles. This update adds features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, the procedures you follow in this article depend on the version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version you have, and then select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user who is in the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-# [New version](#tab/new)
+Privileged Identity Management supports both built-in and custom Azure AD roles. For more information on Azure AD custom roles, see [Role-based access control in Azure Active Directory](../roles/custom-overview.md).
## Assign a role
Follow these steps to make a user eligible for an Azure AD admin role.
1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
- For information about how to grant another administrator access to manage Privileged Identity Management, see [Grant access to other administrators to manage Privileged Identity Management](pim-how-to-give-access-to-pim.md).
- 1. Open **Azure AD Privileged Identity Management**. 1. Select **Azure AD roles**.
Follow these steps to update or remove an existing role assignment. **Azure AD P
1. Select **Update** or **Remove** to update or remove the role assignment.
-# [Previous version](#tab/previous)
-
-## Make a user eligible for a role
-
-Follow these steps to make a user eligible for an Azure AD admin role.
-
-1. Select **Roles** or **Members**.
-
- ![open Azure AD roles](./media/pim-how-to-add-role-to-user/pim-directory-roles.png)
-
-1. Select **Add member** to open **Add managed members**.
-
-1. Select **Select a role**, select a role you want to manage, and then select **Select**.
-
- ![Select a role](./media/pim-how-to-add-role-to-user/pim-select-a-role.png)
-
-1. Select **Select members**, select the users you want to assign to the role, and then select **Select**.
-
- ![Select a user or group to assign](./media/pim-how-to-add-role-to-user/pim-select-members.png)
-
-1. In **Add managed members**, select **OK** to add the user to the role.
-
-1. In the list of roles, select the role you just assigned to see the list of members.
-
- When the role is assigned, the user you selected will appear in the members list as **Eligible** for the role.
-
- ![User eligible for a role](./media/pim-how-to-add-role-to-user/pim-directory-role-eligible.png)
-
-1. Now that the user is eligible for the role, let them know that they can activate it according to the instructions in [Activate my Azure AD roles in Privileged Identity Management](pim-how-to-activate-role.md).
-
- Eligible administrators are asked to register for Azure AD Multi-Factor Authentication during activation. If a user cannot register for MFA, or is using a Microsoft account (such as @outlook.com), you need to make them permanent in all their roles.
-
-## Make a role assignment permanent
-
-By default, new users are only *eligible* for an Azure AD admin role. Follow these steps if you want to make a role assignment permanent.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Members**.
-
- ![List of members](./media/pim-how-to-add-role-to-user/pim-directory-role-list-members.png)
-
-1. Select an **Eligible** role that you want to make permanent.
-
-1. Select **More** and then select **Make perm**.
-
- ![Make role assignment permanent](./media/pim-how-to-add-role-to-user/pim-make-perm.png)
-
- The role is now listed as **permanent**.
-
- ![List of members with permanent change](./media/pim-how-to-add-role-to-user/pim-directory-role-list-members-permanent.png)
-
-## Remove a user from a role
-
-You can remove users from role assignments, but make sure there is always at least one user who is a permanent Global Administrator. If you're not sure which users still need their role assignments, you can [start an access review for the role](pim-how-to-start-security-review.md).
-
-Follow these steps to remove a specific user from an Azure AD admin role.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Members**.
-
- ![List of members](./media/pim-how-to-add-role-to-user/pim-directory-role-list-members.png)
-
-1. Select a role assignment you want to remove.
-
-1. Select **More** and then select **Remove**.
-
- ![Remove a role](./media/pim-how-to-add-role-to-user/pim-remove-role.png)
-
-1. In the message that asks you to confirm, select **Yes**.
-
- ![Confirm the removal](./media/pim-how-to-add-role-to-user/pim-remove-role-confirm.png)
-
- The role assignment is removed.
-
-## Authorization error when assigning roles
-
-If you recently enabled Privileged Identity Management for a subscription and you get an authorization error when you try to make a user eligible for an Azure AD admin role, it might be because the MS-PIM service principal does not yet have the appropriate permissions. The MS-PIM service principal must have the [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role to assign roles to others. Instead of waiting until MS-PIM is assigned the User Access Administrator role, you can assign it manually.
-
-Follow these steps to assign the User Access Administrator role to the MS-PIM service principal for a subscription.
-
-1. Sign into the Azure portal as a Global Administrator.
-
-1. Choose **All services** and then **Subscriptions**.
-
-1. Choose your subscription.
-
-1. Choose **Access control (IAM)**.
-
-1. Choose **Role assignments** to see the current list of role assignments at the subscription scope.
-
- ![Access control (IAM) blade for a subscription](./media/pim-how-to-add-role-to-user/ms-pim-access-control.png)
-
-1. Check whether the **MS-PIM** service principal is assigned the **User Access Administrator** role.
-
-1. If not, choose **Add role assignment** to open the **Add role assignment** pane.
-
-1. In the **Role** drop-down list, select the **User Access Administrator** role.
-
-1. In the **Select** list, find and select the **MS-PIM** service principal.
-
- ![Add role assignment pane - Add permissions for MS-PIM service principal](./media/pim-how-to-add-role-to-user/ms-pim-add-permissions.png)
-
-1. Choose **Save** to assign the role.
-
- After a few moments, the MS-PIM service principal is assigned the User Access Administrator role at the subscription scope.
-
- ![Access control page showing User access admin role assignment for the MS-PIM service principal](./media/pim-how-to-add-role-to-user/ms-pim-user-access-administrator.png)
-
-
- ## Next steps - [Configure Azure AD admin role settings in Privileged Identity Management](pim-how-to-change-default-settings.md)
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
Previously updated : 02/28/2020 Last updated : 06/03/2021
A Privileged role administrator can customize Privileged Identity Management (PIM) in their Azure Active Directory (Azure AD) organization, including changing the experience for a user who is activating an eligible role assignment.
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure resource roles. This update adds features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, the procedures you follow in this article depend on the version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version you have, and then select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user who is in the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-Follow the steps in this article to configure settings for Azure AD roles.
-
-# [New version](#tab/new)
- ## Open role settings Follow these steps to open the settings for an Azure AD role.
If setting multiple approvers, approval completes as soon as one of them approve
1. Once you have specified all your role settings, select **Update** to save your changes.
-# [Previous version](#tab/previous)
-
-## Open role settings (previous version)
-
-Follow these steps to open the settings for an Azure AD role.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Settings**.
-
- ![Azure AD roles - Settings](./media/pim-how-to-change-default-settings/pim-directory-roles-settings.png)
-
-1. Select **Roles**.
-
-1. Select the role whose settings you want to configure.
-
- ![Azure AD roles - Settings Roles](./media/pim-how-to-change-default-settings/pim-directory-roles-settings-role.png)
-
- On the settings page for each role, there are several settings you can configure. These settings affect only users who have **eligible** assignments, not **permanent** assignments.
-
-## Activations
-
-Use the **Activations** slider to set the maximum time, in hours, that a role stays active before it expires. This value can be between 1 and 72 hours.
-
-## Notifications
-
-Use the **Notifications** switch to specify whether administrators will receive email notifications when roles are activated. This notification can be useful for detecting unauthorized or illegitimate activations.
-
-When set to **Enable**, notifications are sent to:
-- Privileged role administrator
-- Security administrator
-- Global administrator
-
-For more information, see [Email notifications in Privileged Identity Management](pim-email-notifications.md).
-
-## Incident/Request ticket
-
-Use the **Incident/Request ticket** switch to require eligible administrators to include a ticket number when they activate their role. This practice can make role access audits more effective.
-
-## Multi-Factor Authentication
-
-Use the **Multi-Factor Authentication** switch to specify whether to require users to verify their identity with MFA before they can activate their roles. They only have to verify their identity once per session, not every time they activate a role. There are two tips to keep in mind when you enable MFA:
-- Users who have Microsoft accounts for their email addresses (typically @outlook.com, but not always) cannot register for Azure AD Multi-Factor Authentication. If you want to assign roles to users with Microsoft accounts, you should either make them permanent admins or disable multi-factor authentication for that role.
-- You cannot disable Azure AD Multi-Factor Authentication for highly privileged roles for Azure AD and Microsoft 365. This safety feature helps protect the following roles:
-
- - Azure Information Protection administrator
- - Billing administrator
- - Cloud application administrator
- - Compliance administrator
- - Conditional access administrator
- - Dynamics 365 administrator
- - Customer LockBox access approver
- - Directory writers
- - Exchange administrator
- - Global administrator
- - Intune administrator
- - Power BI administrator
- - Privileged role administrator
- - Security administrator
- - SharePoint administrator
- - Skype for Business administrator
- - User administrator
-
-For more information, see [Multi-factor authentication and Privileged Identity Management](pim-how-to-require-mfa.md).
-
-## Require approval
-
-If you want to delegate the required approval to activate a role, follow these steps.
-
-1. Set the **Require approval** switch to **Enabled**. The pane expands with options to select approvers.
-
- ![Screenshot that shows the "Require approval" switch with "Enable" selected.](./media/pim-how-to-change-default-settings/pim-directory-roles-settings-require-approval.png)
-
- If you don't specify any approvers, the Privileged role administrator becomes the default approver and is then required to approve all activation requests for this role.
-
-1. To add approvers, click **Select approvers**.
-
- ![Azure AD roles - Settings - Require approval](./media/pim-how-to-change-default-settings/pim-directory-roles-settings-require-approval-select-approvers.png)
-
-1. Select one or more approvers in addition to the Privileged role administrator and then click **Select**. We recommend that you add at least two approvers. Even if you add yourself as an approver, you can't self-approve a role activation. Your selections will appear in the list of selected approvers.
-
-1. After you have specified all your role settings, select **Save** to save your changes.
--- ## Next steps - [Assign Azure AD roles in Privileged Identity Management](pim-how-to-add-role-to-user.md)
active-directory Pim How To Configure Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md
Previously updated : 03/05/2020 Last updated : 06/03/2021
Privileged Identity Management (PIM) generates alerts when there is suspicious or unsafe activity in your Azure Active Directory (Azure AD) organization. When an alert is triggered, it shows up on the Privileged Identity Management dashboard. Select the alert to see a report that lists the users or roles that triggered the alert.
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure resource roles. This update adds features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, the procedures you follow in this article depend on the version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version you have, and then select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user who is in the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Select Azure AD > Privileged Identity Management.](media/pim-how-to-add-role-to-user/pim-new-version.png)](media/pim-how-to-add-role-to-user/pim-new-version.png#lightbox)
-
-Follow the steps in this article to investigate security alerts for Azure AD roles.
-
-# [New version](#tab/new)
- ![Screenshot that shows the "Alerts" page with a list of alerts and their severity.](./media/pim-how-to-configure-security-alerts/view-alerts.png) ## Security alerts
Customize settings on the different alerts to work with your environment and sec
![Setting page for an alert to enable and configure settings](media/pim-how-to-configure-security-alerts/security-alert-settings.png)
-# [Previous version](#tab/previous)
-
-![Azure AD roles - Alert pane listing alert and the severity](./media/pim-how-to-configure-security-alerts/pim-directory-alerts.png)
-
-## Security alert details
-
-This section lists all the security alerts for Azure AD roles, along with how to fix them and how to prevent them. Severity has the following meanings:
-- **High**: Requires immediate action because of a policy violation.
-- **Medium**: Does not require immediate action but signals a potential policy violation.
-- **Low**: Does not require immediate action but suggests a preferable policy change.
-
-### Administrators aren't using their privileged roles
-
-Severity: **Low**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Assigning users privileged roles they don't need increases the chance of an attack. It is also easier for attackers to remain unnoticed in accounts that are not actively being used. |
-| **How to fix?** | Review the users in the list and remove them from privileged roles that they do not need. |
-| **Prevention** | Assign privileged roles only to users who have a business justification. </br>Schedule regular [access reviews](pim-how-to-start-security-review.md) to verify that users still need their access. |
-| **In-portal mitigation action** | Removes the account from their privileged role. |
-| **Trigger** | Triggered if a user goes over a specified number of days without activating a role. |
-| **Number of days** | This setting specifies the maximum number of days, from 0 to 100, that a user can go without activating a role.|
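The inactivity trigger described above reduces to a simple elapsed-time comparison. As an illustrative sketch only (the function name and values are hypothetical, not PIM's actual implementation):

```python
from datetime import date, timedelta

def inactivity_alert(last_activation: date, today: date, max_days: int) -> bool:
    """Alert when a user has gone more than max_days without activating a role."""
    return (today - last_activation) > timedelta(days=max_days)

# A user who last activated 45 days ago trips a 30-day threshold.
inactivity_alert(date(2021, 5, 1), date(2021, 6, 15), 30)  # True
```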
-
-### Roles don't require multi-factor authentication for activation
-
-Severity: **Low**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Without multi-factor authentication, compromised users can activate privileged roles. |
-| **How to fix?** | Review the list of roles and [require multi-factor authentication](pim-how-to-change-default-settings.md) for every role. |
-| **Prevention** | [Require MFA](pim-how-to-change-default-settings.md) for every role. |
-| **In-portal mitigation action** | Makes multi-factor authentication required for activation of the privileged role. |
-
-### The organization doesn't have Azure AD Premium P2
-
-Severity: **Low**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | The current Azure AD organization does not have Azure AD Premium P2. |
-| **How to fix?** | Review information about [Azure AD editions](../fundamentals/active-directory-whatis.md). Upgrade to Azure AD Premium P2. |
-
-### Potential stale accounts in a privileged role
-
-Severity: **Medium**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Accounts in a privileged role have not changed their password in the past 90 days. These accounts might be service or shared accounts that aren't being maintained and are vulnerable to attackers. |
-| **How to fix?** | Review the accounts in the list. If they no longer need access, remove them from their privileged roles. |
-| **Prevention** | Ensure that shared accounts use strong passwords and rotate them whenever the set of users who know the password changes. </br>Regularly review accounts with privileged roles using [access reviews](pim-how-to-start-security-review.md) and remove role assignments that are no longer needed. |
-| **In-portal mitigation action** | Removes the account from their privileged role. |
-| **Best practices** | Shared, service, and emergency access accounts that authenticate using a password and are assigned to highly privileged administrative roles such as Global administrator or Security administrator should have their passwords rotated for the following cases:<ul><li>After a security incident involving misuse or compromise of administrative access rights</li><li>After any user's privileges are changed so that they are no longer an administrator (for example, after an employee who was an administrator leaves IT or leaves the organization)</li><li>At regular intervals (for example, quarterly or yearly), even if there was no known breach or change to IT staffing</li></ul>Since multiple people have access to these accounts' credentials, the credentials should be rotated to ensure that people that have left their roles can no longer access the accounts. [Learn more](../roles/security-planning.md) |
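The stale-account check amounts to comparing password age against the 90-day window mentioned above. A minimal sketch, with hypothetical names (not the service's implementation):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # the 90-day window the alert describes

def is_potentially_stale(last_password_change: date, today: date) -> bool:
    """Flag an account whose password hasn't changed in the past 90 days."""
    return (today - last_password_change) >= STALE_AFTER
```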
-
-### Roles are being assigned outside of Privileged Identity Management
-
-Severity: **High**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Privileged role assignments made outside of Privileged Identity Management are not properly monitored and may indicate an active attack. |
-| **How to fix?** | Review the users in the list and remove them from privileged roles assigned outside of Privileged Identity Management. |
-| **Prevention** | Investigate where users are being assigned privileged roles outside of Privileged Identity Management and prohibit future assignments from there. |
-| **In-portal mitigation action** | Removes the user from their privileged role. |
-
-### There are too many global administrators
-
-Severity: **Low**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Global administrator is the highest privileged role. If a Global Administrator is compromised, the attacker gains access to all of their permissions, which puts your whole system at risk. |
-| **How to fix?** | Review the users in the list and remove any that do not absolutely need the Global administrator role. </br>Assign lower privileged roles to these users instead. |
-| **Prevention** | Assign users the least privileged role they need. |
-| **In-portal mitigation action** | Removes the account from their privileged role. |
-| **Trigger** | Triggered if two different criteria are met, and you can configure both of them. First, you need to reach a certain threshold of Global administrators. Second, a certain percentage of your total role assignments must be Global administrators. If you only meet one of these measurements, the alert does not appear. |
-| **Minimum number of Global Administrators** | This setting specifies the number of Global administrators, from 2 to 100, that you consider to be too few for your Azure AD organization. |
-| **Percentage of Global Administrators** | This setting specifies the minimum percentage of administrators who are Global administrators, from 0% to 100%, below which you do not want your Azure AD organization to dip. |
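Because this alert fires only when both configurable criteria are met, its trigger can be sketched as a conjunction. The function name and default thresholds below are illustrative assumptions:

```python
def too_many_global_admins(global_admins: int, total_assignments: int,
                           min_count: int = 5, min_percentage: float = 10.0) -> bool:
    """Alert only when BOTH the count threshold and the percentage threshold are met."""
    if total_assignments == 0:
        return False
    percentage = 100.0 * global_admins / total_assignments
    return global_admins >= min_count and percentage >= min_percentage

too_many_global_admins(6, 20)   # True: 6 >= 5 and 30% >= 10%
too_many_global_admins(6, 100)  # False: only 6% of assignments are Global administrators
```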
-
-### Roles are being activated too frequently
-
-Severity: **Low**
-
-| | Description |
-| | |
-| **Why do I get this alert?** | Multiple activations to the same privileged role by the same user is a sign of an attack. |
-| **How to fix?** | Review the users in the list and ensure that the [activation duration](pim-how-to-change-default-settings.md) for their privileged role is set long enough for them to perform their tasks. |
-| **Prevention** | Ensure that the [activation duration](pim-how-to-change-default-settings.md) for privileged roles is set long enough for users to perform their tasks.</br>[Require multi-factor authentication](pim-how-to-change-default-settings.md) for privileged roles that have accounts shared by multiple administrators. |
-| **In-portal mitigation action** | N/A |
-| **Trigger** | Triggered if a user activates the same privileged role multiple times within a specified period. You can configure both the time period and the number of activations. |
-| **Activation renewal timeframe** | This setting specifies, in days, hours, minutes, and seconds, the time period you want to use to track suspicious renewals. |
-| **Number of activation renewals** | This setting specifies the number of activations, from 2 to 100, at which you would like to be notified, within the timeframe you chose. You can change this setting by moving the slider, or typing a number in the text box. |
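The renewal trigger counts activations of the same role by the same user inside a sliding window. A simplified sketch of that counting (again an assumption, not the service's implementation):

```python
from datetime import datetime, timedelta

def too_frequent(activation_times, window, max_renewals):
    """True when max_renewals or more activations fall within any single window."""
    times = sorted(activation_times)
    for i, start in enumerate(times):
        # Count activations within `window` of this starting activation.
        if sum(1 for t in times[i:] if t - start <= window) >= max_renewals:
            return True
    return False
```

For example, three activations spread over 50 minutes trigger a one-hour window configured for 3 renewals, but not a ten-minute window.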
-
-## Configure security alert settings
-
-You can customize some of the security alerts in Privileged Identity Management to work with your organization's needs and security goals. Follow these steps to open the security alert settings:
-
-1. Open **Privileged Identity Management** in Azure AD.
-
-1. Select **Azure AD roles**.
-
-1. Select **Settings** and then **Alerts**.
-
- ![Azure AD roles - Settings with Alerts selected](./media/pim-how-to-configure-security-alerts/settings-alerts.png)
-
-1. Select an alert name to configure the setting for that alert.
-
- ![For the selected alert, security alert settings pane](./media/pim-how-to-configure-security-alerts/security-alert-settings.png)
--- ## Next steps - [Configure Azure AD role settings in Privileged Identity Management](pim-how-to-change-default-settings.md)
active-directory Pim How To Give Access To Pim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-give-access-to-pim.md
- Title: Grant access to manage PIM - Azure Active Directory | Microsoft Docs
-description: Learn how to grant access to other administrators to manage Azure AD Privileged Identity Management (PIM).
-Previously updated : 08/06/2020
-# Delegate access to Privileged Identity Management
-
-To delegate access to Privileged Identity Management (PIM), a Global Administrator can assign other users to the **Privileged Role Administrator** role. By default, Security administrators and Security readers have read-only access to Privileged Identity Management. The Privileged Role Administrator role is required for managing Azure AD roles only; it isn't required to manage settings for Azure resources.
-
-> [!NOTE]
-> Managing Privileged Identity Management requires Azure AD Multi-Factor Authentication. Because Microsoft accounts can't register for Azure AD Multi-Factor Authentication, a user who signs in with a Microsoft account can't access Privileged Identity Management.
-
-Make sure there are always at least two users in a Privileged Role Administrator role, in case one user is locked out or their account is deleted.
-
-## Delegate access to manage PIM
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In Azure AD, open **Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Roles**.
-
- ![Privileged Identity Management Azure AD roles - Roles](./media/pim-how-to-give-access-to-pim/pim-directory-roles-roles.png)
-
-1. Select the **Privileged Role Administrator** role to open the members page.
-
- ![Privileged Role Administrator - Members](./media/pim-how-to-give-access-to-pim/pim-pra-members.png)
-
-1. Select **Add member** to open the **Add managed members** pane.
-
-1. Select **Select members** to open the **Select members** pane.
-
- ![Privileged Role Administrator - Select members](./media/pim-how-to-give-access-to-pim/pim-pra-select-members.png)
-
-1. Select a member and then click **Select**.
-
-1. Select **OK** to make the member eligible for the **Privileged Role Administrator** role.
-
- When you assign a new role to someone in Privileged Identity Management, they are automatically configured as **Eligible** to activate the role.
-
-1. To make the member permanent, select the user in the Privileged Role Administrator member list.
-
-1. Select **More** and then **Make permanent** to make the assignment permanent.
-
- ![Privileged Role Administrator - Make permanent](./media/pim-how-to-give-access-to-pim/pim-pra-make-permanent.png)
-
-1. Send the user a link to [Start using Privileged Identity Management](pim-getting-started.md).
-
-## Remove access to manage PIM
-
-Before you remove someone from the Privileged Role Administrator role, always make sure there will still be at least two users assigned to it.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Roles**.
-
-1. Select the **Privileged Role Administrator** role to open the members page.
-
-1. Select the checkbox next to the user you want to remove and then select **Remove member**.
-
- ![Privileged Role Administrator - Remove member](./media/pim-how-to-give-access-to-pim/pim-pra-remove-member.png)
-
-1. When you are asked to confirm that you want to remove the member from the role, select **Yes**.
-
-## Next steps
--- [Start using Privileged Identity Management](pim-getting-started.md)
active-directory Pim How To Use Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md
Previously updated : 01/07/2019 Last updated : 06/03/2021
You can use the Privileged Identity Management (PIM) audit history to see all role assignments and activations within the past 30 days for all privileged roles. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). If you want to see the full audit history of activity in your Azure Active Directory (Azure AD) organization, including administrator, end user, and synchronization activity, you can use the [Azure Active Directory security and activity reports](../reports-monitoring/overview-reports.md).
-## Determine your version of PIM
-
-Beginning in November 2019, the Azure AD roles portion of Privileged Identity Management is being updated to a new version that matches the experiences for Azure resource roles. This update adds features as well as [changes to the existing API](azure-ad-roles-features.md#api-changes). While the new version is being rolled out, the procedures you follow in this article depend on the version of Privileged Identity Management you currently have. Follow the steps in this section to determine which version you have, and then select the procedures in this article that match that version.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a user who is in the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-1. Open **Azure AD Privileged Identity Management**. If you have a banner on the top of the overview page, follow the instructions in the **New version** tab of this article. Otherwise, follow the instructions in the **Previous version** tab.
-
- [![Screenshot that shows the "Azure AD roles - Directory roles audit history" page.](media/pim-how-to-use-audit-log/directory-roles-audit-history.png "Select the tab for your version")](media/pim-how-to-use-audit-log/directory-roles-audit-history.png)
-
-# [New version](#tab/new)
- Follow these steps to view the audit history for Azure AD roles. ## View resource audit history
My audit enables you to view your personal role activity.
![Audit list for the current user](media/azure-pim-resource-rbac/my-audit-time.png)
-# [Previous version](#tab/previous)
-
-## View audit history
-
-Follow these steps to view the audit history for Azure AD roles.
-
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) role.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure AD roles**.
-
-1. Select **Directory roles audit history**.
-
- Depending on your audit history, a column chart is displayed along with the total activations, max activations per day, and average activations per day.
-
- [![Azure AD roles new version](media/pim-how-to-use-audit-log/directory-roles-audit-history.png "View directory roles audit history")](media/pim-how-to-use-audit-log/directory-roles-audit-history.png)
-
- At the bottom of the page, a table is displayed with information about each action in the available audit history. The columns have the following meanings:
-
- | Column | Description |
- | | |
- | Time | When the action occurred. |
- | Requestor | User who requested the role activation or change. If the value is **Azure System**, check the Azure audit history for more information. |
- | Action | Actions taken by the requestor. Actions can include Assign, Unassign, Activate, Deactivate, or AddedOutsidePIM. |
- | Member | User who is activating or assigned to a role. |
- | Role | Role assigned or activated by the user. |
- | Reasoning | Text that was entered into the reason field during activation. |
- | Expiration | When an activated role expires. Applies only to eligible role assignments. |
-
-1. To sort the audit history, click the **Time**, **Action**, and **Role** buttons.
-
-## Filter audit history
-
-1. At the top of the audit history page, click the **Filter** button.
-
- The **Update chart parameters** pane appears.
-
-1. In **Time range**, select a time range.
-
-1. In **Roles**, select the checkboxes to indicate the roles you want to view.
-
- ![Update chart parameters pane](media/pim-how-to-use-audit-log/update-chart-parameters.png)
-
-1. Select **Done** to view the filtered audit history.
-
-## Get reason, approver, and ticket number for approval events
-
-1. Sign in to the [Azure portal](https://aad.portal.azure.com) with Privileged Role administrator role permissions, and open Azure AD.
-1. Select **Audit logs**.
-1. Use the **Service** filter to display only audit events for the Privileged identity Management service. On the **Audit logs** page, you can:
-
- - See the reason for an audit event in the **Status reason** column.
- - See the approver in the **Initiated by (actor)** column for the "add member to role request approved" event.
-
- [![Screenshot that shows the "audit logs" page with the "Initiated by (actor) menu open and "PIM" selected.](media/pim-how-to-use-audit-log/filter-audit-logs.png "Filter the audit log for the PIM service")](media/pim-how-to-use-audit-log/filter-audit-logs.png)
-
-1. Select an audit log event to see the ticket number on the **Activity** tab of the **Details** pane.
-
- [![Screenshot that shows the "Audit logs" page with the ticket number highlighted in the "Details" pane.](media/pim-how-to-use-audit-log/audit-event-ticket-number.png "Check the ticket number for the audit event")](media/pim-how-to-use-audit-log/audit-event-ticket-number.png)
-
-1. You can view the requester (person activating the role) on the **Targets** tab of the **Details** pane for an audit event. There are two target types for Azure AD roles:
-
- - The role (**Type** = Role)
- - The requester (**Type** = User)
-
-Typically, the audit log event immediately above the approval event is an event for "Add member to role completed" where the **Initiated by (actor)** is the requester. In most cases, you won't need to find the requester in the approval request from an auditing perspective.
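The PIM audit events described above can also be retrieved programmatically through Microsoft Graph. The sketch below is illustrative: the `az rest` command and the `auditLogs/directoryAudits` endpoint are real, but verify the exact `$filter` clause against the Graph documentation before relying on it.

```shell
# Compute the start of the default 30-day PIM retention window:
WINDOW_START=$(python3 -c 'import datetime; print((datetime.datetime.utcnow() - datetime.timedelta(days=30)).strftime("%Y-%m-%dT00:00:00Z"))')
echo "$WINDOW_START"

# Query Azure AD audit events logged by the PIM service (requires `az login`;
# the $filter syntax is an assumption to check against the Graph docs):
# az rest --method get \
#   --url "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?\$filter=loggedByService eq 'PIM' and activityDateTime ge $WINDOW_START"
```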
--- ## Next steps - [View activity and audit history for Azure resource roles in Privileged Identity Management](azure-pim-resource-rbac.md)
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
na Previously updated : 05/11/2020 Last updated : 06/15/2021
Azure Active Directory (Azure AD) Privileged Identity Management (PIM) can manag
> [!NOTE] > Users or members of a group assigned to the Owner or User Access Administrator subscription roles, and Azure AD Global administrators that enable subscription management in Azure AD have Resource administrator permissions by default. These administrators can assign roles, configure role settings, and review access using Privileged Identity Management for Azure resources. A user can't manage Privileged Identity Management for Resources without Resource administrator permissions. View the list of [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+Privileged Identity Management supports both built-in and custom Azure roles. For more information on Azure custom roles, see [Azure custom roles](../../role-based-access-control/custom-roles.md).
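As a sketch of what a custom Azure role looks like, the definition below is purely illustrative (the role name, actions, and subscription ID are placeholders); the commented `az role definition create` command is the standard way to submit such a definition.

```shell
# Write an illustrative custom role definition (all values are placeholders):
cat > custom-role.json <<'EOF'
{
  "Name": "Virtual Machine Operator (custom)",
  "Description": "Can start and restart virtual machines.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
EOF

# Validate the JSON locally before submitting it:
python3 -m json.tool custom-role.json > /dev/null && echo "valid"

# Submit against a live subscription (requires `az login`):
# az role definition create --role-definition @custom-role.json
```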
+ ## Role assignment conditions You can use the Azure attribute-based access control (Azure ABAC) preview to place resource conditions on eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using Azure ABAC conditions in PIM enables you not only to limit a userΓÇÖs role permissions to a resource using fine-grained conditions, but also to use PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
Follow these steps to make a user eligible for an Azure resource role.
1. Sign in to [Azure portal](https://portal.azure.com/) with Owner or User Access Administrator role permissions.
- For information about how to grant another administrator access to manage Privileged Identity Management, see [Grant access to other administrators to manage Privileged Identity Management](pim-how-to-give-access-to-pim.md).
- 1. Open **Azure AD Privileged Identity Management**. 1. Select **Azure resources**.
active-directory Pim Resource Roles External Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-external-users.md
- Title: Assign Azure resource roles to guests in PIM - Azure AD | Microsoft Docs
-description: Learn how to invite external guest users and assign Azure resource roles in Azure AD Privileged Identity Management (PIM).
------- Previously updated : 11/08/2019-----
-# Invite guest users and assign Azure resource roles in Privileged Identity Management
-
-Azure Active Directory (Azure AD) guest users are part of the business-to-business (B2B) collaboration capabilities within Azure AD so that you can manage external guest users and vendors as guests in Azure AD. When you combine B2B collaboration with Azure AD Privileged Identity Management (PIM), you can extend your compliance and governance requirements to guests. For example, you can use these Privileged Identity Management features for Azure identity tasks with guests:
--- Assign access to specific Azure resources-- Enable just-in-time access-- Specify assignment duration and end date-- Require multi-factor authentication on active assignment or activation-- Perform access reviews-- Utilize alerts and audit logs-
-This article describes how to invite a guest to your organization and manage their access to Azure resources using Privileged Identity Management.
-
-## When would you invite guests?
-
-Here are a couple examples of when you might invite guests to your organization:
--- Allow an external self-employed vendor that only has an email account to access your Azure resources for a project.-- Allow an external partner in a large organization that uses on-premises Active Directory Federation Services to access your expense application.-- Allow support engineers not in your organization (such as Microsoft support) to temporarily access your Azure resource to troubleshoot issues.-
-## How does collaboration using B2B guests work?
-
-When you use B2B collaboration, you can invite an external user to your organization as a guest. The guest can be managed as a user in your organization, but a guest has to be authenticated in their home organization and not in your Azure AD organization. This means that if the guest no longer has access to their home organization, they also lose access to your organization. For example, if the guest leaves their organization, they automatically lose access to any resources you shared with them in Azure AD without you having to do anything. For more information about B2B collaboration, see [What is guest user access in Azure Active Directory B2B?](../external-identities/what-is-b2b.md).
-
-![Diagram showing how a guest user is authenticated in their home directory](./media/pim-resource-roles-external-users/b2b-external-user.png)
-
-## Check guest collaboration settings
-
-To make sure you can invite guests into your organization, you should check your guest collaboration settings.
-
-1. Sign in to [Azure portal](https://portal.azure.com/).
-
-1. Select **Azure Active Directory** > **User settings**.
-
-1. Select **Manage external collaboration settings**.
-
- ![External collaboration settings page showing permission, invite, and collaboration restriction settings](./media/pim-resource-roles-external-users/external-collaboration-settings.png)
-
-1. Ensure that the **Admins and users in the guest inviter role can invite** switch is set to **Yes**.
-
-## Invite a guest and assign a role
-
-Using Privileged Identity Management, you can invite a guest and make them eligible for an Azure resource role.
-
-1. Sign in to [Azure portal](https://portal.azure.com/) with a user that is a member of the [Privileged Role Administrator](../roles/permissions-reference.md#privileged-role-administrator) or [User Administrator](../roles/permissions-reference.md#user-administrator) role.
-
-1. Open **Azure AD Privileged Identity Management**.
-
-1. Select **Azure resources**.
-
-1. Use the **Resource filter** to filter the list of managed resources.
-
-1. Select the resource you want to manage, such as a resource, resource group, subscription, or management group.
-
- You should set the scope to only what the guest needs.
-
-1. Under Manage, select **Roles** to see the list of roles for Azure resources.
-
- ![Azure resources roles list showing number of users that are active and eligible](./media/pim-resource-roles-external-users/resources-roles.png)
-
-1. Select the minimum role that the user will need.
-
- ![Selected role page listing the current members of that role](./media/pim-resource-roles-external-users/selected-role.png)
-
-1. On the role page, select **Add member** to open the New assignment pane.
-
-1. Click **Select a member or group**.
-
- ![New assignment - Select a member or group pane listing users and groups along with an Invite option](./media/pim-resource-roles-external-users/select-member-group.png)
-
-1. To invite a guest, click **Invite**.
-
- ![Invite a guest page with boxes to enter an email address and specify a personal message](./media/pim-resource-roles-external-users/invite-guest.png)
-
-1. After you have selected a guest, click **Invite**.
-
- The guest should be added as a selected member.
-
-1. In the **Select a member or group** pane, click **Select**.
-
-1. In the **Membership settings** pane, select the assignment type and duration.
-
- ![New assignment - Membership settings page with options to specify assignment type, start date, and end date](./media/pim-resource-roles-external-users/membership-settings.png)
-
-1. To complete the assignment, select **Done** and then **Add**.
-
- The guest role assignment will appear in your role list.
-
- ![Role page listing the guest as eligible](./media/pim-resource-roles-external-users/role-assignment.png)
-
-## Activate role as a guest
-
-If you are an external user, you must accept the invite to be a guest in the Azure AD organization and possibly activate your role assignment.
-
-1. Open the email with your invitation. The email will look similar to the following.
-
- ![Email invite with directory name, personal message, and a Get Started link](./media/pim-resource-roles-external-users/email-invite.png)
-
-1. Select the **Get Started** link in the email.
-
-1. After reviewing the permissions, click **Accept**.
-
- ![Review permissions page in a browser with a list of permissions that the organization would like you to review](./media/pim-resource-roles-external-users/invite-accept.png)
-
-1. You might be asked to accept a terms of use and specify whether you want to stay signed in. In the Azure portal, if you are *eligible* for a role, you won't yet have access to resources.
-
-1. To activate your role assignment, open the email with your activate role link. The email will look similar to the following.
-
- ![Email indicating that you eligible for a role with an Activate role link](./media/pim-resource-roles-external-users/email-role-assignment.png)
-
-1. Select **Activate role** to open your eligible roles in Privileged Identity Management.
-
- ![My roles page in Privileged Identity Management listing your eligible roles](./media/pim-resource-roles-external-users/my-roles-eligible.png)
-
-1. Under Action, select the **Activate** link.
-
- Depending on the role settings, you'll need to specify some information to activate the role.
-
-1. Once you have specified the settings for the role, click **Activate** to activate the role.
-
- ![Activate page listing scope and options to specify the start time, duration, and reason](./media/pim-resource-roles-external-users/activate-role.png)
-
- Unless the administrator is required to approve your request, you should have access to the specified resources.
-
-## View activity for a guest
-
-You can view audit logs to keep track of what guests are doing.
-
-1. As an administrator, open Privileged Identity Management and select the resource that has been shared.
-
-1. Select **Resource audit** to view the activity for that resource. The following shows an example of the activity for a resource group.
-
- ![Azure resources - Resource audit page listing the time, requestor, and action](./media/pim-resource-roles-external-users/audit-resource.png)
-
-1. To view the activity for the guest, select **Azure Active Directory** > **Users** > *guest name*.
-
-1. Select **Audit logs** to see the audit logs for the organization. If necessary, you can specify filters.
-
- ![Directory audit logs listing date, target, initiated by, and activity](./media/pim-resource-roles-external-users/audit-directory.png)
-
-## Next steps
--- [Assign Azure AD admin roles in Privileged Identity Management](pim-how-to-add-role-to-user.md)-- [What is guest user access in Azure AD B2B collaboration?](../external-identities/what-is-b2b.md)
active-directory Pim Security Wizard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-security-wizard.md
Previously updated : 09/01/2020 Last updated : 06/03/2021
Also, keep role assignments permanent if a user has a Microsoft account (in othe
## Next steps - [Assign Azure AD roles in Privileged Identity Management](pim-how-to-add-role-to-user.md)-- [Grant access to other administrators to manage Privileged Identity Management](pim-how-to-give-access-to-pim.md)
active-directory Powershell For Azure Ad Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/powershell-for-azure-ad-roles.md
Set-AzureADMSPrivilegedRoleSetting -ProviderId 'aadRoles' -Id 'ff518d09-47f5-45a
## Next steps -- [Assign an Azure AD custom role](azure-ad-custom-roles-assign.md)-- [Remove or update an Azure AD custom role assignment](azure-ad-custom-roles-update-remove.md)-- [Configure an Azure AD custom role assignment](azure-ad-custom-roles-configure.md) - [Role definitions in Azure AD](../roles/permissions-reference.md)
active-directory Security Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/security-planning.md
Stage 1 of the roadmap is focused on critical tasks that are fast and easy to im
### General preparation
-#### Turn on Azure AD Privileged Identity Management
+#### Use Azure AD Privileged Identity Management
-We recommend that you turn on Azure AD Privileged Identity Management (PIM) in your Azure AD production environment. After you turn on PIM, you'll receive notification email messages for privileged access role changes. Notifications provide early warning when additional users are added to highly privileged roles.
+We recommend that you start using Azure AD Privileged Identity Management (PIM) in your Azure AD production environment. After you start using PIM, you'll receive notification email messages for privileged access role changes. Notifications provide early warning when additional users are added to highly privileged roles.
Azure AD Privileged Identity Management is included in Azure AD Premium P2 or EMS E5. To help you protect access to applications and resources on-premises and in the cloud, sign up for the [Enterprise Mobility + Security free 90-day trial](https://www.microsoft.com/cloud-platform/enterprise-mobility-security-trial). Azure AD Privileged Identity Management and Azure AD Identity Protection monitor security activity using Azure AD reporting, auditing, and alerts.
-After you turn on Azure AD Privileged Identity Management:
+After you start using Azure AD Privileged Identity Management:
1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that is a Global Administrator of your Azure AD production organization.
Make sure the first person to use PIM in your organization is assigned to the **
#### Identify and categorize accounts that are in highly privileged roles
-After turning on Azure AD Privileged Identity Management, view the users who are in the following Azure AD roles:
+After starting to use Azure AD Privileged Identity Management, view the users who are in the following Azure AD roles:
* Global Administrator * Privileged Role Administrator
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/quotas-skus-regions.md
Each node in an AKS cluster contains a fixed amount of compute resources such as
- Standard_B1ms - Standard_F1 - Standard_F1s
+- Standard_A2
+- Standard_D1
+- Standard_D1_v2
+- Standard_DS1
+- Standard_DS1_v2
For more information on VM types and their compute resources, see [Sizes for virtual machines in Azure][vm-skus].
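Automation that creates node pools can guard against these restricted sizes up front. A minimal sketch using the sizes listed above (the requested size is only an example):

```shell
# Sizes not supported for AKS node pools (from the list above):
RESTRICTED="Standard_B1ms Standard_F1 Standard_F1s Standard_A2 Standard_D1 Standard_D1_v2 Standard_DS1 Standard_DS1_v2"

# Example requested size:
SIZE="Standard_DS2_v2"

case " $RESTRICTED " in
  *" $SIZE "*) echo "blocked: $SIZE is not supported for AKS" ;;
  *)           echo "ok: $SIZE" ;;
esac
```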
api-management How To Deploy Self Hosted Gateway Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-arc.md
Deploying the API Management gateway on an Arc-enabled Kubernetes cluster expand
1. In your provisioned gateway resource, click **Deployment** from the side navigation menu. 1. Make note of the **Token** and **Configuration URL** values for the next step. 1. In Azure CLI, deploy the gateway extension using the `az k8s-extension create` command. Fill in the `token` and `configuration URL` values.
- * The following example uses the `service.Type='NodePort'` extension configuration. See more [available extension configurations](#available-extension-configurations).
+ * The following example uses the `service.type='LoadBalancer'` extension configuration. See more [available extension configurations](#available-extension-configurations).
```azurecli az k8s-extension create --cluster-type connectedClusters --cluster-name <cluster-name> \ --resource-group <rg-name> --name <extension-name> --extension-type Microsoft.ApiManagement.Gateway \ --scope namespace --target-namespace <namespace> \ --configuration-settings gateway.endpoint='<Configuration URL>' \
- --configuration-protected-settings gateway.authKey='<token>' --release-train preview
+ --configuration-protected-settings gateway.authKey='<token>' \
+ --configuration-settings service.type='LoadBalancer' --release-train preview
``` > [!TIP]
The following extension configurations are **required**.
| - | -- | | `gateway.endpoint` | The gateway endpoint's Configuration URL. | | `gateway.authKey` | Token for access to the gateway. |
-| `service.Type` | Kubernetes service configuration for the gateway: `LoadBalancer`, `NodePort`, or `ClusterIP`. |
+| `service.type` | Kubernetes service configuration for the gateway: `LoadBalancer`, `NodePort`, or `ClusterIP`. |
### Log Analytics settings
To enable monitoring of the self-hosted gateway, configure the following Log Ana
* To learn more about the self-hosted gateway, see [Azure API Management self-hosted gateway overview](self-hosted-gateway-overview.md). * Discover all [Azure Arc enabled Kubernetes extensions](../azure-arc/kubernetes/extensions.md).
-* Learn more about [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
+* Learn more about [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md).
api-management How To Deploy Self Hosted Gateway Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/how-to-deploy-self-hosted-gateway-azure-kubernetes-service.md
editor: ''
Previously updated : 05/25/2021 Last updated : 06/11/2021 # Deploy to Azure Kubernetes Service
-This article provides the steps for deploying self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/).
+This article provides the steps for deploying the self-hosted gateway component of Azure API Management to [Azure Kubernetes Service](https://azure.microsoft.com/services/kubernetes-service/). To deploy the self-hosted gateway to a Kubernetes cluster, see the [how-to article](how-to-deploy-self-hosted-gateway-kubernetes.md).
> [!NOTE] > You can also deploy self-hosted gateway to an [Azure Arc enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) as a [cluster extension](../azure-arc/kubernetes/extensions.md).
This article provides the steps for deploying self-hosted gateway component of A
1. Select **Gateways** from under **Deployment and infrastructure**. 2. Select the self-hosted gateway resource you intend to deploy. 3. Select **Deployment**.
-4. Note that a new token in the **Token** text box was autogenerated for you using the default **Expiry** and **Secret Key** values. Adjust either or both if desired and select **Generate** to create a new token.
+4. A new token in the **Token** text box was autogenerated for you using the default **Expiry** and **Secret Key** values. Adjust either or both if desired and select **Generate** to create a new token.
5. Make sure **Kubernetes** is selected under **Deployment scripts**. 6. Select **<gateway-name>.yml** file link next to **Deployment** to download the file.
-7. Adjust the port mappings and container name in the yml file as needed.
-8. Depending on your scenario, you might need to change the [service type](../aks/concepts-network.md#services). The default value is `NodePort`.
-9. Select the **copy** icon located at the right end of the **Deploy** text box to save the `kubectl` command to clipboard.
-10. Paste the command to the terminal (or command) window. Note that the command expects the downloaded environment file to be present in the current directory.
-```console
- kubectl apply -f <gateway-name>.yaml
-```
-11. Execute the command. The command instructs your AKS cluster to run the container, using self-hosted gateway's image downloaded from the Microsoft Container Registry, and to configure the container to expose HTTP (8080) and HTTPS (443) ports.
-12. Run the below command to check the gateway pod is running. Note that your pod name will be different.
-```console
-kubectl get pods
-NAME READY STATUS RESTARTS AGE
-contoso-apim-gateway-59f5fb94c-s9stz 1/1 Running 0 1m
-```
-13. Run the below command to check the gateway service is running. Note that your service name and IP addresses will be different.
-```console
-kubectl get services
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-contosogateway NodePort 10.110.230.87 <none> 80:32504/TCP,443:30043/TCP 1m
-```
-14. Go back to the Azure portal and confirm that gateway node you just deployed is reporting healthy status.
+7. Adjust the `config.service.endpoint`, port mappings, and container name in the .yml file as needed.
+8. Depending on your scenario, you might need to change the [service type](../aks/concepts-network.md#services).
+ * The default value is `LoadBalancer`, which uses an external load balancer.
+ * You can use the [internal load balancer](../aks/internal-lb.md) to restrict access to the self-hosted gateway to internal users only.
+ * The sample below uses `NodePort`.
+1. Select the **copy** icon located at the right end of the **Deploy** text box to save the `kubectl` command to clipboard.
+1. Paste the command to the terminal (or command) window. The command expects the downloaded environment file to be present in the current directory.
+ ```console
+ kubectl apply -f <gateway-name>.yaml
+ ```
+1. Execute the command. The command instructs your AKS cluster to:
+ * Run the container, using self-hosted gateway's image downloaded from the Microsoft Container Registry.
+ * Configure the container to expose HTTP (8080) and HTTPS (443) ports.
+1. Run the command below to check that the gateway pod is running. Your pod name will be different.
+ ```console
+ kubectl get pods
+ NAME READY STATUS RESTARTS AGE
+ contoso-apim-gateway-59f5fb94c-s9stz 1/1 Running 0 1m
+ ```
+1. Run the command below to check that the gateway service is running. Your service name and IP addresses will be different.
+ ```console
+ kubectl get services
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ contosogateway NodePort 10.110.230.87 <none> 80:32504/TCP,443:30043/TCP 1m
+ ```
+1. Return to the Azure portal and confirm that the gateway node you deployed is reporting a healthy status.
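With a `NodePort` service like the sample `kubectl get services` output above, the gateway is reached on a node's IP address plus the mapped node port. A minimal sketch of pulling the HTTPS node port out of the `PORT(S)` column (the parsing is illustrative):

```shell
# PORT(S) value from the sample `kubectl get services` output above:
PORTS="80:32504/TCP,443:30043/TCP"

# Extract the node port mapped to HTTPS (443):
HTTPS_NODEPORT=$(echo "$PORTS" | tr ',' '\n' | grep '^443:' | cut -d: -f2 | cut -d/ -f1)
echo "$HTTPS_NODEPORT"   # prints 30043
```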
> [!TIP] > Use <code>kubectl logs <gateway-pod-name></code> command to view a snapshot of self-hosted gateway log.
api-management Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/zone-redundancy.md
In the portal, optionally enable zone redundancy when you add a location to your
## Next steps * Learn more about [deploying an Azure API Management service instance to multiple Azure regions](api-management-howto-deploy-multi-region.md).
-* You can also enable zone redundancy using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-api-management-simple-zones).
+* You can also enable zone redundancy using an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.apimanagement/api-management-simple-zones).
* Learn more about [Azure services that support availability zones](../availability-zones/az-region.md). * Learn more about building for [reliability](/azure/architecture/framework/resiliency/overview) in Azure.
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-nodejs-mongodb-app.md
In this step, you connect your MEAN.js sample application to the Cosmos DB datab
### Retrieve the database key
-To connect to the Cosmos DB database, you need the database key. In the Cloud Shell, use the [`az cosmosdb list-keys`](/cli/azure/cosmosdb#az_cosmosdb_list_keys) command to retrieve the primary key.
+To connect to the Cosmos DB database, you need the database key. In the Cloud Shell, use the [`az cosmosdb keys list`](/cli/azure/cosmosdb#az_cosmosdb_keys_list) command to retrieve the primary key.
```azurecli-interactive
-az cosmosdb list-keys --name <cosmosdb-name> --resource-group myResourceGroup
+az cosmosdb keys list --name <cosmosdb-name> --resource-group myResourceGroup
``` The Azure CLI shows information similar to the following example:
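In scripts, you can extract just the primary key rather than handling the full JSON response. A hedged sketch: the `--query` flag and the `primaryMasterKey` property are standard Azure CLI usage, and the sample JSON below stands in for a live response (the key values are placeholders).

```shell
# Live equivalent (requires `az login`):
# PRIMARY_KEY=$(az cosmosdb keys list --name <cosmosdb-name> \
#   --resource-group myResourceGroup --query primaryMasterKey -o tsv)

# The same extraction against a sample response (key values are placeholders):
KEYS_JSON='{"primaryMasterKey":"samplePrimaryKey==","secondaryMasterKey":"sampleSecondaryKey=="}'
PRIMARY_KEY=$(echo "$KEYS_JSON" | python3 -c 'import sys, json; print(json.load(sys.stdin)["primaryMasterKey"])')
echo "$PRIMARY_KEY"
```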
Delta compression using up to 4 threads.
Compressing objects: 100% (5/5), done. Writing objects: 100% (5/5), 489 bytes | 0 bytes/s, done. Total 5 (delta 3), reused 0 (delta 0)
-remote: Updating branch 'main'.
+remote: Updating branch 'master'.
remote: Updating submodules. remote: Preparing deployment for commit id '6c7c716eee'. remote: Running custom deployment command...
remote: Handling node.js deployment.
. remote: Deployment successful. To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch]      main -> main
+ * [new branch]      master -> master
</pre> You may notice that the deployment process runs [Gulp](https://gulpjs.com/) after `npm install`. App Service does not run Gulp or Grunt tasks during deployment, so this sample repository has two additional files in its root directory to enable it:
In the local terminal window, commit your changes in Git, then push the code cha
```bash git commit -am "added article comment"
-git push azure main
+git push azure master
``` Once the `git push` is complete, navigate to your Azure app and try out the new functionality.
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
description: This article describes how to use Azure role-based access control (
keywords: automation rbac, role based access control, azure rbac Previously updated : 05/17/2020 Last updated : 06/15/2021 + # Manage role permissions and security Azure role-based access control (Azure RBAC) enables access management for Azure resources. Using [Azure RBAC](../role-based-access-control/overview.md), you can segregate duties within your team and grant only the amount of access to users, groups, and applications that they need to perform their jobs. You can grant role-based access to users using the Azure portal, Azure Command-Line tools, or Azure Management APIs.
In Azure Automation, access is granted by assigning the appropriate Azure role t
|: |: | | Owner |The Owner role allows access to all resources and actions within an Automation account including providing access to other users, groups, and applications to manage the Automation account. | | Contributor |The Contributor role allows you to manage everything except modifying other userΓÇÖs access permissions to an Automation account. |
-| Reader |The Reader role allows you to view all the resources in an Automation account but cannot make any changes. |
+| Reader |The Reader role allows you to view all the resources in an Automation account but can't make any changes. |
| Automation Operator |The Automation Operator role allows you to view runbook name and properties and to create and manage jobs for all runbooks in an Automation account. This role is helpful if you want to protect your Automation account resources like credentials assets and runbooks from being viewed or modified but still allow members of your organization to execute these runbooks. | |Automation Job Operator|The Automation Job Operator role allows you to create and manage jobs for all runbooks in an Automation account.| |Automation Runbook Operator|The Automation Runbook Operator role allows you to view a runbookΓÇÖs name and properties.|
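Granting one of the roles above from the command line follows the usual Azure RBAC pattern. A hedged sketch (the user, resource names, and subscription ID are placeholders; `Automation Operator` is one of the built-in roles described in this article):

```shell
# Build the Automation account resource ID to use as the assignment scope
# (subscription ID and resource names are placeholders):
SUB="00000000-0000-0000-0000-000000000000"
RG="myResourceGroup"
ACCOUNT="myAutomationAccount"
SCOPE="/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Automation/automationAccounts/$ACCOUNT"
echo "$SCOPE"

# Assign the role against a live subscription (requires `az login`):
# az role assignment create --assignee user@contoso.com \
#   --role "Automation Operator" --scope "$SCOPE"
```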
A Contributor can manage everything except access. The following table shows the
### Reader
-A Reader can view all the resources in an Automation account but cannot make any changes.
+A Reader can view all the resources in an Automation account but can't make any changes.
|**Actions** |**Description** | |||
The following table shows the permissions granted for the role:
### Automation Job Operator
-An Automation Job Operator role is granted at the Automation account scope. This allows the operator permissions to create and manage jobs for all runbooks in the account. If the Job Operator role is granted read permissions on the resource group containing the Automation account, members of the role have the ability to start runbooks. However, they do not have the ability to create, edit, or delete them.
+An Automation Job Operator role is granted at the Automation account scope. This allows the operator permissions to create and manage jobs for all runbooks in the account. If the Job Operator role is granted read permissions on the resource group containing the Automation account, members of the role have the ability to start runbooks. However, they don't have the ability to create, edit, or delete them.
The following table shows the permissions granted for the role:
The following sections describe the minimum required permissions needed for enab
|Create / edit saved search | Microsoft.OperationalInsights/workspaces/write | Workspace | |Create / edit scope config | Microsoft.OperationalInsights/workspaces/write | Workspace|
-## Update management permissions
-
-Update management reaches across multiple services to provide its service. The following table shows the permissions needed to manage update management deployments:
+## Custom Azure Automation Contributor role
+
+Microsoft intends to remove the Automation account rights from the Log Analytics Contributor role. Currently, the built-in [Log Analytics Contributor](#log-analytics-contributor) role described above can escalate privileges to the subscription [Contributor](./../role-based-access-control/built-in-roles.md#contributor) role. Since Automation account Run As accounts are initially configured with Contributor rights on the subscription, they can be used by an attacker to create new runbooks and execute code as a Contributor on the subscription.
+
+As a result of this security risk, we recommend you don't use the Log Analytics Contributor role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. Perform the following steps to create this custom role.
+
+### Create using the Azure portal
+
+Perform the following steps to create the Azure Automation custom role in the Azure portal. If you would like to learn more, see [Azure custom roles](./../role-based-access-control/custom-roles.md).
+
+1. Copy and paste the following JSON syntax into a file. Save the file on your local machine or in an Azure storage account. In the JSON file, replace the value for the **assignableScopes** property with the subscription GUID.
+
+ ```json
+ {
+ "properties": {
+ "roleName": "Automation account Contributor (custom)",
+ "description": "Allows access to manage Azure Automation and its resources",
+ "type": "CustomRole",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Insights/metrics/read",
+ "Microsoft.Insights/diagnosticSettings/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Automation/automationAccounts/*",
+ "Microsoft.Support/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "assignableScopes": [
+ "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX"
+ ]
+ }
+ }
+ ```
+
+1. Complete the remaining steps as outlined in [Create or update Azure custom roles using the Azure portal](./../role-based-access-control/custom-roles-portal.md#start-from-json). For [Step 3: Basics](./../role-based-access-control/custom-roles-portal.md#step-3-basics), note the following:
+
+ - In the **Custom role name** field, enter **Automation account Contributor (custom)** or a name matching your naming standards.
+ - For **Baseline permissions**, select **Start from JSON**. Then select the custom JSON file you saved earlier.
+
+1. Complete the remaining steps, and then review and create the custom role. It can take a few minutes for your custom role to appear everywhere.
+
+### Create using PowerShell
+
+Perform the following steps to create the Azure Automation custom role with PowerShell. If you would like to learn more, see [Azure custom roles](./../role-based-access-control/custom-roles.md).
+
+1. Copy and paste the following JSON syntax into a file. Save the file on your local machine or in an Azure storage account. In the JSON file, replace the value for the **AssignableScopes** property with the subscription GUID.
+
+ ```json
+ {
+ "Name": "Automation account Contributor (custom)",
+ "Id": "",
+ "IsCustom": true,
+ "Description": "Allows access to manage Azure Automation and its resources",
+ "Actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Insights/metrics/read",
+ "Microsoft.Insights/diagnosticSettings/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Automation/automationAccounts/*",
+ "Microsoft.Support/*"
+ ],
+ "NotActions": [],
+ "AssignableScopes": [
+ "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX"
+ ]
+ }
+ ```
+
+1. Complete the remaining steps as outlined in [Create or update Azure custom roles using Azure PowerShell](./../role-based-access-control/custom-roles-powershell.md#create-a-custom-role-with-json-template). It can take a few minutes for your custom role to appear everywhere.
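The two steps above can be sketched end to end. The following is a minimal sketch: the file name `automation-contributor.json` and the trimmed action list are illustrative, and the final `New-AzRoleDefinition` call assumes an authenticated Az PowerShell session, so it's shown as a comment here:

```shell
# Save a trimmed copy of the role definition (subscription GUID left as a placeholder).
cat > automation-contributor.json <<'EOF'
{
  "Name": "Automation account Contributor (custom)",
  "IsCustom": true,
  "Description": "Allows access to manage Azure Automation and its resources",
  "Actions": [
    "Microsoft.Authorization/*/read",
    "Microsoft.Automation/automationAccounts/*",
    "Microsoft.Support/*"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX"
  ]
}
EOF

# Sanity-check that the file parses as JSON before handing it to PowerShell.
python3 -m json.tool automation-contributor.json > /dev/null && echo "valid JSON"

# Then create the role (requires an authenticated Az PowerShell session; not run here):
#   New-AzRoleDefinition -InputFile ./automation-contributor.json
```

Validating the JSON locally first avoids a round-trip to Azure when the template has a stray comma or quote.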
+
+## Update Management permissions
+
+Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Azure Active Directory (Azure AD) tenant, or across tenants using Azure Lighthouse. The following table lists the permissions needed to manage update deployments.
|**Resource** |**Role** |**Scope** | ||||
-|Automation account |Log Analytics Contributor |Automation account |
+|Automation account |[Custom Azure Automation Contributor role](#custom-azure-automation-contributor-role) |Automation account |
|Automation account |Virtual Machine Contributor |Resource Group for the account | |Log Analytics workspace Log Analytics Contributor|Log Analytics workspace | |Log Analytics workspace |Log Analytics Reader|Subscription|
The following section shows you how to configure Azure RBAC on your Automation a
#### Remove a user
-You can remove the access permission for a user who is not managing the Automation account, or who no longer works for the organization. Following are the steps to remove a user:
+You can remove the access permission for a user who isn't managing the Automation account, or who no longer works for the organization. Following are the steps to remove a user:
1. From the Access control (IAM) page, select the user to remove and click **Remove**. 2. Click the **Remove** button in the assignment details pane.
In the preceding example, replace `sign-in ID of a user you wish to remove`, `Su
### User experience for Automation Operator role - Automation account
-When a user assigned to the Automation Operator role on the Automation account scope views the Automation account to which he/she is assigned, the user can only view the list of runbooks, runbook jobs, and schedules created in the Automation account. This user cannot view the definitions of these items. The user can start, stop, suspend, resume, or schedule the runbook job. However, the user does not have access to other Automation resources, such as configurations, hybrid worker groups, or DSC nodes.
+When a user assigned to the Automation Operator role on the Automation account scope views the Automation account to which they're assigned, the user can only view the list of runbooks, runbook jobs, and schedules created in the Automation account. This user can't view the definitions of these items. The user can start, stop, suspend, resume, or schedule the runbook job. However, the user doesn't have access to other Automation resources, such as configurations, Hybrid Runbook Worker groups, or DSC nodes.
![No access to resources](media/automation-role-based-access-control/automation-10-no-access-to-resources.png)
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-security-overview.md
Role-based access control is available with Azure Resource Manager to grant perm
If you have strict security controls for permission assignment in resource groups, you need to assign the Run As account membership to the **Contributor** role in the resource group.
+> [!NOTE]
+> We recommend you don't use the **Log Analytics Contributor** role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. For more information, see [Custom Azure Automation Contributor role](./automation-role-based-access-control.md#custom-azure-automation-contributor-role).
+ ## Runbook authentication with Hybrid Runbook Worker Runbooks running on a Hybrid Runbook Worker in your datacenter, or against computing services in other cloud environments like AWS, can't use the same method that is typically used for runbooks authenticating to Azure resources. This is because those resources are running outside of Azure and therefore require their own security credentials defined in Automation to authenticate to resources that they access locally. For more information about runbook authentication with runbook workers, see [Run runbooks on a Hybrid Runbook Worker](automation-hrw-run-runbooks.md).
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
You can send an email from a runbook with [SendGrid](https://sendgrid.com/soluti
* Azure subscription. If you don't have one yet, you can [activate your MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * [A SendGrid account](../sendgrid-dotnet-how-to-send-email.md#create-a-sendgrid-account).
+* Sender verification configured in SendGrid, using either [Domain or Single Sender](https://sendgrid.com/docs/for-developers/sending-email/sender-identity/) verification.
* [Automation account](./index.yml) with **Az** modules. * [Run As account](./automation-security-overview.md#run-as-accounts) to store and execute the runbook.
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
This page is updated monthly, so revisit it regularly.
## June 2021
+### Security update for Log Analytics Contributor role
+
+**Type:** Plan for change
+
+Microsoft intends to remove the Automation account rights from the Log Analytics Contributor role. Currently, the built-in [Log Analytics Contributor](./automation-role-based-access-control.md#log-analytics-contributor) role can escalate privileges to the subscription [Contributor](./../role-based-access-control/built-in-roles.md#contributor) role. Since Automation account Run As accounts are initially configured with Contributor rights on the subscription, they can be used by an attacker to create new runbooks and execute code as a Contributor on the subscription.
+
+As a result of this security risk, we recommend you don't use the Log Analytics Contributor role to execute Automation jobs. Instead, create the Azure Automation Contributor custom role and use it for actions related to the Automation account. For implementation steps, see [Custom Azure Automation Contributor role](./automation-role-based-access-control.md#custom-azure-automation-contributor-role).
+ ### Support for Automation and State Configuration available in West US 3 **Type:** New feature
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc enabled servers agent description: This article has release notes for Azure Arc enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 05/24/2021 Last updated : 06/16/2021 # What's new with Azure Arc enabled servers agent
The Azure Arc enabled servers Connected Machine agent receives improvements on a
- Known issues - Bug fixes
+## June 2021
+
+Version 1.7
+
+### New features
+
+- Improved reliability during onboarding:
+ - Improved retry logic when HIMDS is unavailable
+ - Onboarding will now continue instead of aborting if OS information cannot be obtained
+- Improved reliability when installing the OMS agent extension on Red Hat and CentOS systems
+ ## May 2021 Version 1.6
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
To deliver this experience with your hybrid machines hosted outside of Azure, th
## Supported scenarios When you connect your machine to Azure Arc enabled servers, it enables the ability to perform the following configuration management and monitoring tasks:- - Assign [Azure Policy guest configurations](../../governance/policy/concepts/guest-configuration.md) using the same experience as policy assignment for Azure virtual machines. Today, most Guest Configuration policies don't apply configurations; they only audit settings inside the machine. To understand the cost of using Azure Policy Guest Configuration policies with Arc enabled servers, see Azure Policy [pricing guide](https://azure.microsoft.com/pricing/details/azure-policy/). - Report on configuration changes about installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers using Azure Automation [Change Tracking and Inventory](../../automation/change-tracking/overview.md) and [Azure Security Center File Integrity Monitoring](../../security-center/security-center-file-integrity-monitoring.md), for servers enabled with [Azure Defender for servers](../../security-center/defender-for-servers-introduction.md).
When you connect your machine to Azure Arc enabled servers, it enables the abili
> [!NOTE] > At this time, enabling Update Management directly from an Arc enabled server is not supported. See [Enable Update Management from your Automation account](../../automation/update-management/enable-from-automation-account.md) to understand requirements and how to enable for your server. -- Include your non-Azure servers for threat detection and proactively monitor for potential security threats using [Azure Security Center](../../security-center/security-center-introduction.md).
+- Include your non-Azure servers for advanced threat detection and proactively monitor for potential security threats using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Defender](../../security-center/azure-defender.md).
+
+- Protect non-Azure servers with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint), included through [Azure Defender](../../security-center/azure-defender.md), for threat detection, for vulnerability management, and to proactively monitor for potential security threats.
Log data collected and stored in a Log Analytics workspace from the hybrid machine now contains properties specific to the machine, such as a Resource ID. This can be used to support [resource-context](../../azure-monitor/logs/design-logs-deployment.md#access-mode) log access.
The Connected Machine agent sends a regular heartbeat message to the service eve
## Next steps
-Before evaluating or enabling Arc enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
+Before evaluating or enabling Arc enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-cache-for-redis Cache Aspnet Session State Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-aspnet-session-state-provider.md
It's often not practical in a real-world cloud app to avoid storing some form of
## Store ASP.NET session state in the cache
-To configure a client application in Visual Studio using the Azure Cache for Redis Session State NuGet package, click **NuGet Package Manager**, **Package Manager Console** from the **Tools** menu.
+To configure a client application in Visual Studio using the Azure Cache for Redis Session State NuGet package, select **NuGet Package Manager**, **Package Manager Console** from the **Tools** menu.
Run the following command from the `Package Manager Console` window.
-
```powershell Install-Package Microsoft.Web.RedisSessionStateProvider
Install-Package Microsoft.Web.RedisSessionStateProvider
> [!IMPORTANT] > If you are using the clustering feature from the premium tier, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown. Moving to 2.0.1 or higher is a breaking change; for more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details). At the time of this article update, the current version of this package is 2.2.3.
->
->
+>
+>
The Redis Session State Provider NuGet package has a dependency on the StackExchange.Redis.StrongName package. If the StackExchange.Redis.StrongName package is not present in your project, it is installed.
The NuGet package downloads and adds the required assembly references and adds t
The commented section provides an example of the attributes and sample settings for each attribute.
-Configure the attributes with the values from your cache blade in the Microsoft Azure portal, and configure the other values as desired. For instructions on accessing your cache properties, see [Configure Azure Cache for Redis settings](cache-configure.md#configure-azure-cache-for-redis-settings).
+Configure the attributes with the values from your cache in the Microsoft Azure portal, and configure the other values as desired. For instructions on accessing your cache properties, see [Configure Azure Cache for Redis settings](cache-configure.md#configure-azure-cache-for-redis-settings).
* **host** – specify your cache endpoint. * **port** – use either your non-TLS/SSL port or your TLS/SSL port, depending on the TLS settings.
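Pulling those attributes together, the resulting provider entry in *web.config* looks roughly like the following sketch; the provider name, host, and access key are illustrative placeholders, not values from this article:

```xml
<sessionState mode="Custom" customProvider="MySessionStateStore">
  <providers>
    <!-- Values mirror the host/port/accessKey/ssl attributes described above; replace with your own. -->
    <add name="MySessionStateStore"
         type="Microsoft.Web.Redis.RedisSessionStateProvider"
         host="contoso.redis.cache.windows.net"
         port="6380"
         accessKey="your-access-key"
         ssl="true" />
  </providers>
</sessionState>
```

Setting `ssl="true"` pairs with the TLS port (6380); the non-TLS port (6379) is disabled by default on new caches.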
Once these steps are performed, your application is configured to use the Azure
> [!IMPORTANT] > Data stored in the cache must be serializable, unlike the data that can be stored in the default in-memory ASP.NET Session State Provider. When the Session State Provider for Redis is used, be sure that the data types that are being stored in session state are serializable.
->
->
+>
+>
## ASP.NET Session State options
For more information about session state and other best practices, see [Web Deve
## Next steps
-Check out the [ASP.NET Output Cache Provider for Azure Cache for Redis](cache-aspnet-output-cache-provider.md).
+Check out the [ASP.NET Output Cache Provider for Azure Cache for Redis](cache-aspnet-output-cache-provider.md).
azure-cache-for-redis Cache Dotnet Core Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-core-quickstart.md
If you will be continuing to the next tutorial, you can keep the resources creat
Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. > [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
>
-Sign in to the [Azure portal](https://portal.azure.com) and click **Resource groups**.
+Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, click **...** then **Delete resource group**.
+In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
![Delete](./media/cache-dotnet-core-quickstart/cache-delete-resource-group.png)
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and click **Delete**.
+You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
azure-cache-for-redis Cache Dotnet How To Use Azure Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-dotnet-how-to-use-azure-redis-cache.md
Replace `<access-key>` with the primary key for your cache.
## Create a console app
-In Visual Studio, click **File** > **New** > **Project**.
+In Visual Studio, select **File** > **New** > **Project**.
-Select **Console App (.NET Framework)**, and **Next** to configure your app. Type a **Project name**, verify that **.NET Framework 4.6.1** or higher is selected, and then click **Create** to create a new console application.
+Select **Console App (.NET Framework)**, and **Next** to configure your app. Type a **Project name**, verify that **.NET Framework 4.6.1** or higher is selected, and then select **Create** to create a new console application.
<a name="configure-the-cache-clients"></a>
Select **Console App (.NET Framework)**, and **Next** to configure your app. Typ
In this section, you will configure the console application to use the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) client for .NET.
-In Visual Studio, click **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
+In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
```powershell Install-Package StackExchange.Redis
In Visual Studio, open your *App.config* file and update it to include an `appSe
</configuration> ```
-In Solution Explorer, right-click **References** and click **Add a reference**. Add a reference to the **System.Configuration** assembly.
+In Solution Explorer, right-click **References** and select **Add a reference**. Add a reference to the **System.Configuration** assembly.
Add the following `using` statements to *Program.cs*:
Azure Cache for Redis can cache both .NET objects and primitive data types, but
One simple way to serialize objects is to use the `JsonConvert` serialization methods in [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) and serialize to and from JSON. In this section, you will add a .NET object to the cache.
-In Visual Studio, click **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
+In Visual Studio, select **Tools** > **NuGet Package Manager** > **Package Manager Console**, and run the following command from the Package Manager Console window.
```powershell Install-Package Newtonsoft.Json
If you will be continuing to the next tutorial, you can keep the resources creat
Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. > [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
>
-Sign in to the [Azure portal](https://portal.azure.com) and click **Resource groups**.
+Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, click **...** then **Delete resource group**.
+In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
![Delete](./media/cache-dotnet-how-to-use-azure-redis-cache/cache-delete-resource-group.png)
-You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and click **Delete**.
+You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
azure-cache-for-redis Cache Event Grid Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-event-grid-quickstart-portal.md
In this step, you'll subscribe to a topic to tell Event Grid which events you wa
| **Name** | Enter a name for the event subscription. | The value must be between 3 and 64 characters long. It can only contain letters, numbers, and dashes. | | **Event Types** | Drop down and select which event type(s) you want to get pushed to your destination. For this quickstart, we'll be scaling our cache instance. | Patching, scaling, import and export are the available options. | | **Endpoint Type** | Select **Web Hook**. | Event handler to receive your events. |
- | **Endpoint** | Click **Select an endpoint**, and enter the URL of your web app and add `api/updates` to the home page URL (for example: `https://cache.azurewebsites.net/api/updates`), and then select **Confirm Selection**. | This is the URL of your web app that you created earlier. |
+ | **Endpoint** | Select **Select an endpoint**, and enter the URL of your web app and add `api/updates` to the home page URL (for example: `https://cache.azurewebsites.net/api/updates`), and then select **Confirm Selection**. | This is the URL of your web app that you created earlier. |
5. Now, on the **Create Event Subscription** page, select **Create** to create the event subscription.
Now, let's trigger an event to see how Event Grid distributes the message to you
1. In the Azure portal, navigate to your Azure Cache for Redis instance and select **Scale** on the left menu.
-1. Select the desired pricing tier from the **Scale** page and click **Select**.
+1. Select the desired pricing tier from the **Scale** page and select **Select**.
You can scale to a different pricing tier with the following restrictions:
Now, let's trigger an event to see how Event Grid distributes the message to you
* You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a subsequent scaling operation. * You can't scale from a larger size down to the **C0 (250 MB)** size.
- While the cache is scaling to the new pricing tier, a **Scaling** status is displayed in the **Azure Cache for Redis** blade. When scaling is complete, the status changes from **Scaling** to **Running**.
+ While the cache is scaling to the new pricing tier, a **Scaling** status is displayed on the left in **Azure Cache for Redis**. When scaling is complete, the status changes from **Scaling** to **Running**.
1. You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. The message is in the JSON format and it contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a **ScalingCompleted** event was received.
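As a rough illustration of the shape of that message, a **ScalingCompleted** notification follows the standard Event Grid event schema; the IDs, resource names, and timestamps below are made-up placeholders, and the `data` fields are a sketch rather than a guaranteed contract:

```json
[
  {
    "id": "00000000-0000-0000-0000-000000000000",
    "eventType": "Microsoft.Cache.ScalingCompleted",
    "topic": "/subscriptions/{subscription-id}/resourceGroups/{group}/providers/Microsoft.Cache/Redis/{cache-name}",
    "subject": "ScalingCompleted",
    "eventTime": "2021-06-16T00:00:00Z",
    "data": {
      "name": "ScalingCompleted",
      "timestamp": "2021-06-16T00:00:00Z",
      "status": "Succeeded"
    },
    "dataVersion": "1.0",
    "metadataVersion": "1"
  }
]
```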
azure-cache-for-redis Cache How To Active Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-active-geo-replication.md
Active geo-replication groups two Enterprise Azure Cache for Redis instances int
![Configure active geo-replication](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-not-configured.png)
-1. Click **Configure** to set up **Active geo-replication**.
+1. Select **Configure** to set up **Active geo-replication**.
1. Create a new replication group, for a first cache instance, or select an existing one from the list. ![Link caches](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-new-group.png)
-1. Click **Configure** to finish.
+1. Select **Configure** to finish.
![Active geo-replication configured](./media/cache-how-to-active-geo-replication/cache-active-geo-replication-configured.png)
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
After geo-replication is configured, the following restrictions apply to your li
## Add a geo-replication link
-1. To link two caches together for geo-replication, fist click **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, click **Add cache replication link** from the **Geo-replication** blade.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left.
![Add link](./media/cache-how-to-geo-replication/cache-geo-location-menu.png)
-2. Click the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#geo-replication-prerequisites) for the secondary cache are met. To filter the caches by region, click the region in the map to display only those caches in the **Compatible caches** list.
+2. Select the name of your intended secondary cache from the **Compatible caches** list. If your secondary cache isn't displayed in the list, verify that the [Geo-replication prerequisites](#geo-replication-prerequisites) for the secondary cache are met. To filter the caches by region, select the region in the map to display only those caches in the **Compatible caches** list.
![Geo-replication compatible caches](./media/cache-how-to-geo-replication/cache-geo-location-select-link.png)
After geo-replication is configured, the following restrictions apply to your li
![Geo-replication context menu](./media/cache-how-to-geo-replication/cache-geo-location-select-link-context-menu.png)
-3. Click **Link** to link the two caches together and begin the replication process.
+3. Select **Link** to link the two caches together and begin the replication process.
![Link caches](./media/cache-how-to-geo-replication/cache-geo-location-confirm-link.png)
-4. You can view the progress of the replication process on the **Geo-replication** blade.
+4. You can view the progress of the replication process in **Geo-replication** on the left.
![Linking status](./media/cache-how-to-geo-replication/cache-geo-location-linking.png)
- You can also view the linking status on the **Overview** blade for both the primary and secondary caches.
+ You can also view the linking status using **Overview** on the left for both the primary and secondary caches.
![Screenshot that highlights how to view the linking status for the primary and secondary caches.](./media/cache-how-to-geo-replication/cache-geo-location-link-status.png)
After geo-replication is configured, the following restrictions apply to your li
## Remove a geo-replication link
-1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from the **Geo-replication** blade.
+1. To remove the link between two caches and stop geo-replication, select **Unlink caches** in **Geo-replication** on the left.
![Unlink caches](./media/cache-how-to-geo-replication/cache-geo-location-unlink.png)
azure-cache-for-redis Cache How To Multi Replicas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-multi-replicas.md
To create a cache, follow these steps:
1. Leave other options in their default settings.
-1. Click **Create**.
+1. Select **Create**.
It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
azure-cache-for-redis Cache How To Redis Cli Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-redis-cli-tool.md
With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The
Run **stunnel GUI Start** to start the server.
- Right-click the taskbar icon for the stunnel server and click **Show Log Window**.
+ Right-click the taskbar icon for the stunnel server and select **Show Log Window**.
- On the stunnel Log Window menu, click **Configuration** > **Edit Configuration** to open the current configuration file.
+ On the stunnel Log Window menu, select **Configuration** > **Edit Configuration** to open the current configuration file.
Add the following entry for *redis-cli.exe* under the **Service definitions** section. Insert your actual cache name in place of `yourcachename`.
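The service definition itself isn't shown in this excerpt. As a sketch, assuming the default TLS port 6380 and a local listening address of your choosing (hostname and ports here are placeholders), the entry typically looks like this:

```
[redis-cli]
client = yes
accept = 127.0.0.1:6380
connect = yourcachename.redis.cache.windows.net:6380
```

With an entry like this in place, redis-cli connects to the local `accept` address and stunnel forwards the traffic over TLS to the cache endpoint.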
With Azure Cache for Redis, only the TLS port (6380) is enabled by default. The
Save and close the configuration file.
- On the stunnel Log Window menu, click **Configuration** > **Reload Configuration**.
+ On the stunnel Log Window menu, select **Configuration** > **Reload Configuration**.
## Connect using the Redis command-line tool
azure-cache-for-redis Cache How To Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-version.md
To create a cache, follow these steps:
:::image type="content" source="media/cache-how-to-version/select-redis-version.png" alt-text="Redis version.":::
-1. Click **Create**.
+1. Select **Create**.
It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
azure-cache-for-redis Cache How To Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-zone-redundancy.md
To create a cache, follow these steps:
> Zone redundancy doesn't support AOF persistence or work with geo-replication currently.
>
-1. Click **Create**.
+1. Select **Create**.
It takes a while for the cache to create. You can monitor progress on the Azure Cache for Redis **Overview** page. When **Status** shows as **Running**, the cache is ready to use.
azure-cache-for-redis Cache Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-java-get-started.md
If you will be continuing to the next tutorial, you can keep the resources creat
Otherwise, if you are finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.

> [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
>
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-ml.md
If you're continuing to the next tutorial, you can keep the resources that you c
Otherwise, if you're finished with the quickstart, you can delete the Azure resources that you created in this quickstart to avoid charges.

> [!IMPORTANT]
-> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
### To delete a resource group
azure-cache-for-redis Cache Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-nodejs-get-started.md
If you continue to the next tutorial, you can keep the resources created in this qui
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.

> [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
>
Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
azure-cache-for-redis Cache Web App Aspnet Core Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-aspnet-core-howto.md
If you're continuing to the next tutorial, you can keep the resources that you c
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources that you created in this quickstart to avoid charges.

> [!IMPORTANT]
-> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
### To delete a resource group
azure-cache-for-redis Cache Web App Cache Aside Leaderboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-cache-aside-leaderboard.md
Select some of the actions and experiment with retrieving the data from the diff
When you're finished with the sample tutorial application, you can delete the Azure resources to conserve cost and resources. All of your resources should be contained in the same resource group. You can delete them together in one operation by deleting the resource group. The instructions in this article used a resource group named *TestResources*.

> [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group, that contains resources you want to keep, you can delete each resource individually from their respective blades.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left.
>
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
azure-cache-for-redis Cache Web App Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-web-app-howto.md
If you're continuing to the next tutorial, you can keep the resources that you c
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources that you created in this quickstart to avoid charges.

> [!IMPORTANT]
-> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group.
+> Deleting a resource group is irreversible. When you delete a resource group, all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
### To delete a resource group
azure-functions Durable Functions Perf And Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-perf-and-scale.md
If the maximum number of activities or orchestrations/entities on a worker VM is
> These settings are useful to help manage memory and CPU usage on a single VM. However, when scaled out across multiple VMs, each VM has its own set of limits. These settings can't be used to control concurrency at a global level.

> [!NOTE]
-> Orchestrations and entities are only loaded into memory when they are actively processing events or operations. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they are unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, only orchestrations or entities or do not count towards the throttle limit unless they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
+> Orchestrations and entities are only loaded into memory when they are actively processing events or operations. After executing their logic and awaiting (i.e. hitting an `await` (C#) or `yield` (JavaScript, Python) statement in the orchestrator function code), they are unloaded from memory. Orchestrations and entities that are unloaded from memory don't count towards the `maxConcurrentOrchestratorFunctions` throttle. Even if millions of orchestrations or entities are in the "Running" state, they only count towards the throttle limit when they are loaded into active memory. An orchestration that schedules an activity function similarly doesn't count towards the throttle if the orchestration is waiting for the activity to finish executing.
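As a minimal host.json sketch of the concurrency throttles discussed above (the values shown are illustrative, not recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 10,
      "maxConcurrentOrchestratorFunctions": 10
    }
  }
}
```

Each worker VM applies these limits independently, which is why they cap per-VM concurrency rather than global concurrency.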
### Language runtime considerations
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Before you get started, make sure to install the [Azure Databases extension](htt
2. Click **Create a resource** > **Databases** > **Azure Cosmos DB**.
- :::image type="content" source="../../includes/media/cosmos-db-create-dbaccount/create-nosql-db-databases-json-tutorial-1.png" alt-text="The Azure portal Databases pane" border="true":::
+ :::image type="content" source="../cosmos-db/includes/media/cosmos-db-create-dbaccount/create-nosql-db-databases-json-tutorial-1.png" alt-text="The Azure portal Databases pane" border="true":::
3. In the **Create Azure Cosmos DB Account** page, enter the settings for your new Azure Cosmos DB account.
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus.md
The example host.json file below contains only the settings for version 5.0.0 an
"version": "2.0",
"extensions": {
  "serviceBus": {
- "serviceBusOptions": {
- "retryOptions":{
- "mode": "exponential",
- "tryTimeout": "00:00:10",
- "delay": "00:00:00.80",
- "maxDelay": "00:01:00",
- "maxRetries": 4
- },
- "prefetchCount": 100,
- "autoCompleteMessages": true,
- "maxAutoLockRenewalDuration": "00:05:00",
- "maxConcurrentCalls": 32,
- "maxConcurrentSessions": 10,
- "maxMessages": 2000,
- "sessionIdleTimeout": "00:01:00"
- }
+ "retryOptions":{
+ "mode": "exponential",
+ "tryTimeout": "00:01:00",
+ "delay": "00:00:00.80",
+ "maxDelay": "00:01:00",
+ "maxRetries": 3
+ },
+ "prefetchCount": 0,
+ "autoCompleteMessages": true,
+ "maxAutoLockRenewalDuration": "00:05:00",
+ "maxConcurrentCalls": 16,
+ "maxConcurrentSessions": 8,
+ "maxMessages": 1000,
+ "sessionIdleTimeout": "00:01:00"
    }
  }
}
In addition to the above configuration properties when using version 5.x and hig
|Property |Default | Description |
|---|---|---|
|mode|Exponential|The approach to use for calculating retry delays. The default exponential mode will retry attempts with a delay based on a back-off strategy where each attempt will increase the duration that it waits before retrying. The `Fixed` mode will retry attempts at fixed intervals with each delay having a consistent duration.|
-|tryTimeout|00:00:10|The maximum duration to wait for an operation per attempt.|
+|tryTimeout|00:01:00|The maximum duration to wait for an operation per attempt.|
|delay|00:00:00.80|The delay or back-off factor to apply between retry attempts.|
|maxDelay|00:01:00|The maximum delay to allow between retry attempts.|
|maxRetries|3|The maximum number of retry attempts before considering the associated operation to have failed.|
azure-functions Functions Create Cosmos Db Triggered Function https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-cosmos-db-triggered-function.md
Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account
You must have an Azure Cosmos DB account that uses the SQL API before you create the trigger. ## Create a function app in Azure
azure-functions Functions Integrate Store Unstructured Data Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-integrate-store-unstructured-data-cosmosdb.md
To complete this tutorial:
You must have an Azure Cosmos DB account that uses the SQL API before you create the output binding. ## Add an output binding
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-web-sdk.md
For more information, see the [Add a heat map layer](map-add-heat-map-layer.md)
### Line progress expression
-A line progress expression retrieves the progress along a gradient line in a line layer and is defined as `['line-progress']`. This value is a number between 0 and 1. It's used in combination with an `interpolation` or `step` expression. This expression can only be used with the [strokeGradient option]( https://docs.microsoft.com/javascript/api/azure-maps-control/atlas.linelayeroptions#strokegradient) of the line layer.
+A line progress expression retrieves the progress along a gradient line in a line layer and is defined as `['line-progress']`. This value is a number between 0 and 1. It's used in combination with an `interpolation` or `step` expression. This expression can only be used with the [strokeGradient option](/javascript/api/azure-maps-control/atlas.linelayeroptions#strokegradient) of the line layer.
> [!NOTE]
> The `strokeGradient` option of the line layer requires the `lineMetrics` option of the data source to be set to `true`.
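As a sketch, the `['line-progress']` expression is typically combined with an `interpolate` expression in the line layer's `strokeGradient` option (the color stops here are illustrative):

```json
[
  "interpolate",
  ["linear"],
  ["line-progress"],
  0, "blue",
  0.5, "purple",
  1, "red"
]
```

The interpolation maps the 0-to-1 progress value along the line to a smooth color gradient between the listed stops.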
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-search-for-address.md
The [Azure Maps Search Service](/rest/api/maps/search) is a set of RESTful APIs
In this article, you'll learn how to:
-* Request latitude and longitude coordinates for an address (geocode address location) by using the [Search Address API]( https://docs.microsoft.com/rest/api/maps/search/getsearchaddress).
+* Request latitude and longitude coordinates for an address (geocode address location) by using the [Search Address API](/rest/api/maps/search/getsearchaddress).
* Search for an address or Point of Interest (POI) using the [Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy).
* Make a [Reverse Address Search](/rest/api/maps/search/getsearchaddressreverse) to translate coordinate location to street address.
* Translate coordinate location into a human understandable cross street by using [Search Address Reverse Cross Street API](/rest/api/maps/search/getsearchaddressreversecrossstreet). Most often, this is needed in tracking applications that receive a GPS feed from a device or asset, and wish to know where the coordinate is located.
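For example, a geocoding request to the Search Address API has the following shape (the query value is illustrative; replace the key placeholder with your own subscription key):

```http
GET https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key={Azure-Maps-Primary-Subscription-key}&query=400 Broad Street, Seattle, WA
```

The response includes candidate results ranked by match score, each with latitude and longitude coordinates for the address.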
In this example, we'll use Fuzzy Search to search the entire world for `pizza`.
## Search for a street address using Reverse Address Search
-The Azure Maps [Get Search Address Reverse API]( https://docs.microsoft.com/rest/api/maps/search/getsearchaddressreverse) translates coordinates into human readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points.
+The Azure Maps [Get Search Address Reverse API](/rest/api/maps/search/getsearchaddressreverse) translates coordinates into human readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points.
> [!IMPORTANT]
> To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search](how-to-use-best-practices-for-search.md#geobiased-search-results).
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-app.md
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
If migrating an existing web application, check to see if it is using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it is and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The links below provide details on how to use Azure Maps in some commonly used open-source map control libraries.
-* [Cesium](https://cesiumjs.org/) - A 3D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=Cesium) \| [Plugin repo]()
+* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=Cesium) \| [Plugin repo]()
* [Leaflet](https://leafletjs.com/) ΓÇô Lightweight 2D map control for the web. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=leaflet) \| [Plugin repo]() * [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://azuremapscodesamples.azurewebsites.net/?search=openlayers) \| [Plugin repo]()
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-web-app.md
You will also learn:
If migrating an existing web application, check to see if it is using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you do not want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [Documentation](https://cesiumjs.org/)
+* Cesium - A 3D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20Cesium%20JS) \| [Documentation](https://www.cesium.com/)
* Leaflet ΓÇô Lightweight 2D map control for the web. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Azure%20Maps%20Raster%20Tiles%20in%20Leaflet%20JS) \| [Documentation](https://leafletjs.com/) * OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://azuremapscodesamples.azurewebsites.net/https://docsupdatetracker.net/index.html?sample=Raster%20Tiles%20in%20OpenLayers) \| [Documentation](https://openlayers.org/)
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-creator-indoor-maps.md
To create a dataset:
6. Enter the following URL to the [Dataset API](/rest/api/maps/v2/dataset). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{conversionId}` with the `conversionId` obtained in [Check Drawing package conversion status](#check-the-drawing-package-conversion-status)):

    ```http
- https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&type=facility&subscription-key={Azure-Maps-Primary-Subscription-key}
+ https://us.atlas.microsoft.com/datasets?api-version=2.0&conversionId={conversionId}&subscription-key={Azure-Maps-Primary-Subscription-key}
    ```

7. Select **Send**.
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/web-sdk-best-practices.md
https://azure.com/support
If related to the Azure Maps visual in Power BI: https://powerbi.microsoft.com/support/

For all other Azure Maps
-or the developer forums: https://docs.microsoft.com/answers/topics/azure-maps.html
+or the developer forums: [https://docs.microsoft.com/answers/topics/azure-maps.html](/answers/topics/azure-maps.html).
**How do I make a feature request?**
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-core.md
The `.cshtml` file names referenced earlier are from a default MVC application t
If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md). To do this, add the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Or you can add the snippet to multiple pages, but this solution is difficult to maintain and we generally don't recommend it.

> [!NOTE]
-> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the instrumentation key, you are required to manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk).
+> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the instrumentation key, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#adding-the-javascript-sdk).
## Configure the Application Insights SDK
azure-monitor Dotnet Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/dotnet-quickstart.md
+
+ Title: 'Quickstart: Monitor an ASP.NET Core app with Azure Monitor Application Insights'
+description: Instrument an ASP.NET Core web app for monitoring with Azure Monitor Application Insights.
+Last updated : 06/11/2021
+# Quickstart: Monitor an ASP.NET Core app with Azure Monitor Application Insights
+
+In this quickstart, you'll instrument an ASP.NET Core app using the Application Insights SDK to gather client-side and server-side telemetry.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- .NET 5.0 SDK or later. [Install the latest .NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet/5.0) for your platform.
+
+## Create an Application Insights resource
+
+To begin ingesting telemetry, create an Application Insights resource in your Azure subscription.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Select **Create a resource** > **Developer tools** > **Application Insights**.
+
+1. Complete the form that appears:
+ 1. Verify the selected **Subscription**.
+ 1. Select an existing or new **Resource Group**.
+ 1. Specify a **Name** for this Application Insights resource.
+ 1. Select a **Region** near your location.
+ 1. Set the **Resource Mode** to *Classic*.
+
+1. Select the **Review + Create** button.
+1. Select the **Create** button.
+1. After deployment completes, select the **Go to resource** button.
+1. From the **Overview** that appears, copy your **Instrumentation Key** (found under **Essentials**).
+
+## Create and configure an ASP.NET Core web app
+
+Complete the following steps to create and configure a new ASP.NET Core web app:
+
+1. Create a new ASP.NET Core Razor Pages app:
+
+ ```dotnetcli
+ dotnet new razor -o ai.quickstart
+ ```
+
+ The previous command creates a new ASP.NET Core Razor Pages app in a directory named *ai.quickstart*.
+
+ > [!TIP]
+ > You may prefer to [use Visual Studio to create your app](/visualstudio/ide/quickstart-aspnet-core).
+
+1. From inside the project directory, add the `Microsoft.ApplicationInsights.AspNetCore` package to the project. If you're using Visual Studio, you can use [NuGet Package Manager](/nuget/consume-packages/install-use-packages-visual-studio).
+
+ ```dotnetcli
+ dotnet add package Microsoft.ApplicationInsights.AspNetCore --version 2.17.0
+ ```
+
+1. Using a text editor or IDE, modify *appsettings.json* to contain a value for `ApplicationInsights.InstrumentationKey`, as shown. Use the instrumentation key you copied earlier.
+
+ :::code language="json" source="~/dotnet-samples/azure/app-insights-aspnet-core-quickstart/appsettings.json" range="1-12" highlight="2-4":::
+
+ > [!IMPORTANT]
+ > The Application Insights SDK expects the `ApplicationInsights.InstrumentationKey` configuration value. Be sure to name it correctly!
+
+## Configure server-side telemetry
+
+In the `ConfigureServices` method of *Startup.cs*, add the Application Insights service to the pipeline. Add the highlighted line:
++
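The highlighted line is pulled from an external code snippet that isn't shown in this excerpt. As a sketch (assuming the Razor Pages template, so `AddRazorPages` is already present), the registration looks like:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    // Enables server-side telemetry collection. The SDK reads the
    // instrumentation key from the ApplicationInsights:InstrumentationKey
    // configuration value set in appsettings.json.
    services.AddApplicationInsightsTelemetry();
}
```

`AddApplicationInsightsTelemetry` registers the telemetry modules that collect requests, dependencies, and exceptions automatically.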
+## Configure client-side telemetry
+
+Complete the following steps to instrument the app to send client-side telemetry:
+
+1. In *Pages/_ViewImports.cshtml*, add the following line:
+
+ ```cshtml
+ @inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet
+ ```
+
+ The previous change registers a `Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet` dependency containing the Application Insights client-side script element.
+
+1. In *Pages/Shared/_Layout.cshtml*, in the `<head>` element, add the highlighted line:
+
+ :::code language="cshtml" source="~/dotnet-samples/azure/app-insights-aspnet-core-quickstart/pages/shared/_layout.cshtml" range="3-10" highlight="7":::
+
+ This change uses the injected `JavaScriptSnippet` object to ensure the `<script>` element is rendered in the `<head>` element of every page in the app.
+
+## Validate telemetry ingestion
+
+It takes several minutes for telemetry to be ingested into Application Insights for analysis. To verify that your app is sending telemetry in real time, use **Live metrics**:
+
+1. Run the web app using `dotnet run` or your IDE.
+1. In the Azure portal, when viewing your Application Insights resource, select **Live metrics** under **Investigate**.
+1. In your app, select the **Home** and **Privacy** links repeatedly.
+1. Observe activity on the **Live metrics** display as requests are made in the app.
+
+## Next steps
+
+Congratulations! You can now use the telemetry sent by your app to:
+
+- [Find runtime exceptions](tutorial-runtime-exceptions.md).
+- [Find performance issues](tutorial-performance.md).
+- [Alert on app health](tutorial-alert.md).
+
+> [!div class="nextstepaction"]
+> [Learn more about Application Insights in ASP.NET Core](asp-net-core.md)
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
Most configuration fields are named such that they can be defaulted to false. Al
| loggingLevelTelemetry | Sends **internal** Application Insights errors as telemetry. <br>0: off, <br>1: Critical errors only, <br>2: Everything (errors & warnings) | numeric<br/> 1 |
| diagnosticLogInterval | (internal) Polling interval (in ms) for internal logging queue | numeric<br/> 10000 |
| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this if you wish to preserve your data cap for large-scale applications. | numeric<br/>100 |
-| autoTrackPageVisitTime | If true, on a pageview, the previous instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. | boolean<br/>false |
+| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It is sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (IE8 or less). Default is false. | boolean<br/>false |
| disableAjaxTracking | If true, Ajax calls are not autocollected. | boolean<br/> false |
| disableFetchTracking | If true, Fetch requests are not autocollected.|boolean<br/>true |
| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated using the navigation timing API. |boolean<br/>
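The `autoTrackPageVisitTime` row above describes a timestamp fallback. A minimal illustrative sketch of that logic (not the SDK's actual source):

```javascript
// Use Date.now() when available (missing in IE8 and earlier),
// otherwise fall back to (new Date()).getTime().
function nowMs() {
  return typeof Date.now === "function" ? Date.now() : new Date().getTime();
}

// Page visit time is the delta between two such timestamps,
// reported in milliseconds as the PageVisitTime custom metric.
const start = nowMs();
const elapsedMs = nowMs() - start;
```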
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/customer-managed-keys.md
These settings can be updated in Key Vault via CLI and PowerShell:
## Create cluster
-Clusters support two [managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types): System-assigned and User-assigned, while a single identity can be defined in a cluster depending on your scenario.
-- System-assigned managed identity is simpler and being generated automatically with the cluster creation when identity `type` is set to "*SystemAssigned*". This identity can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
+Clusters support System-assigned managed identity, and the identity `type` property should be set to `SystemAssigned`. The identity is generated automatically during cluster creation and can be used later to grant storage access to your Key Vault for wrap and unwrap operations.
Identity settings in cluster for System-assigned managed identity ```json
Clusters support two [managed identity types](../../active-directory/managed-ide
}
```

-- If you want to configure Customer-managed key at cluster creation, you should have a key and User-assigned identity granted in your Key Vault beforehand, then create the cluster with these settings: identity `type` as "*UserAssigned*", `UserAssignedIdentities` with the *resource ID* of your identity.
-
- Identity settings in cluster for User-assigned managed identity
- ```json
- {
- "identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.ManagedIdentity/UserAssignedIdentities/<cluster-assigned-managed-identity>"
- }
- }
- ```
-
-> [!IMPORTANT]
-> You can't use User-assigned managed identity if your Key Vault is in Private-Link (vNet). You can use System-assigned managed identity in this scenario.
- Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicated-clusters.md#creating-a-cluster). ## Grant Key Vault permissions
Customer-Managed key is provided on dedicated cluster and these operations are r
- Behavior with Key Vault availability
  - In normal operation -- Storage caches AEK for short periods of time and goes back to Key Vault to unwrap periodically.
- - Transient connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for a short while longer and this overcomes any small blips in availability. The query and ingestion capabilities continue without interruption.
+   - Key Vault connection errors -- Storage handles transient errors (timeouts, connection failures, DNS issues) by allowing keys to stay in cache for the duration of the availability issue, which rides out brief blips and availability issues. The query and ingestion capabilities continue without interruption.
- - Live site -- unavailability of about 30 minutes will cause the Storage account to become unavailable. The query capability is unavailable and ingested data is cached for several hours using Microsoft key to avoid data loss. When access to Key Vault is restored, query becomes available and the temporary cached data is ingested to the data-store and encrypted with Customer-managed key.
-
- - Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 to 60 seconds.
+- Key Vault access rate -- The frequency that Azure Monitor Storage accesses Key Vault for wrap and unwrap operations is between 6 and 60 seconds.
- If you update your cluster while the cluster is at provisioning or updating state, the update will fail.
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
# Linux concurrency best practices for Azure NetApp Files - Session slots and slot table entries
-This article helps you understand concurrency best practices about session slots and slot table entries for Azure NetApp Files NFS protocol.
+This article helps you understand concurrency best practices for session slots and slot table entries of the Azure NetApp Files NFS protocol.
## NFSv3
azure-netapp-files Performance Linux Mount Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/performance-linux-mount-options.md
# Linux NFS mount options best practices for Azure NetApp Files
-This article helps you understand mount options and the best practices about using them with Azure NetApp Files.
+This article helps you understand mount options and the best practices for using them with Azure NetApp Files.
## `Nconnect`
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
description: Describes the functions to use in a Bicep file to retrieve values a
Previously updated : 06/01/2021 Last updated : 06/16/2021 # Resource functions for Bicep
Last updated 06/01/2021
Resource Manager provides the following functions for getting resource values in your Bicep file:

* [extensionResourceId](#extensionresourceid)
+* [getSecret](#getsecret)
* [list*](#list)
* [pickZones](#pickzones)
* [reference](#reference)
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 06/01/2021 Last updated : 06/16/2021 # Bicep functions
The following functions are available for working with objects.
The following functions are available for getting resource values:

* [extensionResourceId](./bicep-functions-resource.md#extensionresourceid)
+* [getSecret](./bicep-functions-resource.md#getsecret)
* [listAccountSas](./bicep-functions-resource.md#list)
* [listKeys](./bicep-functions-resource.md#listkeys)
* [listSecrets](./bicep-functions-resource.md#list)
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/key-vault-parameter.md
description: Shows how to pass a secret from a key vault as a parameter during B
Previously updated : 06/01/2021 Last updated : 06/16/2021 # Use Azure Key Vault to pass secure parameter value during Bicep deployment
-Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. You retrieve the value by referencing the key vault and secret in your parameter file. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the `getSecret` function to obtain a key vault secret. The value is never exposed because you only reference its key vault ID. The key vault can exist in a different subscription than the resource group you're deploying to.
+Instead of putting a secure value (like a password) directly in your Bicep file or parameter file, you can retrieve the value from an [Azure Key Vault](../../key-vault/general/overview.md) during a deployment. When a [module](./modules.md) expects a `string` parameter with `secure:true` modifier, you can use the [getSecret function](bicep-functions-resource.md#getsecret) to obtain a key vault secret. The value is never exposed because you only reference its key vault ID. The key vault can exist in a different subscription than the resource group you're deploying to.
This article's focus is how to pass a sensitive value as a Bicep parameter. The article doesn't cover how to set a virtual machine property to a certificate's URL in a key vault. For a quickstart template of that scenario, see [Install a certificate from Azure Key Vault on a Virtual Machine](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/vm-winrm-keyvault-windows).
When using a key vault with the Bicep file for a [Managed Application](../manage
## Use getSecret function
-You can use the [`getSecret` function](./bicep-functions-resource.md#getsecret) to obtain a key vault secret and pass the value to a `string` parameter of a module. The `getSecret` function can only be called on a `Microsoft.KeyVault/vaults` resource and can be used only with parameter with `@secure()` decorator.
+You can use the [getSecret function](./bicep-functions-resource.md#getsecret) to obtain a key vault secret and pass the value to a `string` parameter of a module. The `getSecret` function can only be called on a `Microsoft.KeyVault/vaults` resource, and can be used only with a parameter that has the `@secure()` decorator.
The following Bicep file creates an Azure SQL server. The `adminPassword` parameter has a `@secure()` decorator.
module sql './sql.bicep' = {
## Reference secrets in parameter file
-With this approach, you reference the key vault in the parameter file, not the Bicep. The following image shows how the parameter file references the secret and passes that value to the Bicep file.
+If you don't want to use a module, you can reference the key vault directly in the parameter file. The following image shows how the parameter file references the secret and passes that value to the Bicep file.
![Resource Manager key vault integration diagram](./media/key-vault-parameter/statickeyvault.png)
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/parameter-files.md
description: Create parameter file for passing in values during deployment of a
Previously updated : 06/01/2021 Last updated : 06/16/2021 # Create Bicep parameter file
A parameter file uses the following format:
} ```
-Notice that the parameter file stores parameter values as plain text. This approach works for values that aren't sensitive, such as a resource SKU. Plain text doesn't work for sensitive values, such as passwords. If you need to pass a parameter that contains a sensitive value, store the value in a key vault. Then reference the key vault in your parameter file. The sensitive value is securely retrieved during deployment.
-
-The following parameter file includes a plain text value and a sensitive value that's stored in a key vault.
-
-```json
-{
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "<first-parameter-name>": {
- "value": "<first-value>"
- },
- "<second-parameter-name>": {
- "reference": {
- "keyVault": {
- "id": "<resource-id-key-vault>"
- },
- "secretName": "<secret-name>"
- }
- }
- }
-}
-```
-
-For more information about using values from a key vault, see [Use Azure Key Vault to pass secure parameter value during deployment](./key-vault-parameter.md).
+Notice that the parameter file stores parameter values as plain text. This approach works for values that aren't sensitive, such as a resource SKU. Plain text doesn't work for sensitive values, such as passwords. If you need to pass a parameter that contains a sensitive value, store the value in a key vault. Instead of adding the sensitive value to your parameter file, retrieve it with the [getSecret function](bicep-functions-resource.md#getsecret). For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
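As an illustrative sketch of that approach (the vault, secret, and module names here are hypothetical), the Bicep file references an existing key vault and passes the secret to a module parameter:

```bicep
resource kv 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
  name: 'examplevault'
  scope: resourceGroup('examplegroup')
}

module sql './sql.bicep' = {
  name: 'deploySQL'
  params: {
    adminPassword: kv.getSecret('examplesecret')
  }
}
```

The secret value itself never appears in the parameter file or the Bicep file; only the reference does.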
## Define parameter values
The following example shows the formats of different parameter types: string, in
## Deploy Bicep file with parameter file
-From Azure CLI you pass a local parameter file using `@` and the parameter file name. For example, `@storage.parameters.json`.
+From Azure CLI, pass a local parameter file using `@` and the parameter file name. For example, `@storage.parameters.json`.
```azurecli az deployment group create \
az deployment group create \
For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters). To deploy _.bicep_ files you need Azure CLI version 2.20 or higher.
-From Azure PowerShell you pass a local parameter file using the `TemplateParameterFile` parameter.
+From Azure PowerShell, pass a local parameter file using the `TemplateParameterFile` parameter.
```azurepowershell New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup `
If your Bicep file includes a parameter with the same name as one of the paramet
## Next steps - For more information about how to define parameters in a Bicep file, see [Parameters in Bicep](./parameters.md).-- For more information about using values from a key vault, see [Use Azure Key Vault to pass secure parameter value during deployment](./key-vault-parameter.md).
+- To get sensitive values, see [Use Azure Key Vault to pass secure parameter value during deployment](./key-vault-parameter.md).
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.HealthcareApis | [Azure API for FHIR](../../healthcare-apis/fhir/index.yml) |
| Microsoft.HybridCompute | [Azure Arc](../../azure-arc/index.yml) |
| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) |
-| Microsoft.HybridNetwork | [Private Edge Zones](../../networking/edge-zones-overview.md) |
+| Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) |
| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) |
| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.IoTCentral | [Azure IoT Central](../../iot-central/index.yml) |
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/active-geo-replication-overview.md
Active geo-replication is an Azure SQL Database feature that allows you to creat
> [!NOTE]
> Active geo-replication for Azure SQL Hyperscale is [now in public preview](https://aka.ms/hsgeodr). Current limitations include: only one geo-secondary in the same or a different region, forced and planned failover not currently supported, restore database from geo-secondary not supported, using a geo-secondary as the source database for Database Copy, or as the primary for another geo-secondary is not supported.
-> In the case you need to make the geo secondary writable, you can do so by breaking the geo-replication link with the steps below:
-> 1. Make the secondary database a read-write standalone database using the cmdlet [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary). Any data changes committed to the primary but not yet replicated to the secondary will be lost. These changes could be recovered when the old primary is available, or in some cases by restoring the old primary to the latest available point in time.
+>
+> If you need to make the geo-secondary a primary (writable database), follow the steps below:
+> 1. Break the geo-replication link using the cmdlet [Remove-AzSqlDatabaseSecondary](/powershell/module/az.sql/remove-azsqldatabasesecondary) in PowerShell or [az sql db replica delete-link](/cli/azure/sql/db/replica?view=azure-cli-latest#az_sql_db_replica_delete_link) in Azure CLI. This makes the secondary database a read-write standalone database. Any data changes committed to the primary but not yet replicated to the secondary will be lost. These changes could be recovered when the old primary is available, or in some cases by restoring the old primary to the latest available point in time.
> 2. If the old primary is available, delete it, then set up geo-replication for the new primary (a new secondary will be seeded).
> 3. Update connection strings in your application accordingly.
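For step 1, an Azure CLI invocation might look like the following sketch (the server, database, and resource group names are placeholders):

```azurecli
az sql db replica delete-link \
    --resource-group myResourceGroup \
    --server myPrimaryServer \
    --name mySampleDatabase \
    --partner-server mySecondaryServer
```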
For more information on the SQL Database compute sizes, see [What are SQL Databa
## Cross-subscription geo-replication > [!NOTE]
-> Creating a geo-replica on a logical server in a different Azure tenant is not supported when [Azure Active Directory](https://techcommunity.microsoft.com/t5/azure-sql/support-for-azure-ad-user-creation-on-behalf-of-azure-ad/ba-p/2346849) auth is active (enabled) on either primary or secondary logical server.
+> Creating a geo-replica on a logical server in a different Azure tenant is not supported when [Azure Active Directory](https://techcommunity.microsoft.com/t5/azure-sql/azure-active-directory-only-authentication-for-azure-sql/ba-p/2417673) only authentication for Azure SQL is active (enabled) on either primary or secondary logical server.
+> [!NOTE]
+> Cross-subscription geo-replication operations including setup and failover are only supported through SQL commands.
To set up active geo-replication between two databases belonging to different subscriptions (whether under the same tenant or not), you must follow the special procedure described in this section. The procedure is based on SQL commands and requires:
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
Start copying the source database with the [CREATE DATABASE ... AS COPY OF](/sql
> [!NOTE]
> Terminating the T-SQL statement does not terminate the database copy operation. To terminate the operation, drop the target database.
>
+> Database copy is not supported when the source and/or destination servers have a private endpoint configured and public network access is disabled.
+> If a private endpoint is configured but public network access is allowed, initiating database copy when connected to the destination server from a public IP address will succeed.
+> To determine the source IP address of the current connection, execute `SELECT client_net_address FROM sys.dm_exec_connections WHERE session_id = @@SPID;`
+
+
> [!IMPORTANT]
> Selecting backup storage redundancy when using T-SQL CREATE DATABASE ... AS COPY OF command is not supported yet.
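For reference, the copy operation these notes describe is started from the destination server's `master` database with a statement like the following sketch (names are placeholders):

```sql
-- Run in the master database of the destination server.
-- Copies Database1 on server1 to Database2 on the current server.
CREATE DATABASE Database2 AS COPY OF server1.Database1;
```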
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Provisioning and deprovisioning during sync group creation, update, and deletion
### General limitations

- A table can't have an identity column that isn't the primary key.
-- A table must have a clustered index to use data sync.
- A primary key can't have the following data types: sql_variant, binary, varbinary, image, xml.
- Be cautious when you use the following data types as a primary key, because the supported precision is only to the second: time, datetime, datetime2, datetimeoffset.
- The names of objects (databases, tables, and columns) can't contain the printable characters period (.), left square bracket ([), or right square bracket (]).
Once the sync group is created and provisioned, you can then disable these setti
> [!NOTE] > If you change the sync group's schema settings, you will need to allow the Data Sync service to access the server again so that the hub database can be re-provisioned.
+### Region data residency
+
+If you synchronize data within the same region, SQL Data Sync doesn't store or process customer data outside the region in which the service instance is deployed. If you synchronize data across different regions, SQL Data Sync will replicate customer data to the paired regions.
+
## FAQ about SQL Data Sync

### How much does the SQL Data Sync service cost
azure-sql Sql Database Vulnerability Assessment Rules Changelog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-vulnerability-assessment-rules-changelog.md
Previously updated : 12/14/2020 Last updated : 06/16/2021 # SQL Vulnerability assessment rules changelog This article details the changes made to the SQL Vulnerability Assessment service rules. Rules that are updated, removed, or added will be outlined below. For an updated list of SQL Vulnerability assessment rules, see [SQL Vulnerability Assessment rules](sql-database-vulnerability-assessment-rules.md).
+## June 2021
+
+|Rule ID |Rule Title |Change details |
+||||
+|VA1220 |Database communication using TDS should be protected through TLS |Logic change |
+|VA2108 |Minimal set of principals should be members of fixed high impact database roles |Logic change |
+
+
## December 2020

|Rule ID |Rule Title |Change details |
azure-sql Sql Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-vulnerability-assessment.md
To handle Boolean types as true/false, set the baseline result with binary input
} ```
+## Data residency
+
+SQL Vulnerability Assessment queries the SQL server using publicly available queries under Security Center recommendations for SQL Vulnerability Assessment, and stores the query results. The data is stored in the configured user-owned storage account.
+
+SQL Vulnerability Assessment allows you to specify the region where your data will be stored by choosing the location of the storage account. The user is responsible for the security and data resiliency of the storage account.
+ ## Next steps - Learn more about [Azure Defender for SQL](azure-defender-for-sql.md).
azure-sql Threat Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/threat-detection-overview.md
tags: azure-synapse
# SQL Advanced Threat Protection [!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)] :::image type="icon" source="../media/applies-to/yes.png" border="false":::SQL Server on Azure VM :::image type="icon" source="../media/applies-to/yes.png" border="false":::Azure Arc enabled SQL Server
-Advanced Threat Protection for [Azure SQL Database](sql-database-paas-overview.md), [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md), [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), [SQL Server on Azure Virtual Machines](../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) and [Azure Arc enabled SQL Server](/sql/sql-server/azure-arc/overview.ms) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
+Advanced Threat Protection for [Azure SQL Database](sql-database-paas-overview.md), [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md), [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), [SQL Server on Azure Virtual Machines](../virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md) and [Azure Arc enabled SQL Server](/sql/sql-server/azure-arc/overview) detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.
Advanced Threat Protection is part of the [Azure Defender for SQL](../../security-center/defender-for-sql-introduction.md) offering, which is a unified package for advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central Azure Defender for SQL portal.
Click **Advanced Threat Protection alert** to launch the Azure Security Center a
- Learn more about [Azure Defender for SQL](azure-defender-for-sql.md). - Learn more about [Azure SQL Database auditing](../../azure-sql/database/auditing-overview.md) - Learn more about [Azure security center](../../security-center/security-center-introduction.md)-- For more information on pricing, see the [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/)
+ For more information on pricing, see the [Azure SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/)
azure-sql Troubleshoot Transaction Log Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-transaction-log-errors-issues.md
You may see errors 9002 or 40552 when the transaction log is full and cannot acc
These errors are similar to issues with a full transaction log in SQL Server, but have different resolutions in Azure SQL Database or Azure SQL Managed Instance. > [!NOTE]
-> **This article is focused on Azure SQL Database and Azure SQL Managed Instance.** Azure SQL Database and Azure SQL Managed Instance are based on the latest stable version of the Microsoft SQL Server database engine, so much of the content is similar though troubleshooting options and tools may differ. For more on blocking in SQL Server, see [Troubleshoot a Full Transaction Log (SQL Server Error 9002)](/sql/relational-databases/logs/troubleshoot-a-full-transaction-log-sql-server-error-9002).
+> **This article is focused on Azure SQL Database and Azure SQL Managed Instance.** Azure SQL Database and Azure SQL Managed Instance are based on the latest stable version of the Microsoft SQL Server database engine, so much of the content is similar though troubleshooting options and tools may differ. For more on troubleshooting a transaction log in SQL Server, see [Troubleshoot a Full Transaction Log (SQL Server Error 9002)](/sql/relational-databases/logs/troubleshoot-a-full-transaction-log-sql-server-error-9002).
## Automated backups and the transaction log
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
Many Azure virtual machines contain another disk type called the temporary disk
The temporary storage drive is not persisted to remote storage and therefore should not store user database files, transaction log files, or anything that must be preserved.
-Place tempdb on the local temporary SSD `D:\` drive for SQL Server workloads unless consumption of local cache is a concern. If you are using a virtual machine that [does not have a temporary disk](../../../virtual-machines/azure-vms-no-temp-disk.md) then it is recommended to place tempdb on its own isolated disk or storage pool with caching set to read-only. To learn more, see [tempdb data caching policies](performance-guidelines-best-practices-storage.md#data-file-caching-policies).
+Place tempdb on the local temporary SSD `D:\` drive for SQL Server workloads unless consumption of local cache is a concern. If you are using a virtual machine that [does not have a temporary disk](../../../virtual-machines/azure-vms-no-temp-disk.yml) then it is recommended to place tempdb on its own isolated disk or storage pool with caching set to read-only. To learn more, see [tempdb data caching policies](performance-guidelines-best-practices-storage.md#data-file-caching-policies).
### Data disks
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
The diagram shows the integrated monitoring architecture of integrated security
- [Enable Azure Security Center in your subscription](../security-center/security-center-get-started.md). >[!NOTE]
- >Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it in the Azure portal.
+ >Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it.
- [Enable Azure Defender](../security-center/enable-azure-defender.md).
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/integrate-azure-native-services.md
Title: Integrate and deploy Azure native services description: Learn how to integrate and deploy Microsoft Azure native tools to monitor and manage your Azure VMware Solution workloads. Previously updated : 06/14/2021 Last updated : 06/15/2021 # Integrate and deploy Azure native services
Last updated 06/14/2021
Microsoft Azure native services let you monitor, manage, and protect your virtual machines (VMs) in a hybrid environment (Azure, Azure VMware Solution, and on-premises). The Azure native services that you can integrate with Azure VMware Solution include:

- **Log Analytics workspace:** Each workspace has its own data repository and configuration for storing log data. Data sources and solutions are configured to store their data in a specific workspace. Easily deploy the Log Analytics agent using Azure Arc enabled servers VM extension support for new and existing VMs.
-- **Azure Security Center:** Unified infrastructure security management system that strengthens security of data centers, and provides advanced threat protection across hybrid workloads in the cloud or on premises.
-- **Azure Sentinel** is a cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment. Azure Sentinel is built on top of a Log Analytics workspace.
+- **Azure Security Center:** A unified infrastructure security management system that strengthens the security of data centers and provides advanced threat protection across hybrid workloads in the cloud or on-premises. It assesses the vulnerability of Azure VMware Solution VMs and raises alerts as needed. To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
+- **Azure Sentinel:** A cloud-native, security information event management (SIEM) solution. It provides security analytics, alert detection, and automated threat response across an environment. Azure Sentinel is built on top of a Log Analytics workspace.
- **Azure Arc:** Extends Azure management to any infrastructure, including Azure VMware Solution, on-premises, or other cloud platforms. - **Azure Update Management:** Manages operating system updates for your Windows and Linux machines in a hybrid environment.-- **Azure Monitor** Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment.
+- **Azure Monitor:** Comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It requires no deployment.
In this article, you'll integrate Azure native services in your Azure VMware Solution private cloud. You'll also learn how to use the tools to manage your VMs throughout their lifecycle.
In this article, you'll integrate Azure native services in your Azure VMware Sol
1. Once you've enabled Update Management, you can [deploy updates on VMs and review the results](../automation/update-management/deploy-updates.md). -
-## Enable Azure Security Center
-
-Azure Security Center provides advanced threat protection across your Azure VMware Solution and on-premises virtual machines (VMs). It assesses the vulnerability of Azure VMware Solution VMs and raise alerts as needed. These security alerts can be forwarded to Azure Monitor for resolution. For more information, see [Supported features for VMs](../security-center/security-center-services.md).
-
-Azure Security Center offers many features, including:
-- File integrity monitoring-- Fileless attack detection-- Operating system patch assessment -- Security misconfigurations assessment-- Endpoint protection assessment-
->[!NOTE]
->Azure Security Center is a pre-configured tool that doesn't require deployment, but you'll need to enable it in the Azure portal.
-
-To enable Azure Security Center, see [Integrate Azure Security Center with Azure VMware Solution](azure-security-integration.md).
- ## Onboard VMs to Azure Arc enabled servers Azure Arc extends Azure management to any infrastructure, including Azure VMware Solution and on-premises. [Azure Arc enabled servers](../azure-arc/servers/overview.md) lets you manage your Windows and Linux physical servers and virtual machines hosted *outside* of Azure, on your corporate network, or another cloud provider.
azure-vmware Tutorial Expressroute Global Reach Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/tutorial-expressroute-global-reach-private-cloud.md
Before you enable connectivity between two ExpressRoute circuits using ExpressRo
- A separate, functioning ExpressRoute circuit used to connect on-premises environments to Azure, which is _circuit 1_ for peering. - Ensure that all gateways, including the ExpressRoute provider's service, supports 4-byte Autonomous System Number (ASN). Azure VMware Solution uses 4-byte public ASNs for advertising routes.
-[!NOTE]
+>[!NOTE]
> If advertising a default route to Azure (0.0.0.0/0), ensure a more specific route containing your on-premises networks is advertised in addition to the default route to enable management access to AVS. A single 0.0.0.0/0 route will be discarded by Azure VMware Solution's management network to ensure successful operation of the service. ## Create an ExpressRoute auth key in the on-premises ExpressRoute circuit
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 06/02/2021 Last updated : 06/16/2021
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | As of November 23, 2020, supported only in the Korea Central (KRC) and South Africa North (SAN) regions for a limited number of subscriptions (limited preview). For those supported subscriptions, Azure Backup will back up the virtual machines having disks that are Write Accelerated (WA) enabled during backup.<br><br>For the unsupported regions, internet connectivity is required on the VM to take snapshots of Virtual Machines with WA enabled.<br><br> **Important note**: In those unsupported regions, virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
+Disks with Write Accelerator enabled | Currently, backup of Azure VMs with Write Accelerator (WA) disks is in preview in all Azure public regions. <br><br> (Quota is exceeded and no further whitelisting is possible until GA) <br><br> For unsupported subscriptions, snapshots don't include WA disk snapshots, as the WA disk is excluded.
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported. Resize disk on protected VM | Supported.
backup Manage Recovery Points https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-recovery-points.md
Title: Manage recovery points description: Learn how the Azure Backup service manages recovery points for virtual machines Previously updated : 11/08/2020 Last updated : 06/17/2021 # Manage recovery points
To understand how churn impacts the backup performance, look at this scenario:
The backup performance will be in the order VM2>VM3>VM1. The reason for this is the churned data is spread across the various disks. Since the backup of disks happens in parallel, VM2 will show the best performance.
+## Frequently asked questions
+
+### How can I find the retention period of an on-demand backup?
+
+The **Recovery Point Expiry Time in UTC** field in the backup jobs of on-demand backups displays the retention period of the recovery point. To learn more, see [Run an on-demand backup](backup-azure-manage-vms.md#run-an-on-demand-backup).
+ ## Next steps - [Azure Backup architecture and components](backup-architecture.md)
blockchain Data Manager Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/blockchain/service/data-manager-cosmosdb.md
You can delete the Azure Storage account or use it to configure more blockchain
## Create Azure Cosmos DB ### Add a database and container
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Vision-API-How-to-Topics/call-read-api.md
This guide assumes you have already <a href="https://portal.azure.com/#create/Mi
## Submit data to the service
+You submit either a local image or a remote image to the Read API. For a local image, you put the binary image data in the HTTP request body. For a remote image, you specify the image's URL by formatting the request body like the following: `{"url":"http://example.com/images/test.jpg"}`.
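As an illustration only, the following Python sketch builds such a request for a remote image; the endpoint and subscription key are placeholders you would replace with your own resource's values, and the request isn't actually sent here:

```python
import json

# Placeholder values -- substitute your own Computer Vision resource's endpoint and key.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
subscription_key = "<your-subscription-key>"

# The Read call URL; the optional query parameters (language, pages, readingOrder) are omitted.
read_url = f"{endpoint}/vision/v3.2/read/analyze"

# Cognitive Services requests authenticate with the Ocp-Apim-Subscription-Key header.
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}

# For a remote image, the body is a small JSON object holding the image URL.
body = json.dumps({"url": "http://example.com/images/test.jpg"})

print(read_url)
print(body)
```

For a local image, you would instead set `Content-Type: application/octet-stream` and put the raw image bytes in the body.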
+ The Read API's [Read call](https://centraluseuap.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) takes an image or PDF document as the input and extracts text asynchronously. `https://{endpoint}/vision/v3.2/read/analyze[?language][&pages][&readingOrder]`
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
The core operations of Spatial Analysis are all built on a pipeline that ingests
### Follow a quickstart
-Once you're granted access to Spatial Analysis, follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video.
+Follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video.
## Responsible use of Spatial Analysis technology
cognitive-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/data-sources-and-content.md
The table below summarizes the types of content and file formats that are suppor
|Source Type|Content Type| Examples| |--|--|--|
-|URL|FAQs<br> (Flat, with sections or with a topics homepage)<br>Support pages <br> (Single page how-to articles, troubleshooting articles etc.)|[Plain FAQ](../troubleshooting.md), <br>[FAQ with links](https://www.microsoft.com/software-download/faq),<br> [FAQ with topics homepage](https://www.microsoft.com/Licensing/servicecenter/Help/Faq.aspx)<br>[Support article](./best-practices.md)|
+|URL|FAQs<br> (Flat, with sections or with a topics homepage)<br>Support pages <br> (Single page how-to articles, troubleshooting articles etc.)|[Plain FAQ](../troubleshooting.md), <br>[FAQ with links](https://www.microsoft.com/en-us/software-download/faq),<br> [FAQ with topics homepage](https://www.microsoft.com/Licensing/servicecenter/Help/Faq.aspx)<br>[Support article](./best-practices.md)|
|PDF / DOC|FAQs,<br> Product Manual,<br> Brochures,<br> Paper,<br> Flyer Policy,<br> Support guide,<br> Structured QnA,<br> etc.|**Without Multi-turn**<br>[Structured QnA.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/structured.docx),<br> [Sample Product Manual.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/product-manual.pdf),<br> [Sample semi-structured.docx](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/semi-structured.docx),<br> [Sample white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/white-paper.pdf),<br> [Unstructured blog.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Introducing-surface-laptop-4-and-new-access.pdf),<br> [Unstructured white paper.pdf](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/sample-unstructured-paper.pdf)<br><br>**Multi-turn**:<br>[Surface Pro (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/multi-turn.docx)<br>[Contoso Benefits (docx)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.docx)<br>[Contoso Benefits (pdf)](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-ContosoBenefits.pdf)| |*Excel|Structured QnA file<br> (including RTF, HTML support)|**Without Multi-turn**:<br>[Sample QnA FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/QnA%20Maker%20Sample%20FAQ.xlsx)<br><br>**Multi-turn**:<br>[Structured simple 
FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Structured-multi-turn-format.xlsx)<br>[Surface laptop FAQ.xls](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Multiturn-Surface-Pro.xlsx)| |*TXT/TSV|Structured QnA file|[Sample chit-chat.tsv](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/qna-maker/data-source-formats/Scenario_Responses_Friendly.tsv)|
cognitive-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/responsible-use-of-ai-overview.md
Azure Cognitive Services provides information and guidelines on how to responsib
* [Disclosure of design patterns](./speech-service/concepts-disclosure-patterns.md) * [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context) * [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context)+
+## Anomaly Detector
+
+* [Transparency note and use cases](/legal/cognitive-services/anomaly-detector/transparency-note?context=/azure/cognitive-services/anomaly-detector/context/context)
+* [Characteristics and limitations](/legal/cognitive-services/anomaly-detector/characteristics-and-limitations?context=/azure/cognitive-services/anomaly-detector/context/context)
+* [Integration and responsible use](/legal/cognitive-services/anomaly-detector/guidance-integration-responsible-use?context=/azure/cognitive-services/anomaly-detector/context/context)
+* [Data, privacy, and security](/legal/cognitive-services/anomaly-detector/data-privacy-security?context=/azure/cognitive-services/anomaly-detector/context/context)
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Azure Communication Services is committed to helping our customers meet their pr
## Data residency
-When creating an Communication Services resource, you specify a **geography** (not an Azure data center). All data stored by Communication Services at rest will be retained in that geography, in a data center selected internally by Communication Services. Data may transit or be processed in other geographies. These global endpoints are necessary to provide a high-performance, low-latency experience to end-users no matter their location.
+When creating a Communication Services resource, you specify a **geography** (not an Azure data center). All chat messages and resource data stored by Communication Services at rest will be retained in that geography, in a data center selected internally by Communication Services. Data may transit or be processed in other geographies. These global endpoints are necessary to provide a high-performance, low-latency experience to end users no matter their location.
## Data residency and events
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-endpoint.md
+
+ Title: Custom Teams endpoint
+
+description: Building Custom Teams endpoint
+++++ Last updated : 05/31/2021+++
+# Custom Teams endpoint
+
+> [!IMPORTANT]
+> To enable/disable the custom Teams endpoint experience, complete [this form](https://forms.office.com/r/B8p5KqCH19).
+
+Azure Communication Services can be used to build custom Teams endpoints. With Azure Communication Services SDKs, you can customize the voice, video, chat, and screen-sharing experience for Teams users. Custom Teams endpoints can communicate with the Microsoft Teams client or with other custom Teams endpoints.
+
+You can use the Azure Communication Services Identity SDK to exchange AAD user tokens for Teams access tokens. The following diagrams demonstrate a multitenant use case, where Fabrikam is a customer of the company Contoso.
+
+## Calling
+
+Voice, video, and screen sharing capabilities are provided via Azure Communication Services Calling SDKs. The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with custom Teams endpoints.
+
+![Process to enable calling feature for custom Teams endpoint experience](./media/teams-identities/teams-identity-calling-overview.png)
+
+## Chat
+
+You can also use custom Teams endpoints to integrate chat capabilities by using Graph APIs. Learn more about the Graph API in [the documentation](https://docs.microsoft.com/graph/api/channel-post-messages).
++
+![Process to enable chat feature for custom Teams endpoint experience](./media/teams-identities/teams-identity-chat-overview.png)
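As a sketch of what such an integration might look like (the team ID, channel ID, and access token below are placeholders, and the request isn't actually sent), the Graph endpoint linked above takes a JSON body holding the message content:

```python
import json

# Placeholder values -- substitute real IDs and a Graph access token from your AAD flow.
team_id = "<team-id>"
channel_id = "<channel-id>"
access_token = "<graph-access-token>"

# Microsoft Graph endpoint for posting a message to a channel
# (see the channel-post-messages documentation linked in the article).
url = f"https://graph.microsoft.com/v1.0/teams/{team_id}/channels/{channel_id}/messages"

headers = {
    "Authorization": f"Bearer {access_token}",
    "Content-Type": "application/json",
}

# The message payload: a chatMessage with a body.content field.
body = json.dumps({"body": {"content": "Hello from a custom Teams endpoint"}})

print(url)
print(body)
```

You would POST this body to the URL above with an HTTP client of your choice; the token must carry the appropriate Graph permissions.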
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Issue Teams access token](../quickstarts/manage-teams-identity.md)
+
+The following documents may be interesting to you:
+
+- Learn about [Teams interoperability](./teams-interop.md)
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
+
+ Title: Set up and create Teams access tokens
+
+description: Building service providing Teams access tokens
+++++ Last updated : 05/31/2021+++
+# Quickstart: Set up and manage Teams access tokens
+
+> [!IMPORTANT]
+> To enable/disable custom Teams endpoint experience, complete [this form](https://forms.office.com/r/B8p5KqCH19).
+
+In this quickstart, we'll build a .NET console application to authenticate an AAD user token using the MSAL library. We'll then exchange that token for a Teams access token with the Azure Communication Services Identity SDK. The Teams access token can then be used by the Azure Communication Services Calling SDK to build a custom Teams endpoint.
+
+> [!NOTE]
+> In production environments, we recommend implementing this exchange mechanism in backend services, as requests for exchange are signed with a secret.
++
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An active Communication Services resource and connection string. [Create a Communication Services resource](./create-communication-resource.md).
+- Enable custom Teams endpoint experience via [this form](https://forms.office.com/r/B8p5KqCH19)
+- An Azure Active Directory tenant with users that have a Teams license
+
+## Introduction
+
+Teams identities are bound to tenants in Azure Active Directory. Your application can be used by users from the same or any tenant. In this quickstart, we'll work through a multitenant use case with multiple actors: users, developers, and admins from fictional companies Contoso and Fabrikam. In this use case, Contoso is a company building a SaaS solution for Fabrikam.
+
+The following sections will guide you through the steps for admins, developers, and users. The diagrams demonstrate the multitenant use case. If you're working with a single tenant, execute all steps from Contoso and Fabrikam in that single tenant.
+
+## Admin actions
+
+The Administrator role has extended permissions in AAD. Members of this role can provision resources and read information from the Azure portal. The following diagram shows all actions that have to be executed by Admins.
+
+![Admin actions to enable custom Teams endpoint experience](./media/teams-identities/teams-identity-admin-overview.png)
+
+1. Contoso's Admin creates or selects an existing *Application* in Azure Active Directory. The *Supported account types* property defines whether users from a different tenant can authenticate to the *Application*. The *Redirect URI* property redirects a successful authentication request to Contoso's *Server*.
+1. Contoso's Admin extends the *Application*'s manifest with the Azure Communication Services VoIP permission.
+1. Contoso's Admin enables the experience via [this form](https://forms.office.com/r/B8p5KqCH19)
+1. Contoso's Admin creates or selects an existing Communication Services resource, which will be used to authenticate the exchange requests. AAD user tokens will be exchanged for Teams access tokens. You can read more about the creation of [new Azure Communication Services resources here](./create-communication-resource.md).
+1. Fabrikam's Admin provisions a new service principal for Azure Communication Services in Fabrikam's tenant
+1. Fabrikam's Admin grants the Azure Communication Services VoIP permission to Contoso's *Application*. This step is required only if Contoso's *Application* isn't verified.
+
+### 1. Create AAD application registration or select AAD application
+
+Users must be authenticated against AAD applications with the Azure Communication Services `VoIP` permission. If you don't have an existing application that you would like to use for this quickstart, you can create a new application registration.
+
+The following application settings influence the experience:
+- The *Supported account types* property defines whether the *Application* is single tenant ("Accounts in this organizational directory only") or multitenant ("Accounts in any organizational directory"). For this scenario, you can use multitenant.
+- The *Redirect URI* defines the URI where the authentication request is redirected after authentication. For this scenario, you can use "Public client/native(mobile & desktop)" and fill in "http://localhost" as the URI.
+
+For detailed documentation, see [Register an application](https://docs.microsoft.com/azure/active-directory/develop/quickstart-register-app#register-an-application).
+
+When the *Application* is registered, you'll see an identifier in the overview. This identifier, the **Application (client) ID**, will be used in the following steps.
+
+### 2. Allow public client flows
+
+In the *Authentication* pane of your *Application*, you can see the configured platform for *Public client/native(mobile & desktop)* with a Redirect URI pointing to *localhost*. At the bottom of the screen is the *Allow public client flows* toggle, which for this quickstart will be set to **Yes**.
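If you prefer to set this in the application manifest instead of the portal toggle, the setting corresponds (as far as the standard AAD app manifest goes) to the `allowPublicClient` property:

```json
{
  "allowPublicClient": true
}
```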
+
+### 3. Verify application (Optional)
+In the *Branding* pane, you can verify your platform within the Microsoft identity platform. This one-time process removes the requirement for Fabrikam's admin to give admin consent to this application. You can find details on how to verify your application [here](https://docs.microsoft.com/azure/active-directory/develop/howto-configure-publisher-domain).
+
+### 4. Define Azure Communication Services' VoIP permission in application
+
+Go to the details of the *Application* and select the "Manifest" pane. In the manifest, find the property called *"requiredResourceAccess"*. It's an array of objects that defines the *Application's* permissions. Extend the manifest with the VoIP permission for the first-party application Azure Communication Services. Add the following object to the array.
+
+> [!NOTE]
+> Do not change the GUIDs in the snippet, as they uniquely identify the application and permissions.
+
+```json
+{
+ "resourceAppId": "1fd5118e-2576-4263-8130-9503064c837a",
+ "resourceAccess": [
+ {
+ "id": "31f1efa3-6f54-4008-ac59-1bf1f0ff9958",
+ "type": "Scope"
+ }
+ ]
+}
+```
+
+Then select the *Save* button to persist the changes. You can now see the Azure Communication Services - VoIP permission in the *API Permissions* pane.
+
+### 5. Enable custom Teams endpoint experience for *Application*
+
+The AAD Admin fills in the following [form](https://forms.office.com/r/B8p5KqCH19) to enable the custom Teams endpoint experience for the *Application*.
+
+### 6. Create or select Communication Services resource
+
+Your Azure Communication Services resource will be used to authenticate all requests for exchanging AAD user tokens for Teams access tokens. This exchange can be triggered via the Azure Communication Services Identity SDK, which is authenticated with an access key or Azure RBAC. You can get the access key in the Azure portal or configure Azure RBAC via the *Access control (IAM)* pane.
+
+If you want to create a new Communication Services resource, [follow this guide](./create-communication-resource.md).
+
+### 7. Provision Azure Communication Services service principal
+
+To enable the custom Teams endpoint experience in Fabrikam's tenant, Fabrikam's AAD admin must provision a service principal named Azure Communication Services with the Application ID *1fd5118e-2576-4263-8130-9503064c837a*. If you don't see this application in the Enterprise applications pane in Azure Active Directory, it has to be added manually.
+
+Fabrikam's AAD Admin connects to the Azure tenant via PowerShell.
+
+> [!NOTE]
+> Replace [Tenant_ID] with the ID of your tenant, which can be found in the Azure portal on the overview page of the AAD.
+
+```azurepowershell
+Connect-AzureAD -TenantId "[Tenant_ID]"
+```
+
+If the command isn't found, the AzureAD module isn't installed in your PowerShell. Close PowerShell, run it with administrator rights, and install the AzureAD package with the following command:
+
+```azurepowershell
+Install-Module AzureAD
+```
+
+After connecting and authenticating to Azure, run the following command to provision the Communication Services service principal.
+
+> [!NOTE]
+> The AppId parameter refers to the first-party application Azure Communication Services. Don't change this value.
+
+```azurepowershell
+New-AzureADServicePrincipal -AppId "1fd5118e-2576-4263-8130-9503064c837a"
+```
+
+### 8. Provide admin consent
+
+If Contoso's *Application* isn't verified, the AAD admin must grant Contoso's *Application* the Azure Communication Services VoIP permission. Fabrikam's AAD admin provides consent via a unique link. To construct the admin consent link, follow these instructions:
+
+1. Take the following link: *https://login.microsoftonline.com/{Tenant_ID}/adminconsent?client_id={Application_ID}*
+1. Replace {Tenant_ID} with Fabrikam's tenant ID
+1. Replace {Application_ID} with Contoso's Application ID
+1. Fabrikam's AAD Admin navigates to the link in the browser.
+1. Fabrikam's AAD admin logs in and grants permissions on behalf of the organization
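The link construction in the steps above can be sketched as follows; the tenant and application IDs here are placeholders:

```python
# Placeholder IDs -- substitute Fabrikam's tenant ID and Contoso's Application (client) ID.
tenant_id = "<Fabrikam_Tenant_ID>"
application_id = "<Contoso_Application_ID>"

# Build the admin consent link from the template given in the steps above.
consent_link = (
    f"https://login.microsoftonline.com/{tenant_id}"
    f"/adminconsent?client_id={application_id}"
)
print(consent_link)
```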
+
+If consent is granted, a service principal for Contoso's *Application* is created in Fabrikam's tenant. Fabrikam's admin can review the consent in AAD:
+
+1. Sign in to the Azure portal as Admin
+1. Go to Azure Active Directory
+1. Go to "Enterprise applications" pane
+1. Set filter "Application type" to "All applications"
+1. In the field to filter applications, insert the name of the Contoso's application
+1. Select "Apply" to filter results
+1. Select the service principal with the required name
+1. Go to *Permissions* pane
+
+You can see that the Azure Communication Services VoIP permission now has the status *Granted for {Directory_name}*.
+
+## Developer actions
+
+Contoso's developer needs to set up the *Client application* to authenticate users. The developer then needs to create an endpoint on the backend *Server* to process the AAD user token after redirection. When the AAD user token is received, it's exchanged for a Teams access token and returned to the *Client application*. The actions needed from developers are shown in the following diagram:
+
+![Developer actions to enable custom Teams endpoint experience](./media/teams-identities/teams-identity-developer-overview.png)
+
+1. Contoso's developer configures the MSAL library to authenticate the user for the *Application* created in the previous steps by the Admin for the Azure Communication Services VoIP permission
+1. Contoso's developer initializes the ACS Identity SDK and exchanges the incoming AAD user token for a Teams access token via the SDK. The Teams access token is then returned to the *Client application*.
+
+Microsoft Authentication Library (MSAL) enables developers to acquire AAD user tokens from the Microsoft identity platform endpoint to authenticate users and access secure web APIs. It can be used to provide secure access to Azure Communication Services. MSAL supports many different application architectures and platforms including .NET, JavaScript, Java, Python, Android, and iOS.
+
+You can find more details on how to set up different environments in the public documentation: [Microsoft Authentication Library (MSAL) overview](https://docs.microsoft.com/azure/active-directory/develop/msal-overview).
+
+> [!NOTE]
+> The following sections describe how to exchange an AAD access token for a Teams access token in a .NET console application.
+
+### Create new application
+
+In a console window (such as cmd, PowerShell, or Bash), use the dotnet new command to create a new console app with the name *TeamsAccessTokensQuickstart*. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
+
+```console
+dotnet new console -o TeamsAccessTokensQuickstart
+```
+
+Change your directory to the newly created app folder and use the dotnet build command to compile your application.
+
+```console
+cd TeamsAccessTokensQuickstart
+dotnet build
+```
+#### Install the package
+While still in the application directory, install the Azure Communication Services Identity library for .NET package by using the dotnet add package command.
+
+```console
+dotnet add package Azure.Communication.Identity
+dotnet add package Microsoft.Identity.Client
+```
+
+#### Set up the app framework
+
+From the project directory:
+
+- Open Program.cs file in a text editor
+- Add a using directive to include following namespaces:
+ - Azure.Communication
+ - Azure.Communication.Identity
+ - Microsoft.Identity.Client
+- Update the Main method declaration to support async code
+
+Use the following code to begin:
+
+```csharp
+using System;
+using System.Text;
+using Azure.Communication;
+using Azure.Communication.Identity;
+using Microsoft.Identity.Client;
+
+namespace TeamsAccessTokensQuickstart
+{
+ class Program
+ {
+ static async System.Threading.Tasks.Task Main(string[] args)
+ {
+ Console.WriteLine("Azure Communication Services - Teams access tokens quickstart");
+
+ // Quickstart code goes here
+ }
+ }
+}
+```
+
+### 1. Receive AAD user token via MSAL library
+
+Use the MSAL library to authenticate the user against AAD for Contoso's *Application* with the Azure Communication Services VoIP permission. Configure the client for Contoso's *Application* (parameter *applicationId*) in the public cloud (parameter *authority*). The AAD user token will be returned to the redirect URI (parameter *redirectUri*). Credentials will be taken from an interactive pop-up window that opens in your default browser.
+
+> [!NOTE]
+> The Redirect URI has to match the value defined in the *Application*. Check the first step in the Admin guide to see how to configure the Redirect URI.
+
+```csharp
+const string applicationId = "Contoso's_Application_ID";
+const string authority = "https://login.microsoftonline.com/common";
+const string redirectUri = "http://localhost";
+
+var client = PublicClientApplicationBuilder
+ .Create(applicationId)
+ .WithAuthority(authority)
+ .WithRedirectUri(redirectUri)
+ .Build();
+
+const string scope = "https://auth.msft.communication.azure.com/VoIP";
+
+var aadUserToken = await client.AcquireTokenInteractive(new[] { scope }).ExecuteAsync();
+
+Console.WriteLine("\nAuthenticated user: " + aadUserToken.Account.Username);
+Console.WriteLine("AAD user token expires on: " + aadUserToken.ExpiresOn);
+```
+
+The variable *aadUserToken* now carries a valid Azure Active Directory user token that will be used for the exchange.
+
+### 2. Exchange AAD user token for Teams access token
+
+A valid AAD user token authenticates the user against AAD for a third-party application with the Azure Communication Services VoIP permission. The following code uses the ACS Identity SDK to exchange the AAD user token for a Teams access token.
+
+> [!NOTE]
+> Replace the value "&lt;Connection-String&gt;" with a valid connection string, or use Azure RBAC for authentication. You can find more details in [this quickstart](./access-tokens.md).
+
+```csharp
+var identityClient = new CommunicationIdentityClient("<Connection-String>");
+var teamsAccessToken = identityClient.ExchangeTeamsToken(aadUserToken.AccessToken);
+
+Console.WriteLine("\nTeams access token expires on: " + teamsAccessToken.Value.ExpiresOn);
+```
+
+If all conditions defined in the prerequisites are met, you get a Teams access token that is valid for 24 hours.
+
+#### Run the code
+Run the application from your application directory with the `dotnet run` command.
+
+```console
+dotnet run
+```
+
+The output of the app describes each action that is completed:
+
+```console
+Azure Communication Services - Teams access tokens quickstart
+
+Authenticated user: john.smith@contoso.com
+AAD user token expires on: 6/10/2021 10:13:17 AM +00:00
+
+Teams access token expires on: 6/11/2021 9:13:18 AM +00:00
+```
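Both expiry times in the sample output come from the tokens themselves: the AAD user token and the Teams access token are JWTs that carry their expiry in the standard `exp` claim as Unix seconds. As a side note to the quickstart, the following Java sketch shows how that claim can be read; the `JwtExpiry` class and the dummy token are illustrative, not part of any SDK:

```java
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JwtExpiry {
    // A JWT has three base64url segments: header.payload.signature.
    // The payload's "exp" claim holds the expiry as Unix seconds.
    public static Instant expiresOn(String jwt) {
        String payload = jwt.split("\\.")[1];
        String json = new String(Base64.getUrlDecoder().decode(payload), StandardCharsets.UTF_8);
        // Minimal claim extraction; real code should use a JSON parser.
        Matcher m = Pattern.compile("\"exp\"\\s*:\\s*(\\d+)").matcher(json);
        if (!m.find()) throw new IllegalArgumentException("token has no exp claim");
        return Instant.ofEpochSecond(Long.parseLong(m.group(1)));
    }

    public static void main(String[] args) {
        // Dummy token whose payload encodes {"exp":1623402798}, matching the
        // Teams access token expiry shown in the sample output above.
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"exp\":1623402798}".getBytes(StandardCharsets.UTF_8));
        System.out.println(expiresOn("header." + payload + ".signature")); // prints 2021-06-11T09:13:18Z
    }
}
```

In practice the SDKs already surface this value (`aadUserToken.ExpiresOn` and `teamsAccessToken.Value.ExpiresOn` above), so decoding the claim by hand is only useful for inspection or debugging.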
+
+## User actions
+
+*User* represents Fabrikam's users of Contoso's *Application*. The user experience is shown in the following diagram.
+
+![User actions to enable custom Teams endpoint experience](./media/teams-identities/teams-identity-user-overview.png)
+
+1. Fabrikam's user uses Contoso's *Client application* and is prompted to authenticate.
+1. Contoso's *Client application* uses the MSAL library to authenticate the user against Fabrikam's Azure Active Directory tenant for Contoso's *Application* with the Azure Communication Services VoIP permission.
+1. Authentication is redirected to the *Server*, as defined by the *Redirect URI* property in MSAL and in Contoso's *Application*.
+1. Contoso's *Server* exchanges the AAD user token for a Teams access token by using the ACS identity SDK, and returns the Teams access token to the *Client application*.
+
+With a valid Teams access token in the *Client application*, developers can integrate the ACS calling SDK and build a custom Teams endpoint.
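The division of responsibility described above (MSAL in the *Client application*, token exchange in the *Server* that holds the Communication Services connection string) can be sketched with two minimal Java types. The names are illustrative only, not part of any SDK:

```java
// Step 4 of the diagram: the server-side exchange. In the quickstart this is
// where CommunicationIdentityClient and the account connection string live;
// the connection string must never ship inside the client application.
interface TokenExchangeServer {
    String exchangeForTeamsToken(String aadUserToken);
}

// Steps 1-3 happen in the client: MSAL acquires the AAD user token, and the
// client then sends it to the server to obtain a Teams access token.
class ClientApplication {
    private final TokenExchangeServer server;

    ClientApplication(TokenExchangeServer server) {
        this.server = server;
    }

    String acquireTeamsToken(String aadUserToken) {
        return server.exchangeForTeamsToken(aadUserToken);
    }
}
```

Wiring in a stub server, for example `new ClientApplication(t -> "teams:" + t)`, exercises the round trip without any Azure calls; a production `TokenExchangeServer` implementation would perform the exchange with the ACS identity SDK as shown earlier.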
++
+## Next steps
+
+In this quickstart, you learned how to:
+
+> [!div class="checklist"]
+> * Create and configure an *Application* in Azure Active Directory
+> * Use the MSAL library to issue an Azure Active Directory user token
+> * Use the ACS identity SDK to exchange the Azure Active Directory user token for a Teams access token
+
+The following articles may be of interest to you:
+
+- Learn about [custom Teams endpoint](../concepts/teams-endpoint.md)
+- Learn about [Teams interoperability](../concepts/teams-interop.md)
container-instances Container Instances Samples Rm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-samples-rm.md
You have several options for deploying resources with Resource Manager templates
[REST API][deploy-rest] <!-- LINKS - External -->
-[app-nav]: https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-dynamicsnav
-[app-wp]: https://github.com/Azure/azure-quickstart-templates/tree/master/201-aci-wordpress
+[app-nav]: https://github.com/Azure/azure-quickstart-templates/tree/master/demos/aci-dynamicsnav
+[app-wp]: https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/wordpress/aci-wordpress
[az-files]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-storage-file-share [net-publicip]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-linuxcontainer-public-ip [net-udp]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-udp
-[net-vnet]: https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-vnet
+[net-vnet]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
[repo]: https://github.com/Azure/azure-quickstart-templates [vol-emptydir]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-linuxcontainer-volume-emptydir [vol-gitrepo]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-linuxcontainer-volume-gitrepo
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
The subnet that you use for container groups may contain only container groups.
A network profile is a network configuration template for Azure resources. It specifies certain network properties for the resource, for example, the subnet into which it should be deployed. When you first use the [az container create][az-container-create] command to deploy a container group to a subnet (and thus a virtual network), Azure creates a network profile for you. You can then use that network profile for future deployments to the subnet.
-To use a Resource Manager template, YAML file, or a programmatic method to deploy a container group to a subnet, you need to provide the full Resource Manager resource ID of a network profile. You can use a profile previously created using [az container create][az-container-create], or create a profile using a Resource Manager template (see [template example](https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-vnet) and [reference](/azure/templates/microsoft.network/networkprofiles)). To get the ID of a previously created profile, use the [az network profile list][az-network-profile-list] command.
+To use a Resource Manager template, YAML file, or a programmatic method to deploy a container group to a subnet, you need to provide the full Resource Manager resource ID of a network profile. You can use a profile previously created using [az container create][az-container-create], or create a profile using a Resource Manager template (see [template example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet) and [reference](/azure/templates/microsoft.network/networkprofiles)). To get the ID of a previously created profile, use the [az network profile list][az-network-profile-list] command.
In the following diagram, several container groups have been deployed to a subnet delegated to Azure Container Instances. Once you've deployed one container group to a subnet, you can deploy additional container groups to it by specifying the same network profile.
In the following diagram, several container groups have been deployed to a subne
## Next steps * For deployment examples with the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md).
-* To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-vnet
+* To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
). * When using the [Azure portal](container-instances-quickstart-portal.md) to create a container instance, you can also provide settings for a new or existing virtual network on the **Networking** tab.
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-vnet.md
The log output should show that `wget` was able to connect and download the inde
### Example - YAML
-You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-vnet
+You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
), or another programmatic method such as with the Python SDK. For example, when using a YAML file, you can deploy to a virtual network with a subnet delegated to Azure Container Instances. Specify the following properties:
az network vnet delete --resource-group $RES_GROUP --name aci-vnet
## Next steps
-To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/101-aci-vnet
+To deploy a new virtual network, subnet, network profile, and container group using a Resource Manager template, see [Create an Azure container group with VNet](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet
). <!-- IMAGES -->
cosmos-db Cassandra Kafka Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cassandra-kafka-connect.md
select * from weather.data_by_station where station_id IN ('station-2', 'station
## Clean up resources ## Next steps
cosmos-db Create Cassandra Api Account Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-api-account-java.md
This tutorial covers the following tasks:
## Create a database account ## Get the connection details of your account
cosmos-db Create Cassandra Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-dotnet-core.md
In addition, you need:
<a id="create-account"></a> ## Create a database account ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-dotnet.md
In addition, you need:
<a id="create-account"></a> ## Create a database account ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-go.md
Azure Cosmos DB is a multi-model database service that lets you quickly create a
Before you can create a database, you need to create a Cassandra account with Azure Cosmos DB. ## Clone the sample application
go run main.go
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-java-v4.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
Before you can create a document database, you need to create a Cassandra account with Azure Cosmos DB. ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-java.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
Before you can create a document database, you need to create a Cassandra account with Azure Cosmos DB. ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-nodejs.md
In addition, you need:
Before you can create a document database, you need to create a Cassandra account with Azure Cosmos DB. ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cassandra Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cassandra-python.md
In this quickstart, you create an Azure Cosmos DB Cassandra API account, and use
Before you can create a document database, you need to create a Cassandra account with Azure Cosmos DB. ## Clone the sample application
Now go back to the Azure portal to get your connection string information and co
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Cosmosdb Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-cosmosdb-resources-portal.md
This quickstart demonstrates how to use the Azure portal to create an Azure Cosm
An Azure subscription or free Azure Cosmos DB trial account - [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+- [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
<a id="create-account"></a> ## Create an Azure Cosmos DB account <a id="create-container-database"></a> ## Add a database and a container
Add data to your new database using Data Explorer.
## Query your data ## Clean up resources If you wish to delete just the database and use the Azure Cosmos account in future, you can delete the database with the following steps:
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-dotnet.md
If you don't already have Visual Studio 2019 installed, you can download and use
## Create a database account ## Add a graph ## Clone the sample application
You can now go back to Data Explorer in the Azure portal and browse and query yo
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Graph Gremlin Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-gremlin-console.md
You also need to install the [Gremlin Console](https://tinkerpop.apache.org/down
## Create a database account ## Add a graph ## <a id="ConnectAppService"></a>Connect to your app service/Graph
Congratulations! You've completed this Azure Cosmos DB: Gremlin API tutorial!
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-java.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB. ## Add a graph ## Clone the sample application
That completes the resource creation part of this tutorial. You can continue to
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Graph Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-nodejs.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
## Create a database account ## Add a graph ## Clone the sample application
Try completing `g.V()` with `.has('firstName', 'Thomas')` to test the filter. No
## Review SLAs in the Azure portal ## Clean up your resources ## Next steps
cosmos-db Create Graph Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-php.md
In addition:
Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB. ## Add a graph ## Clone the sample application
You can now go back to Data Explorer and see the vertices added to the graph, an
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-graph-python.md
In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API
Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB. ## Add a graph ## Clone the sample application
That completes the resource creation part of this tutorial. You can continue to
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Mongodb Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-dotnet.md
If you don't already have Visual Studio, download [Visual Studio 2019 Community
<a id="create-account"></a> ## Create a database account The sample described in this article is compatible with MongoDB.Driver version 2.6.1.
You've now updated your app with all the info it needs to communicate with Cosmo
--> ## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Mongodb Go https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-go.md
The `todo` you just deleted should not be present
## Clean up resources ## Next steps
cosmos-db Create Mongodb Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-java.md
In this quickstart, you create and manage an Azure Cosmos DB for MongoDB API acc
## Create a database account ## Add a collection Name your new database **db**, and your new collection **coll**. ## Clone the sample application
You can now use [Robomongo](mongodb-robomongo.md) / [Studio 3T](mongodb-mongoche
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-nodejs.md
git commit -m "configured MongoDB connection string"
``` ## Clean up resources ## Next steps
cosmos-db Create Mongodb Rust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-rust.md
fn delete_todo(self, todo_id: &str) {
## Clean up resources ## Next steps
cosmos-db Create Mongodb Xamarin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-mongodb-xamarin.md
If you prefer to work on a Mac, download [Visual Studio for Mac](https://visuals
## Create a database account The sample described in this article is compatible with MongoDB.Driver version 2.6.1.
You've now updated your app with all the info it needs to communicate with Azure
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-java.md
As items are inserted into a Cosmos DB container, the database grows horizontall
Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB. ## Add a container <a id="add-sample-data"></a> ## Add sample data ## Query your data ## Clone the sample application
Now go back to the Azure portal to get your connection string information and la
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-nodejs.md
The "try Azure Cosmos DB for free" option doesn't require an Azure subscription
## Add a container ## Add sample data ## Query your data ## Clone the sample application
You can continue to experiment with this sample application or go back to Data E
## Review SLAs in the Azure portal ## Next steps
cosmos-db Create Sql Api Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-python.md
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
## Create a database account ## Add a container ## Add sample data ## Query your data ## Clone the sample application
The following snippets are all taken from the *cosmos_get_started.py* file.
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-spring-data.md
As items are inserted into a Cosmos DB container, the database grows horizontall
Before you can create a document database, you need to create a SQL API account with Azure Cosmos DB. ## Add a container <a id="add-sample-data"></a> ## Add sample data ## Query your data ## Clone the sample application
Now go back to the Azure portal to get your connection string information and la
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Sql Api Xamarin Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-xamarin-dotnet.md
If you are developing on Windows and don't already have Visual Studio 2019 insta
If you are using a Mac, you can download the **free** [Visual Studio for Mac](https://www.visualstudio.com/vs/mac/). [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a database account ## Add a container ## Add sample data ## Query your data ## Clone the sample application
Go back to the Azure portal to get your API key information and copy it into the
public static readonly string CosmosAuthKey = "[PRIMARY KEY copied from Azure portal"; ``` ## Review the code
The following steps will demonstrate how to run the app using the Visual Studio
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-table-dotnet.md
If you donΓÇÖt already have Visual Studio 2019 installed, you can download and u
## Create a database account ## Add a table ## Add sample data ## Clone the sample application
You've now updated your app with all the info it needs to communicate with Azure
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Table Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-table-java.md
In this quickstart, you create an Azure Cosmos DB Table API account, and use Dat
> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs. > ## Add a table ## Add sample data ## Clone the sample application
You've now updated your app with all the info it needs to communicate with Azure
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Create Table Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-table-nodejs.md
In this quickstart, you create an Azure Cosmos DB Table API account, and use Dat
> You need to create a new Table API account to work with the generally available Table API SDKs. Table API accounts created during preview are not supported by the generally available SDKs. > ## Add a table ## Add sample data ## Clone the sample application
You've now updated your app with all the info it needs to communicate with Azure
## Review SLAs in the Azure portal ## Clean up resources ## Next steps
cosmos-db Enable Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/enable-notebooks.md
Starting February 10, 2021, new Azure Cosmos accounts created in one of the [sup
1. Select **Go to resource** to go to the Azure Cosmos DB account page.
- :::image type="content" source="../../includes/media/cosmos-db-create-dbaccount/azure-cosmos-db-account-created-3.png" alt-text="The Azure Cosmos DB account page":::
+ :::image type="content" source="includes/media/cosmos-db-create-dbaccount/azure-cosmos-db-account-created-3.png" alt-text="The Azure Cosmos DB account page":::
1. Navigate to the **Data Explorer** pane. You should now see your notebooks workspace.
cosmos-db How To Manage Database Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-manage-database-account.md
This article describes how to manage various tasks on an Azure Cosmos account us
### <a id="create-database-account-via-portal"></a>Azure portal ### <a id="create-database-account-via-cli"></a>Azure CLI
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 06/08/2021 Last updated : 06/15/2021
az cosmosdb sql role definition list --account-name $accountName --resource-grou
### Using Azure Resource Manager templates
-See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sqlresources2/create-update-sql-role-definition) for a reference and examples of using Azure Resource Manager templates to create role definitions.
+See [this page](/rest/api/cosmos-db-resource-provider/2021-04-15/sqlresources2/create-update-sql-role-definition) for a reference and examples of using Azure Resource Manager templates to create role definitions.
## <a id="role-assignments"></a> Create role assignments
az cosmosdb sql role assignment create --account-name $accountName --resource-gr
### Using Azure Resource Manager templates
-See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sqlresources2/create-update-sql-role-assignment) for a reference and examples of using Azure Resource Manager templates to create role assignments.
+See [this page](/rest/api/cosmos-db-resource-provider/2021-04-15/sqlresources2/create-update-sql-role-assignment) for a reference and examples of using Azure Resource Manager templates to create role assignments.
## Initialize the SDK with Azure AD
This additional information flows in the **DataPlaneRequests** log category and
- `aadPrincipalId_g` shows the principal ID of the AAD identity that was used to authenticate the request. - `aadAppliedRoleAssignmentId_g` shows the [role assignment](#role-assignments) that was honored when authorizing the request.
+## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method
+
+In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
+
+### Using Azure Resource Manager templates
+
+When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
+
+```json
+"resources": [
+ {
+ "type": "Microsoft.DocumentDB/databaseAccounts",
+ "properties": {
+ "disableLocalAuth": true,
+ // ...
+ },
+ // ...
+ },
+ // ...
+ ]
+```
+
## Limits
- You can create up to 100 role definitions and 2,000 role assignments per Azure Cosmos DB account.
Yes.
### Is it possible to disable the usage of the account primary/secondary keys when using RBAC?
-Disabling the account primary/secondary keys is not currently possible.
+Yes, see [Enforcing RBAC as the only authentication method](#disable-local-auth).
## Next steps
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
+### 2.14.0 (15 June 2021)
+
+ - This release updates the local Data Explorer content to the latest Azure portal version. It also addresses a known issue when importing multiple document items by using the JSON file upload feature.
+ ### 2.11.13 (21 April 2021) - This release updates the local Data Explorer content to latest Azure Portal version and adds a new MongoDB endpoint configuration, "4.0".
cosmos-db Migrate Java V4 Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/migrate-java-v4-sdk.md
Previously updated : 06/13/2021 Last updated : 06/15/2021
The following table lists different Azure Cosmos DB Java SDKs, the package name
| Java SDK| Release Date | Bundled APIs | Maven Jar | Java package name |API Reference | Release Notes | Retire date | |-||--|--|--|-||--|
-| Async 2.x.x | June 2018 | Async(RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | August 30, 2024 |
+| Async 2.x.x | June 2018 | Async(RxJava) | `com.microsoft.azure::azure-cosmosdb` | `com.microsoft.azure.cosmosdb.rx` | [API](https://azure.github.io/azure-cosmosdb-jav) | - |
| Sync 2.x.x | Sept 2018 | Sync | `com.microsoft.azure::azure-documentdb` | `com.microsoft.azure.cosmosdb` | [API](https://azure.github.io/azure-cosmosdb-jav) | February 29, 2024 |
-| 3.x.x | July 2019 | Async(Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | August 30, 2024 |
-| 4.0 | June 2020 | Async(Reactor)/Sync | `com.azure::azure-cosmos` | `com.azure.cosmos` | - | [API](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-cosmos/4.0.1/index.html) | - |
+| 3.x.x | July 2019 | Async(Reactor)/Sync | `com.microsoft.azure::azure-cosmos` | `com.azure.data.cosmos` | [API](https://azure.github.io/azure-cosmosdb-java/3.0.0/) | - | - |
+| 4.0 | June 2020 | Async(Reactor)/Sync | `com.azure::azure-cosmos` | `com.azure.cosmos` | [API](https://docs.microsoft.com/java/api/overview/azure/cosmosdb) | - | - |
## SDK level implementation changes
This is different from Azure Cosmos DB Java SDK 3.x.x which exposes a fluent int
### Create resources
-The following code snippet shows the differences in how resources are created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how resources are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
DocumentCollection documentCollection = new DocumentCollection();
documentCollection.setId("YourContainerName");
documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource();
```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Create Async client.
+// Building an async client is still a sync operation.
+AsyncDocumentClient client = new AsyncDocumentClient.Builder()
+ .withServiceEndpoint("your.hostname")
+ .withMasterKeyOrResourceToken("yourmasterkey")
+ .withConsistencyLevel(ConsistencyLevel.Eventual)
+ .build();
+// Create database with specified name
+Database database = new Database();
+database.setId("YourDatabaseName");
+client.createDatabase(database, new RequestOptions())
+ .flatMap(databaseResponse -> {
+ // Collection properties - name and partition key
+ DocumentCollection documentCollection = new DocumentCollection();
+ documentCollection.setId("YourContainerName");
+ PartitionKeyDefinition partitionKeyDefinition = new PartitionKeyDefinition();
+ partitionKeyDefinition.setPaths(java.util.Collections.singletonList("/id"));
+ documentCollection.setPartitionKey(partitionKeyDefinition);
+ // Create collection
+ return client.createCollection(databaseResponse.getResource().getSelfLink(), documentCollection, new RequestOptions());
+}).subscribe();
+
+```
+ ### Item operations
-The following code snippet shows the differences in how item operations are performed between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how item operations are performed between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
ResourceResponse<Document> documentResourceResponse = client.createDocument(docu
new RequestOptions(), true);
Document responseDocument = documentResourceResponse.getResource();
```
+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Collection is created. Generate many docs to insert.
+int number_of_docs = 50000;
+ArrayList<Document> docs = generateManyDocs(number_of_docs);
+// Insert many docs into collection...
+Observable.from(docs)
+ .flatMap(doc -> client.createDocument(createdCollection.getSelfLink(), doc, new RequestOptions(), false))
+ .subscribe(); // ...Subscribing triggers stream execution.
+```
+ ### Indexing
-The following code snippet shows the differences in how indexing is created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how indexing is created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
documentCollection.setId("YourContainerName");
documentCollection.setIndexingPolicy(indexingPolicy); documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource(); ```+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+// Custom indexing policy
+IndexingPolicy indexingPolicy = new IndexingPolicy();
+indexingPolicy.setIndexingMode(IndexingMode.Consistent); // To turn indexing off, set IndexingMode.None
+// Included paths
+List<IncludedPath> includedPaths = new ArrayList<>();
+IncludedPath includedPath = new IncludedPath();
+includedPath.setPath("/*");
+includedPaths.add(includedPath);
+indexingPolicy.setIncludedPaths(includedPaths);
+// Excluded paths
+List<ExcludedPath> excludedPaths = new ArrayList<>();
+ExcludedPath excludedPath = new ExcludedPath();
+excludedPath.setPath("/name/*");
+excludedPaths.add(excludedPath);
+indexingPolicy.setExcludedPaths(excludedPaths);
+// Create container with specified name and indexing policy
+DocumentCollection documentCollection = new DocumentCollection();
+documentCollection.setId("YourContainerName");
+documentCollection.setIndexingPolicy(indexingPolicy);
+client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).subscribe();
+```
+ ### Stored procedures
-The following code snippet shows the differences in how stored procedures are created between the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how stored procedures are created between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.
storedProcedureResponse.getStatusCode(), storedProcedureResponse.getRequestCharge())); ```+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+logger.info("Creating stored procedure...\n");
+String sprocId = "createMyDocument";
+String sprocBody = "function createMyDocument() {\n" +
+ "var documentToCreate = {\"id\":\"test_doc\"}\n" +
+ "var context = getContext();\n" +
+ "var collection = context.getCollection();\n" +
+ "var accepted = collection.createDocument(collection.getSelfLink(), documentToCreate,\n" +
+ " function (err, documentCreated) {\n" +
+ "if (err) throw new Error('Error' + err.message);\n" +
+ "context.getResponse().setBody(documentCreated.id)\n" +
+ "});\n" +
+ "if (!accepted) return;\n" +
+ "}";
+StoredProcedure storedProcedureDef = new StoredProcedure();
+storedProcedureDef.setId(sprocId);
+storedProcedureDef.setBody(sprocBody);
+StoredProcedure storedProcedure = client
+ .createStoredProcedure(documentCollection.getSelfLink(), storedProcedureDef, new RequestOptions())
+ .toBlocking()
+ .single()
+ .getResource();
+// ...
+logger.info(String.format("Executing stored procedure %s...\n\n", sprocId));
+RequestOptions options = new RequestOptions();
+options.setPartitionKey(new PartitionKey("test_doc"));
+StoredProcedureResponse storedProcedureResponse =
+ client.executeStoredProcedure(storedProcedure.getSelfLink(), options, null)
+ .toBlocking().single();
+logger.info(String.format("Stored procedure %s returned %s (HTTP %d), at cost %.3f RU.\n",
+ sprocId,
+ storedProcedureResponse.getResponseAsString(),
+ storedProcedureResponse.getStatusCode(),
+ storedProcedureResponse.getRequestCharge()));
+```
+ ### Change feed
ChangeFeedProcessor.Builder()
# [Java SDK 2.x.x Sync API](#tab/java-v2-sync) * This feature is not supported as of Java SDK v2 sync. +
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+* This feature is not supported as of Java SDK v2 async.
+ ### Container level Time-To-Live (TTL)
-The following code snippet shows the differences in how to create time to live for data in the container using the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how to configure Time to Live for data in the container between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
DocumentCollection documentCollection;
documentCollection.setDefaultTimeToLive(90 * 60 * 60 * 24); documentCollection = client.createCollection(database.getSelfLink(), documentCollection, new RequestOptions()).getResource(); ```+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+DocumentCollection collection = new DocumentCollection();
+// Create a new container with TTL enabled and a default expiration value
+collection.setDefaultTimeToLive(90 * 60 * 60 * 24);
+collection = client
+    .createCollection(database.getSelfLink(), collection, new RequestOptions())
+    .toBlocking()
+    .single()
+    .getResource();
+```
+ ### Item level Time-To-Live (TTL)
-The following code snippet shows the differences in how to create time to live for an item using the 4.0, 3.x.x Async APIs and 2.x.x Sync APIs:
+The following code snippet shows the differences in how to configure Time to Live for an item between the 4.0, 3.x.x Async, 2.x.x Sync, and 2.x.x Async APIs:
# [Java SDK 4.0 Async API](#tab/java-v4-async)
ResourceResponse<Document> documentResourceResponse = client.createDocument(docu
new RequestOptions(), true); Document responseDocument = documentResourceResponse.getResource(); ```+
+# [Java SDK 2.x.x Async API](#tab/java-v2-async)
+
+```java
+Document document = new Document();
+document.setId("YourDocumentId");
+document.setTimeToLive(60 * 60 * 24 * 30); // Expire document in 30 days
+ResourceResponse<Document> documentResourceResponse = client.createDocument(documentCollection.getSelfLink(), document,
+ new RequestOptions(), true).toBlocking().single();
+Document responseDocument = documentResourceResponse.getResource();
+```
+ ## Next steps
cosmos-db Mongodb Mongoose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-mongoose.md
Cosmos DB is Microsoft's globally distributed multi-model database service. You
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [Node.js](https://nodejs.org/) version v0.10.29 or higher.
Cosmos DB is Microsoft's globally distributed multi-model database service. You
Let's create a Cosmos account. If you already have an account you want to use, you can skip ahead to Set up your Node.js application. If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](local-emulator.md) to set up the emulator and skip ahead to Set up your Node.js application. ### Create a database In this application we will cover two ways of creating collections in Azure Cosmos DB:
As you can see, it is easy to work with Mongoose discriminators. So, if you have
## Clean up resources ## Next steps
cosmos-db Mongodb Readpreference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/mongodb-readpreference.md
This article shows how to globally distribute read operations with [MongoDB Read
## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. Refer to this [Quickstart](tutorial-global-distribution-mongodb.md) article for instructions on using the Azure portal to set up a Cosmos account with global distribution and then connect to it.
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/quick-create-template.md
An Azure subscription or free Azure Cosmos DB trial account
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+- [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
## Review the template
cosmos-db Sql Api Dotnet Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-dotnet-application.md
Before following the instructions in this article, make sure that you have the f
* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
-* Visual Studio 2019. [!INCLUDE [cosmos-db-emulator-vs](../../includes/cosmos-db-emulator-vs.md)]
+* Visual Studio 2019. [!INCLUDE [cosmos-db-emulator-vs](includes/cosmos-db-emulator-vs.md)]
All the screenshots in this article are from Microsoft Visual Studio Community 2019. If you use a different version, your screens and options may not match entirely. The solution should work if you meet the prerequisites.
All the screenshots in this article are from Microsoft Visual Studio Community 2
Let's start by creating an Azure Cosmos account. If you already have an Azure Cosmos DB SQL API account or if you're using the Azure Cosmos DB Emulator, skip to [Step 2: Create a new ASP.NET MVC application](#create-a-new-mvc-application). In the next section, you create a new ASP.NET Core MVC application.
cosmos-db Sql Api Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-dotnet-samples.md
An Azure subscription or free Cosmos DB trial account
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.-- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+- [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
> [!NOTE] > The samples are self-contained, and set up and clean up after themselves with multiple calls to [CreateDocumentCollectionAsync()](/dotnet/api/microsoft.azure.documents.client.documentclient.createdocumentcollectionasync). Each occurrence bills your subscription for one hour of usage in your collection's performance tier.
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-dotnet-v3sdk-samples.md
An Azure subscription or free Cosmos DB trial account
- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month, which you can use for paid Azure services.-- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+- [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
> [!NOTE] > The samples are self-contained, and set up and clean up after themselves. Each occurrence bills your subscription for one hour of usage in your container's performance tier.
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-get-started.md
Now let's get started!
* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/).
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
-* [!INCLUDE [cosmos-db-emulator-vs](../../includes/cosmos-db-emulator-vs.md)]
+* [!INCLUDE [cosmos-db-emulator-vs](includes/cosmos-db-emulator-vs.md)]
## Step 1: Create an Azure Cosmos DB account Let's create an Azure Cosmos DB account. If you already have an account you want to use, skip this section. To use the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](local-emulator.md) to set up the emulator. Then skip ahead to [Step 2: Set up your Visual Studio project](#SetupVS). ## <a id="SetupVS"></a>Step 2: Set up your Visual Studio project
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-application.md
Before you begin this application development tutorial, you must have the follow
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
* [Java Development Kit (JDK) 7+](/java/azure/jdk/). * [Eclipse IDE for Java EE Developers.](https://www.eclipse.org/downloads/packages/release/luna/sr1/eclipse-ide-java-ee-developers)
If you're installing these tools for the first time, coreservlets.com provides a
Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create the Java JSP application](#CreateJSP). ## <a id="CreateJSP"></a>Create the Java JSP application
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-java-sdk-samples.md
> >- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. >
->[!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+>[!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
> The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples) GitHub repository. This article provides:
cosmos-db Sql Api Nodejs Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-nodejs-application.md
that you have the following resources:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
* [Node.js][Node.js] version 6.10 or higher. * [Express generator](https://www.expressjs.com/starter/generator.html) (you can install Express via `npm install express-generator -g`)
that you have the following resources:
## <a name="create-account"></a>Create an Azure Cosmos DB account Let's start by creating an Azure Cosmos DB account. If you already have an account or if you are using the Azure Cosmos DB Emulator for this tutorial, you can skip to [Step 2: Create a new Node.js application](#create-new-app). ## <a name="create-new-app"></a>Create a new Node.js application Now let's learn to create a basic Hello World Node.js project using the Express framework.
cosmos-db Sql Api Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-nodejs-get-started.md
Make sure you have the following resources:
* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/).
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
* [Node.js](https://nodejs.org/) v6.0.0 or higher.
Make sure you have the following resources:
Let's create an Azure Cosmos DB account. If you already have an account you want to use, you can skip ahead to [Set up your Node.js application](#SetupNode). If you are using the Azure Cosmos DB Emulator, follow the steps at [Azure Cosmos DB Emulator](local-emulator.md) to set up the emulator and skip ahead to [Set up your Node.js application](#SetupNode). ## <a id="SetupNode"></a>Set up your Node.js application
cosmos-db Sql Api Nodejs Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-nodejs-samples.md
Sample solutions that perform CRUD operations and other common operations on Azu
- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. You also need the [JavaScript SDK](sql-api-sdk-node.md).
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-async-java.md
The SQL API Async Java SDK differs from the SQL API Java SDK by providing asynch
[!INCLUDE [Release notes](~/azure-cosmosdb-java-v2/changelog/README.md)] ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Dotnet Changefeed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-changefeed.md
Microsoft will provide notification at least **12 months** in advance of retirin
## FAQ ## See also
cosmos-db Sql Api Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet-standard.md
[!INCLUDE[Release notes](~/samples-cosmosdb-dotnet-v3/changelog.md)] ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-dotnet.md
The following sub versions of .NET SDKs are available under the 2.x.x version:
## FAQ ## See also
cosmos-db Sql Api Sdk Java Spark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spark.md
You can use the connector with [Azure Databricks](https://azure.microsoft.com/se
* Improves connection management and connection pooling to reduce the number of metadata calls. ## FAQ ## Next steps
cosmos-db Sql Api Sdk Java Spring V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spring-v2.md
You can use Spring Data Azure Cosmos DB in your [Azure Spring Cloud](https://azu
* Bug fix and defect mitigation. ## FAQ ## Next steps Learn more about [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/).
cosmos-db Sql Api Sdk Java Spring V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spring-v3.md
You can use Spring Data Azure Cosmos DB in your [Azure Spring Cloud](https://azu
## FAQ ## Next steps
cosmos-db Sql Api Sdk Java V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-v4.md
The Azure Cosmos DB Java SDK v4 for Core (SQL) combines an Async API and a Sync
[!INCLUDE[Release notes](~/azure-sdk-for-java-cosmos-db/sdk/cosmos/azure-cosmos/CHANGELOG.md)] ## FAQ ## Next steps To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java.md
Microsoft will provide notification at least **12 months** in advance of retirin
| 0.9.0-prelease |December 10, 2014 |February 29, 2016 | ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-node.md
Microsoft provides notification at least **12 months** in advance of retiring an
| [1.0.0](#1.0.0) |April 08, 2015 |August 30, 2020 | ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-python.md
Microsoft provides notification at least **12 months** in advance of retiring an
## FAQ ## Next steps
cosmos-db Sql Api Spring Data Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-spring-data-sdk-samples.md
> >- You can [activate Visual Studio subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio): Your Visual Studio subscription gives you credits every month that you can use for paid Azure services. >
->[!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+>[!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
> The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-spring-data-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples) GitHub repository. This article provides:
cosmos-db Table Sdk Dotnet Standard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-dotnet-standard.md
This cross-platform .NET Standard library [Microsoft.Azure.Cosmos.Table](https:/
## FAQ ## See also To learn more about the Azure Cosmos DB Table API, see [Introduction to Azure Cosmos DB Table API](table-introduction.md).
cosmos-db Table Sdk Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-dotnet.md
when attempting to use the Microsoft.Azure.CosmosDB.Table NuGet package, you hav
## FAQ ## See also
cosmos-db Table Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-java.md
New features and functionality and optimizations are only added to the current S
| [1.0.0](#1.0.0) |November 15, 2017 | | ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Table Sdk Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-nodejs.md
New features and functionality and optimizations are only added to the current S
| [1.0.0](#1.0.0) |November 15, 2017 | | ## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Table Sdk Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-sdk-python.md
New features and functionality and optimizations are only added to the current S
## FAQ ## See also To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Table Storage How To Use C Plus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-c-plus.md
This guide shows you common scenarios by using the Azure Table storage service o
### Create an Azure service account ### Create an Azure storage account ### Create an Azure Cosmos DB Table API account ## Create a C++ application
cosmos-db Table Storage How To Use Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-java.md
This article shows you how to create tables, store your data, and perform CRUD o
## Create an Azure service account **Create an Azure storage account** **Create an Azure Cosmos DB account** ## Create a Java application
cosmos-db Table Storage How To Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-nodejs.md
This article shows you how to create tables, store your data, and perform CRUD o
## Create an Azure service account **Create an Azure storage account** **Create an Azure Cosmos DB Table API account** ## Configure your application to access Azure Storage or the Azure Cosmos DB Table API
cosmos-db Table Storage How To Use Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-php.md
This article shows you how to create tables, store your data, and perform CRUD o
## Create an Azure service account **Create an Azure storage account** **Create an Azure Cosmos DB Table API account** ## Create a PHP application
cosmos-db Table Storage How To Use Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-python.md
You need the following to complete this sample successfully:
## Create an Azure service account **Create an Azure storage account** **Create an Azure Cosmos DB Table API account** ## Install the Azure Cosmos DB Table SDK for Python
cosmos-db Table Storage How To Use Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-how-to-use-ruby.md
This article shows you how to create tables, store your data, and perform CRUD o
## Create an Azure service account **Create an Azure storage account** **Create an Azure Cosmos DB account** ## Add access to Azure storage or Azure Cosmos DB
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk.md
Cosmos DB SDK on any IO failure will attempt to retry the failed operation if re
2. Writes (Create, Upsert, Replace, Delete) are "not" idempotent and hence SDK cannot always blindly retry the failed write operations. It is required that user's application logic to handle the failure and retry. 3. [Trouble shooting sdk availability](troubleshoot-sdk-availability.md) explains retries for multi-region Cosmos DB accounts.
+### Retry design
+
+The application should be designed to retry on any exception unless it is a known issue where retrying will not help. For example, the application should retry on 408 request timeouts: the timeout is possibly transient, so a retry may succeed. The application should not retry on 400s, which typically indicate a problem with the request that must first be resolved; retrying will only produce the same failure. The table below shows known failures and which ones are safe to retry.
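The classification above can be sketched as a small helper that sorts status codes into retryable and non-retryable buckets, following the table in this article (the class and method names here are illustrative, not part of any Cosmos DB SDK):

```java
import java.util.Set;

public class RetryClassifier {
    // Status codes this article's table marks as safe to retry.
    private static final Set<Integer> RETRYABLE =
            Set.of(408, 410, 429, 449, 500, 503);

    public static boolean isRetryable(int statusCode) {
        return RETRYABLE.contains(statusCode);
    }

    public static void main(String[] args) {
        System.out.println(isRetryable(408)); // true: transient timeout, retry
        System.out.println(isRetryable(400)); // false: fix the request first
    }
}
```

A real policy would also cap the number of attempts and back off between retries, since blindly looping on a 429 only adds to the request pressure that caused it.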
+ ## Common error status codes <a id="error-codes"></a>
-| Status Code | Description |
-|-|-|
-| 400 | Bad request (Depends on the error message)|
-| 401 | [Not authorized](troubleshoot-unauthorized.md) |
-| 403 | [Forbidden](troubleshoot-forbidden.md) |
-| 404 | [Resource is not found](troubleshoot-not-found.md) |
-| 408 | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
-| 409 | Conflict failure is when the ID provided for a resource on a write operation has been taken by an existing resource. Use another ID for the resource to resolve this issue as ID must be unique within all documents with the same partition key value. |
-| 410 | Gone exceptions (Transient failure that should not violate SLA) |
-| 412 | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an optimistic concurrency error. Retry the request after reading the latest version of the resource and updating the eTag on the request.
-| 413 | [Request Entity Too Large](concepts-limits.md#per-item-limits) |
-| 429 | [Too many requests](troubleshoot-request-rate-too-large.md) |
-| 449 | Transient error that only occurs on write operations, and is safe to retry |
-| 500 | The operation failed due to an unexpected service error. Contact support. See Filing an [Azure support issue](https://aka.ms/azure-support). |
-| 503 | [Service unavailable](troubleshoot-service-unavailable.md) |
+| Status Code | Retryable | Description |
+|-|-|-|
+| 400 | No | Bad request (for example, invalid JSON, incorrect headers, or an incorrect partition key in the header)|
+| 401 | No | [Not authorized](troubleshoot-unauthorized.md) |
+| 403 | No | [Forbidden](troubleshoot-forbidden.md) |
+| 404 | No | [Resource is not found](troubleshoot-not-found.md) |
+| 408 | Yes | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
+| 409 | No | Conflict failure is when the ID provided for a resource on a write operation has been taken by an existing resource. Use another ID for the resource to resolve this issue as ID must be unique within all documents with the same partition key value. |
+| 410 | Yes | Gone exceptions (transient failure that should not violate SLA) |
+| 412 | No | Precondition failure is where the operation specified an eTag that is different from the version available at the server. It's an optimistic concurrency error. Retry the request after reading the latest version of the resource and updating the eTag on the request. |
+| 413 | No | [Request Entity Too Large](concepts-limits.md#per-item-limits) |
+| 429 | Yes | It is safe to retry on a 429. To reduce how often these errors occur, see [Too many requests](troubleshoot-request-rate-too-large.md).|
+| 449 | Yes | Transient error that only occurs on write operations, and is safe to retry. This can point to a design issue where too many concurrent operations are trying to update the same object in Cosmos DB. |
+| 500 | Yes | The operation failed due to an unexpected service error. Contact support by filing an [Azure support issue](https://aka.ms/azure-support). |
+| 503 | Yes | [Service unavailable](troubleshoot-service-unavailable.md) |
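The 412 row above describes the standard optimistic concurrency loop: re-read the resource, pick up its latest eTag, and re-issue the write. A minimal, SDK-free sketch of that pattern follows (the in-memory store and its methods are hypothetical stand-ins for illustration, not Cosmos DB APIs):

```java
public class EtagRetryDemo {
    // Hypothetical in-memory stand-in for a versioned document store.
    static String storedEtag = "v1";
    static String storedBody = "original";

    // Replace the body only if the caller's eTag is still current; a mismatch
    // simulates the HTTP 412 precondition failure described above.
    static synchronized boolean replaceIfMatch(String etag, String newBody) {
        if (!storedEtag.equals(etag)) {
            return false; // simulated 412
        }
        storedBody = newBody;
        storedEtag = "v" + (Integer.parseInt(storedEtag.substring(1)) + 1);
        return true;
    }

    public static void main(String[] args) {
        // Optimistic concurrency loop: read the latest eTag, attempt the
        // write, and on a precondition failure re-read and try again.
        boolean written = false;
        while (!written) {
            String latestEtag = storedEtag; // "read" the current version
            written = replaceIfMatch(latestEtag, "updated");
        }
        System.out.println(storedBody); // prints "updated"
    }
}
```

The key point is that the retry is not a blind resend: each attempt reads the latest version first, so a write that lost the race is rebuilt against current state rather than replayed against a stale eTag.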
### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
If you encounter the following error: `Unable to load DLL 'Microsoft.Azure.Cosmo
[Common issues and workarounds]: #common-issues-workarounds [Enable client SDK logging]: #logging [Azure SNAT (PAT) port exhaustion]: #snat
-[Production check list]: #production-check-list
+[Production check list]: #production-check-list
cosmos-db Tutorial Develop Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-develop-table-dotnet.md
You need the following to complete this sample successfully:
## Create an Azure Cosmos DB Table API account ## Create a .NET console project
cosmos-db Tutorial Global Distribution Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-global-distribution-mongodb.md
This article covers the following tasks:
> * Configure global distribution using the Azure portal > * Configure global distribution using the [Azure Cosmos DB's API for MongoDB](mongodb-introduction.md) ## Verifying your regional setup A simple way to check your global configuration with Cosmos DB's API for MongoDB is to run the *isMaster()* command from the Mongo Shell.
cosmos-db Tutorial Global Distribution Sql Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-global-distribution-sql-api.md
This article covers the following tasks:
> * Configure global distribution using the [SQL APIs](./introduction.md) <a id="portal"></a> ## <a id="preferred-locations"></a> Connecting to a preferred region using the SQL API
cosmos-db Tutorial Global Distribution Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-global-distribution-table.md
This article covers the following tasks:
> * Configure global distribution using the Azure portal > * Configure global distribution using the [Table API](table-introduction.md) ## Connecting to a preferred region using the Table API
cosmos-db Tutorial Sql Api Dotnet Bulk Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/tutorial-sql-api-dotnet-bulk-import.md
Before following the instructions in this article, make sure that you have the f
* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- [!INCLUDE [cosmos-db-emulator-docdb-api](../../includes/cosmos-db-emulator-docdb-api.md)]
+ [!INCLUDE [cosmos-db-emulator-docdb-api](includes/cosmos-db-emulator-docdb-api.md)]
* [NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/cancel-azure-subscription.md
After you cancel, billing is stopped immediately. However, it can take up to 10
After you cancel, your services are disabled. That means your virtual machines are de-allocated, temporary IP addresses are freed, and storage is read-only.
-After your subscription is canceled, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it or you change your mind. We don't charge you for keeping the data. To learn more, see [Microsoft Trust Center - How we manage your data](https://go.microsoft.com/fwLink/p/?LinkID=822930&clcid=0x409).
+After your subscription is canceled and it no longer has any active resources, Microsoft waits 30 to 90 days before permanently deleting your data, in case you need to access it or change your mind. We don't charge you for keeping the data. To learn more, see [Microsoft Trust Center - How we manage your data](https://go.microsoft.com/fwLink/p/?LinkID=822930&clcid=0x409).
-## Delete free trial subscription
+## Delete free account or pay-as-you-go subscription
-If you have a free trial subscription, you don't have to wait 30 days for the subscription to automatically delete. You can delete your subscription *three days* after you cancel it. The **Delete subscription** option isn't available until three days after you cancel your subscription.
+If you have a free account or pay-as-you-go subscription, you don't have to wait 30 days for the subscription to automatically delete. The **Delete subscription** option becomes available three days after you cancel a subscription.
1. Wait three days after the date you canceled the subscription. 1. Select your subscription on the [Subscriptions](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) page in the Azure portal.
If you have a free trial subscription, you don't have to wait 30 days for the su
## Delete other subscriptions
-The only subscription type that you can manually delete is a free trial subscription. All other subscription types, including pay-as-you-go subscriptions, are deleted only through the [subscription cancellation](#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free trial subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.
+The only subscription types that you can manually delete are free account and pay-as-you-go subscriptions. All other subscription types are deleted only through the [subscription cancellation](#cancel-subscription-in-the-azure-portal) process. In other words, you can't delete a subscription directly unless it's a free account or pay-as-you-go subscription. However, after you cancel a subscription, you can create an [Azure support request](https://go.microsoft.com/fwlink/?linkid=2083458) to ask to have the subscription deleted immediately.
## Reactivate a subscription
See the [Renewal and Cancellation](/visualstudio/subscriptions/faq/admin/renewal
## Next steps -- If needed, you can reactivate a pay-as-you-go subscription in the [Azure portal](subscription-disabled.md).
+- If needed, you can reactivate a pay-as-you-go subscription in the [Azure portal](subscription-disabled.md).
cost-management-billing Change Azure Account Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/change-azure-account-profile.md
Last updated 04/08/2021 -+ # Change contact information for an Azure billing account
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-database-for-postgresql.md
Previously updated : 02/25/2021 Last updated : 06/16/2021 # Copy and transform data in Azure Database for PostgreSQL by using Azure Data Factory
To copy data to Azure Database for PostgreSQL, the following properties are supp
|: |: |: | | type | The type property of the copy activity sink must be set to **AzurePostgreSQLSink**. | Yes | | preCopyScript | Specify a SQL query for the copy activity to execute before you write data into Azure Database for PostgreSQL in each run. You can use this property to clean up the preloaded data. | No |
-| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (preview, which is more performant), **BulkInsert** (default). | No |
+| writeMethod | The method used to write data into Azure Database for PostgreSQL.<br>Allowed values are: **CopyCommand** (default, which is more performant), **BulkInsert**. | No |
| writeBatchSize | The number of rows loaded into Azure Database for PostgreSQL per batch.<br>Allowed value is an integer that represents the number of rows. | No (default is 1,000,000) | | writeBatchTimeout | Wait time for the batch insert operation to complete before it times out.<br>Allowed values are Timespan strings. An example is 00:30:00 (30 minutes). | No (default is 00:30:00) |
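Taken together, the sink properties above might appear in a copy activity definition like the following sketch (the table name in `preCopyScript` and the values are illustrative):

```json
"sink": {
    "type": "AzurePostgreSQLSink",
    "preCopyScript": "TRUNCATE TABLE staging_table",
    "writeMethod": "CopyCommand",
    "writeBatchSize": 1000000,
    "writeBatchTimeout": "00:30:00"
}
```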
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-dynamics-crm-office-365.md
Refer to the following table of supported authentication types and configuration
|: |: |: | | Dataverse <br/><br/> Dynamics 365 online <br/><br/> Dynamics CRM online | Azure Active Directory (Azure AD) service principal <br/><br/> Office 365 | [Dynamics online and Azure AD service-principal or Office 365 authentication](#dynamics-365-and-dynamics-crm-online) | | Dynamics 365 on-premises with internet-facing deployment (IFD) <br/><br/> Dynamics CRM 2016 on-premises with IFD <br/><br/> Dynamics CRM 2015 on-premises with IFD | IFD | [Dynamics on-premises with IFD and IFD authentication](#dynamics-365-and-dynamics-crm-on-premises-with-ifd) |+
+>[!NOTE]
+>With the [deprecation of regional Discovery Service](/power-platform/important-changes-coming#regional-discovery-service-is-deprecated), Azure Data Factory has upgraded to leverage [global Discovery Service](/powerapps/developer/data-platform/webapi/discover-url-organization-web-api#global-discovery-service) while using Office 365 Authentication.
 > [!IMPORTANT] >If your tenant and user are configured in Azure Active Directory for [conditional access](../active-directory/conditional-access/overview.md) and/or Multi-Factor Authentication is required, you won't be able to use the Office 365 authentication type. For those situations, you must use Azure Active Directory (Azure AD) service principal authentication.
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-connector-format.md
Previously updated : 05/24/2021 Last updated : 06/10/2021
To overwrite the default behavior and bring in additional fields, ADF provides o
![Screenshot that shows the second option to customize the source schema.](./media/data-flow-troubleshoot-connector-format/customize-schema-option-2.png)
+### Support map type in the source
+
+#### Symptoms
+In ADF data flows, the map data type isn't directly supported in the Cosmos DB or JSON source, so you can't get the map data type under "Import projection".
+
+#### Cause
+Cosmos DB and JSON are schema-free connectors, so the related Spark connector uses sample data to infer the schema, and that schema is then used as the Cosmos DB/JSON source schema. When inferring the schema, the Cosmos DB/JSON Spark connector can only infer object data as a struct rather than a map data type, which is why the map type can't be directly supported.
+
+#### Recommendation
+To solve this issue, refer to the following examples and steps to manually update the script (DSL) of the Cosmos DB/JSON source to get map data type support.
+
+**Examples**:
+
+
+**Step-1**: Open the script of the data flow activity.
+
+
+**Step-2**: Update the DSL to get the map type support by referring to the examples above.
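As an illustrative sketch of such an update (the field names `id` and `tags` and the stream name are hypothetical), a Cosmos DB/JSON source projection declaring a map type in the script might look like:

```
source(output(
        id as string,
        tags as map[string,string]
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> CosmosSource
```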
++
+The map type support:
+
+|Type |Is the map type supported? |Comments|
+|----|----|----|
+|Excel, CSV |No |Both are tabular data sources with primitive types, so there's no need to support the map type. |
+|Orc, Avro |Yes |None.|
+|JSON|Yes |The map type can't be directly supported. Follow the recommendation in this section to update the script (DSL) under the source projection.|
+|Cosmos DB |Yes |The map type can't be directly supported. Follow the recommendation in this section to update the script (DSL) under the source projection.|
+|Parquet |Yes |Complex data types aren't currently supported on the Parquet dataset, so you need to use "Import projection" under the data flow Parquet source to get the map type.|
+|XML |No |None.|
+ ### Consume JSON files generated by copy activities #### Symptoms
data-share Share Your Data Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-share/share-your-data-arm.md
Learn how to set up a new Azure Data Share from an Azure storage account by usin
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-data-share-share-storage-account%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.datashare%2Fdata-share-share-storage-account%2Fazuredeploy.json)
## Prerequisites
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-data-share-share-storage-account/).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-share-share-storage-account/).
The following resources are defined in the template:
It's because the deployment is trying to create the dataset before the Azure rol
1. Select the following image to sign in to Azure and open the template.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-data-share-share-storage-account%2Fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.datashare%2Fdata-share-share-storage-account%2Fazuredeploy.json)
1. Select or enter the following values: * **Subscription**: select an Azure subscription used to create the data share and the other resources.
dev-spaces About https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dev-spaces/about.md
- Title: "What is Azure Dev Spaces?"- Previously updated : 05/26/2021-
-description: "Learn how Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams in Azure Kubernetes Service clusters"
-keywords: "Docker, Kubernetes, Azure, AKS, Azure Kubernetes Service, containers, kubectl, k8s"
-
-#Customer intent: As a developer, I want understand what Azure Dev Spaces is so that I can use it.
--
-# What is Azure Dev Spaces?
-
-> [!IMPORTANT]
-> Azure Dev Spaces is retired as of May 15, 2021. Customers should use [Bridge to Kubernetes](/visualstudio/containers/overview-bridge-to-kubernetes?view=vs-2019).
-
-Azure Dev Spaces provides a rapid, iterative Kubernetes development experience for teams in Azure Kubernetes Service (AKS) clusters. Azure Dev Spaces also allows you to debug and test all the components of your application in AKS with minimal development machine setup, without replicating or mocking up dependencies.
-
-## How Azure Dev Spaces simplifies Kubernetes development
-
-Azure Dev Spaces helps teams to focus on the development and rapid iteration of their microservice application by allowing teams to work directly with their entire microservices architecture or application running in AKS. Azure Dev Spaces also provides a way to independently update portions of your microservices architecture in isolation without affecting the rest of the AKS cluster or other developers. Azure Dev Spaces is for development and testing in lower-level development and testing environments and is not intended to run on production AKS clusters.
-
-Since teams can work with the entire application and collaborate directly in AKS, Azure Dev Spaces:
-
-* Minimizes local machine setup
-* Decreases setup time for new developers on the team
-* Increases a team's velocity through faster iteration
-* Reduces the number of redundant development and integration environments since team members can share a cluster
-* Removes the need to replicate or mock up dependencies
-* Improves collaboration across development teams as well as the teams they work with, such as DevOps teams
-
-Azure Dev Spaces provides tooling to generate Docker and Kubernetes assets for your projects. This tooling allows you to easily add new and existing applications to both a dev space and other AKS clusters.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Bridge to Kubernetes](/visualstudio/containers/overview-bridge-to-kubernetes?view=vs-2019)
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-devtest-user.md
The following table illustrates the actions that can be performed by users in ea
## Add an owner or user at the lab level Owners and users can be added at the lab level via the Azure portal.
-A user can be an external user with a valid [Microsoft account (MSA)](devtest-lab-faq.md#what-is-a-microsoft-account).
+A user can be an external user with a valid [Microsoft account (MSA)](/azure/devtest-labs/devtest-lab-faq#what-is-a-microsoft-account).
The following steps guide you through the process of adding an owner or user to a lab in Azure DevTest Labs: 1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Formulas provide a dynamic way to create VMs from the desired configuration/sett
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)] ## Related blog posts
-* [Custom images or formulas?](./devtest-lab-faq.md#blog-post)
+* [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
## Next steps-- [DevTest Labs FAQ](devtest-lab-faq.md)
+- [DevTest Labs FAQ](devtest-lab-faq.yml)
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
New-AzResourceGroupDeployment -ResourceGroupName $lab.ResourceGroupName -Name Cr
## Related blog posts -- [Custom images or formulas?](./devtest-lab-faq.md#blog-post)
+- [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs) ## Next steps
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
You can create a custom image from a provisioned VM, and afterwards use that cus
## Related blog posts -- [Custom images or formulas?](./devtest-lab-faq.md#blog-post)
+- [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs) ## Next steps
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-template.md
After a few minutes, the custom image is created and is stored inside the lab's
## Related blog posts -- [Custom images or formulas?](./devtest-lab-faq.md#blog-post)
+- [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs) ## Next steps
devtest-labs Devtest Lab Developer Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-developer-lab.md
In this article, you learn about various Azure DevTest Labs features that can be
| | | | [Configure Azure Marketplace images](devtest-lab-configure-marketplace-images.md) |Learn how you can allow Azure Marketplace images, making available for selection only the images you want for the developers.| | [Create a custom image](devtest-lab-create-template.md) |Create a custom image by pre-installing the software you need so that developers can quickly create a VM using the custom image.|
- | [Learn about image factory](./devtest-lab-faq.md#blog-post) |Watch a video that describes how to set up and use an image factory.|
+ | [Learn about image factory](/azure/devtest-labs/devtest-lab-faq#blog-post) |Watch a video that describes how to set up and use an image factory.|
3. **Create reusable templates for developer machines**
In this article, you learn about various Azure DevTest Labs features that can be
| Task | What you learn | | | | | [Define lab policies](devtest-lab-set-lab-policy.md) |Control costs by setting policies in the lab. |
- | [Delete all the lab VMs using a PowerShell script](devtest-lab-faq.md#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when development is complete.|
+ | [Delete all the lab VMs using a PowerShell script](/azure/devtest-labs/devtest-lab-faq#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when development is complete.|
1. **Add a virtual network to a VM**
In this article, you learn about various Azure DevTest Labs features that can be
6. **Share the lab with each developer**
- Labs can be directly accessed using a link that you share with your developers. They don't even have to have an Azure account, as long as they have a [Microsoft account](devtest-lab-faq.md#what-is-a-microsoft-account). Developers cannot see VMs created by other developers.
+ Labs can be directly accessed using a link that you share with your developers. They don't even have to have an Azure account, as long as they have a [Microsoft account](/azure/devtest-labs/devtest-lab-faq#what-is-a-microsoft-account). Developers cannot see VMs created by other developers.
Learn more by clicking on the links in the following table:
In this article, you learn about various Azure DevTest Labs features that can be
| | | | [Add a developer to a lab in Azure DevTest Labs](devtest-lab-add-devtest-user.md) |Use the Azure portal to add developers to your lab.| | [Add developers to the lab using a PowerShell script](devtest-lab-add-devtest-user.md#add-an-external-user-to-a-lab-using-powershell) |Use PowerShell to automate adding developers to your lab. |
- | [Get a link to the lab](devtest-lab-faq.md#how-do-i-share-a-direct-link-to-my-lab) |Learn how developers can directly access a lab via a hyperlink.|
+ | [Get a link to the lab](/azure/devtest-labs/devtest-lab-faq#how-do-i-share-a-direct-link-to-my-lab) |Learn how developers can directly access a lab via a hyperlink.|
7. **Automate lab creation for more teams**
In this article, you learn about various Azure DevTest Labs features that can be
| Task | What you learn | | | |
- | [Create a lab using a Resource Manager template](devtest-lab-faq.md#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
+ | [Create a lab using a Resource Manager template](/azure/devtest-labs/devtest-lab-faq#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]
devtest-labs Devtest Lab Enable Licensed Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-enable-licensed-images.md
You can enable programmatic deployment for a licensed image by following these s
## Related blog posts -- [Custom images or formulas?](./devtest-lab-faq.md#blog-post)
+- [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#blog-post)
- [Copying Custom Images between Azure DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/How-To-Move-CustomImages-VHD-Between-AzureDevTestLabs#copying-custom-images-between-azure-devtest-labs) ## Next steps
devtest-labs Devtest Lab Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-faq.md
- Title: Azure DevTest Labs FAQ | Microsoft Docs
-description: This article provides answers to some of the frequently asked questions (FAQ) about Azure DevTest Labs.
- Previously updated : 07/17/2020 ---
-# Azure DevTest Labs FAQ
-Get answers to some of the most common questions about Azure DevTest Labs.
-
-## Blog post
-Our DevTest Labs Team blog has been retired as of 20 March 2019.
-
-### Where can I track feature updates from now on?
-From now on, we'll be posting feature updates and informative blog posts on the Azure blog and Azure updates. These blog posts will also link to our documentation wherever required.
-
-Subscribe to the [DevTest Labs Azure Blog](https://azure.microsoft.com/blog/tag/azure-devtest-labs/) and [DevTest Labs Azure updates](https://azure.microsoft.com/updates/?product=devtest-lab) to stay informed about new features in DevTest Labs.
-
-### What happens to the existing blog posts?
-We're currently working on migrating existing blog posts (excluding outage updates) to our [DevTest Labs documentation](devtest-lab-overview.md). When the MSDN blog is deprecated, it will be redirected to the documentation overview for DevTest Labs. Once redirected, you can search for the article you're looking for in the 'Filter by' title. We haven't migrated all posts yet, but should be done by the end of this month.
--
-### Where do I see outage updates?
-We'll be posting outage updates using our Twitter handle from now onwards. Follow us on Twitter to get latest updates on outages and known bugs.
-
-### Twitter
-Our Twitter handle: [@azlabservices](https://twitter.com/azlabservices)
-
-## General
-### What if my question isn't answered here?
-If your question isn't listed here, let us know, so we can help you find an answer.
--- Post a question at the end of this FAQ.-- To reach a wider audience, post a question on the [Microsoft Q&A question page for Azure DevTest Labs](/answers/topics/azure-devtestlabs.html). Engage with the Azure DevTest Labs team and other members of the community.-- For feature requests, submit your requests and ideas to [Azure DevTest Labs User Voice](https://feedback.azure.com/forums/320373-azure-devtest-labs).-
-### What is a Microsoft account?
-A Microsoft account is an account you use for almost everything you do with Microsoft devices and services. It's an email address and password that you use to sign into Skype, Outlook.com, OneDrive, Windows phone, Azure, and Xbox Live. A single account means that your files, photos, contacts, and settings can follow you on any device.
-
-> [!NOTE]
-> A Microsoft account used to be called a Windows Live ID.
-
-### Why should I use Azure DevTest Labs?
-Azure DevTest Labs can save your team time and money. Developers can create their own environments by using several different bases. They also can use artifacts to quickly deploy and configure applications. By using custom images and formulas, you can save virtual machines (VMs) as templates, and easily reproduce them across the team. DevTest Labs also offers several configurable policies that lab administrators can use to reduce waste and manage a team's environments. These policies include auto-shutdown, cost threshold, maximum VMs per user, and maximum VM size. For a more in-depth explanation of DevTest Labs, see the [overview](devtest-lab-overview.md) or the [introductory video](https://channel9.msdn.com/Blogs/Azure/what-is-azure-devtest-labs).
-
-### What does "worry-free self-service" mean?
-Worry-free self-service means that developers and testers create their own environments as needed. Administrators have the security of knowing that DevTest Labs can help set the appropriate access, minimize waste and control costs. Administrators can specify which VM sizes are allowed, the maximum number of VMs, and when VMs are started and shut down. DevTest Labs also makes it easy to monitor costs and set alerts, to help you stay aware of how lab resources are being used.
-
-### How can I use DevTest Labs?
-DevTest Labs is useful anytime you require dev or test environments, and want to reproduce them quickly, or manage them by using cost-saving policies.
-Here are some scenarios that our customers use DevTest Labs for:
--- Manage dev and test environments in one place. Use policies to reduce costs and create custom images to share builds across the team.-- Develop an application by using custom images to save the disk state throughout the development stages.-- Track cost in relation to progress.-- Create mass test environments for quality assurance testing.-- Use artifacts and formulas to easily configure and reproduce an application in various environments.-- Distribute VMs for hackathons (collaborative dev or test work), and then easily deprovision them when the event ends.-
-### How am I billed for DevTest Labs?
-DevTest Labs is a free service. Creating labs and configuring policies, templates, and artifacts in DevTest Labs is free. You pay only for the Azure resources used in your labs, such as VMs, storage accounts, and virtual networks. For more information about the cost of lab resources, see [Azure DevTest Labs pricing](https://azure.microsoft.com/pricing/details/devtest-lab/).
-
-## Security
-
-### What are the different security levels in DevTest Labs?
-Security access is determined by Azure role-based access control (Azure RBAC). To learn how access works, it helps to learn the differences between a permission, a role, and a scope, as defined by Azure RBAC.
--- **Permission**: A permission is a defined access to a specific action. For example, a permission can be read access to all VMs.-- **Role**: A role is a set of permissions that can be grouped and assigned to a user. For example, a user with a subscription owner role has access to all resources within a subscription.-- **Scope**: A scope is a level within the hierarchy of an Azure resource. For example, a scope can be a resource group, a single lab, or the entire subscription.-
-Within the scope of DevTest Labs, there are two types of roles that define user permissions:
--- **Lab owner**: A lab owner has access to all resources in the lab. A lab owner can modify policies, read and write to any VMs, change the virtual network, and so on.-- **Lab user**: A lab user can view all lab resources, such as VMs, policies, and virtual networks. However, a lab user can't modify policies or any VMs that were created by other users.-
-You also can create custom roles in DevTest Labs. To learn how to create custom roles in DevTest Labs, see [Grant user permissions to specific lab policies](devtest-lab-grant-user-permissions-to-specific-lab-policies.md).
-
-Because scopes are hierarchical, when a user has permissions at a certain scope, the user is automatically granted those permissions at every lower-level scope in the scope. For instance, if a user is assigned the role of subscription owner, the user has access to all resources in a subscription. These resources include VMs, virtual networks, and labs. A subscription owner automatically inherits the role of lab owner. However, the opposite is not true. A lab owner has access to a lab, which is a lower scope than the subscription level. So, a lab owner can't see VMs, virtual networks, or any other resources that are outside the lab.
-
-### How do I define Azure role-based access control for my DevTest Labs environments to ensure that IT can govern while developers/test can do their work?
-There is a broad pattern, however the detail depends on your organization.
-
-Central IT should own only what is necessary and enable the project and application teams to have the needed level of control. Typically, it means that central IT owns the subscription and handles core IT functions such as networking configurations. The set of **owners** for a subscription should be small. These owners can nominate additional owners when there's a need, or apply subscription-level policies, for example "No Public IP".
-
-There may be a subset of users that require access across a subscription, such as Tier1 or Tier 2 support. In this case, we recommend that you give these users the **contributor** access so that they can manage the resources, but not provide user access or adjust policies.
-
-The DevTest Labs resource should be owned by owners who are close to the project/application team. It's because they understand their requirements for machines, and required software. In most organizations, the owner of this DevTest Labs resource is commonly the project/development lead. This owner can manage users and policies within the lab environment and can manage all VMs in the DevTest Labs environment.
-
-The project/application team members should be added to the **DevTest Labs User** role. These users can create virtual machines (in line with the lab and subscription-level policies). They can also manage their own virtual machines. They can't manage virtual machines that belong to other users.
-
-For more information, see [Azure enterprise scaffold ΓÇô prescriptive subscription governance documentation](/azure/architecture/cloud-adoption/appendix/azure-scaffold).
--
-### How do I create a role to allow users to do a specific task?
-For a comprehensive article about how to create custom roles and assign permissions to a role, see [Grant user permissions to specific lab policies](devtest-lab-grant-user-permissions-to-specific-lab-policies.md). Here's an example of a script that creates the role **DevTest Labs Advanced User**, which has permission to start and stop all VMs in the lab:
--
-```powershell
-$policyRoleDef = Get-AzRoleDefinition "DevTest Labs User"
-$policyRoleDef.Actions.Remove('Microsoft.DevTestLab/Environments/*')
-$policyRoleDef.Id = $null
-$policyRoleDef.Name = "DevTest Labs Advanced User"
-$policyRoleDef.IsCustom = $true
-$policyRoleDef.AssignableScopes.Clear()
-$policyRoleDef.AssignableScopes.Add("/subscriptions/<subscription Id>")
-$policyRoleDef.Actions.Add("Microsoft.DevTestLab/labs/virtualMachines/Start/action")
-$policyRoleDef.Actions.Add("Microsoft.DevTestLab/labs/virtualMachines/Stop/action")
-$policyRoleDef = New-AzRoleDefinition -Role $policyRoleDef
-```
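-
-After the custom role exists, you can assign it to users at the lab scope. The following is a minimal sketch using the Az PowerShell module; the sign-in name, subscription ID, resource group, and lab name are placeholders for your own values:
-
-```powershell
-# Assign the custom role at the scope of a single lab.
-# All bracketed values and the sign-in name are placeholders.
-New-AzRoleAssignment -SignInName "developer@contoso.com" `
-    -RoleDefinitionName "DevTest Labs Advanced User" `
-    -Scope "/subscriptions/<subscription Id>/resourceGroups/<lab resource group>/providers/Microsoft.DevTestLab/labs/<lab name>"
-```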
-
-### How can an organization ensure corporate security policies are in place?
-An organization can achieve this by taking the following actions:
-
-- Develop and publish a comprehensive security policy. The policy articulates the rules of acceptable use associated with using software and cloud assets. It also defines what clearly violates the policy.
-- Develop a custom image, custom artifacts, and a deployment process that allows for orchestration within the security realm defined with Active Directory. This approach enforces the corporate boundary and sets a common set of environmental controls. These are controls against the environment that a developer can consider as they develop and follow a secure development lifecycle as part of their overall process. The objective is to provide an environment that isn't overly restrictive and doesn't hinder development, but provides a reasonable set of controls. The group policies at the organization unit (OU) that contains lab virtual machines could be a subset of the total group policies that are found in production. Alternatively, they can be an additional set to properly mitigate any identified risks.
-
-### How can an organization ensure data integrity so that developers working remotely can't remove code or introduce malware or unapproved software?
-There are several layers of control to mitigate the threat from external consultants, contractors, or employees that are remoting in to collaborate in DevTest Labs.
-
-As stated previously, the first step is to draft and define an acceptable use policy that clearly outlines the consequences when someone violates the policy.
-
-The first layer of controls for remote access is to apply a remote access policy through a VPN connection that isn't directly connected to the lab.
-
-The second layer of controls is to apply a set of group policy objects that prevent copy and paste through remote desktop. A network policy could be implemented to not allow outbound services, such as FTP and RDP, out of the environment. User-defined routing could force all Azure network traffic back to on-premises, but the routing can't account for all URLs that might allow uploading of data unless controlled through a proxy to scan content and sessions. Public IPs could be restricted within the virtual network supporting DevTest Labs to not allow bridging of an external network resource.
-
-Ultimately, the same types of restrictions need to be applied across the organization, accounting for all possible methods of removable media or external URLs that could accept a post of content. Consult with your security professional to review and implement a security policy. For more recommendations, see [Microsoft Cyber Security](https://www.microsoft.com/security/default.aspx?&WT.srch=1&wt.mc_id=AID623240_SEM_sNYnsZDs).
-
-## Lab configuration
-
-### How do I create a lab from a Resource Manager template?
-We offer a [GitHub repository of lab Azure Resource Manager templates](https://azure.microsoft.com/resources/templates/dtl-create-lab) that you can deploy as-is or modify to create custom templates for your labs. Each template has a link to deploy the lab as-is in your own Azure subscription. Or, you can customize the template and [deploy it by using PowerShell or the Azure CLI](../azure-resource-manager/templates/deploy-powershell.md).
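-
-For example, after downloading a template, a PowerShell deployment is a short sketch like the following. The resource group name, template file, and parameter names are placeholders and depend on the template you choose:
-
-```powershell
-# Deploy a lab Resource Manager template to an existing resource group.
-# Assumes you've already signed in with Connect-AzAccount; all values
-# shown are placeholders.
-New-AzResourceGroupDeployment -ResourceGroupName "MyLabResourceGroup" `
-    -TemplateFile ".\azuredeploy.json" `
-    -TemplateParameterObject @{ newLabName = "MyLab" }
-```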
--
-### Can I have all virtual machines created in a common resource group instead of having each machine in its own resource group?
-Yes, as a lab owner, you can either let the lab handle resource group allocation for you or have all virtual machines created in a common resource group that you specify.
-
-Separate resource group scenario:
-
-- DevTest Labs creates a new resource group for every public/private IP virtual machine you spin up.
-- DevTest Labs creates a resource group for shared IP machines that are the same size.
-
-Common resource group scenario:
-
-- All virtual machines are spun up in the common resource group you specify. Learn more about [resource group allocation for the lab](./resource-group-control.md).
-
-### How do I maintain a naming convention across my DevTest Labs environment?
-You may want to extend current enterprise naming conventions to Azure operations and make them consistent across the DevTest Labs environment. When deploying DevTest Labs, we recommend that you have specific starting policies. You deploy these policies through a central script and JSON templates to enforce consistency. Naming policies can be implemented through Azure policies applied at the subscription level. For JSON samples for Azure Policy, see [Azure Policy samples](../governance/policy/samples/index.md).
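-
-As an illustration, the policy rule portion of an Azure Policy definition that enforces a resource name prefix might look like the following sketch. The `devtest-` prefix is a hypothetical example:
-
-```json
-{
-  "if": {
-    "not": {
-      "field": "name",
-      "like": "devtest-*"
-    }
-  },
-  "then": {
-    "effect": "deny"
-  }
-}
-```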
-
-### How many labs can I create under the same subscription?
-There isn't a specific limit on the number of labs that can be created per subscription. However, the amount of resources used per subscription is limited. You can read about the [limits and quotas for Azure subscriptions](../azure-resource-manager/management/azure-subscription-service-limits.md) and [how to increase these limits](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests).
--
-### How many VMs can I create per lab?
-There is no specific limit on the number of VMs that can be created per lab. However, the resources (VM cores, public IP addresses, and so on) that are used are limited per subscription. You can read about the [limits and quotas for Azure subscriptions](../azure-resource-manager/management/azure-subscription-service-limits.md) and [how to increase these limits](https://azure.microsoft.com/blog/azure-limits-quotas-increase-requests).
-
-### How do I determine the ratio of users per lab and the overall number of labs that are needed across an organization?
-We recommend that business units and development groups that are associated with the same development project are associated with the same lab. It allows the same types of policies, images, and shutdown schedules to be applied to both groups.
-
-You may also need to consider geographic boundaries. For example, developers in the northeast United States (US) may use a lab provisioned in East US 2. And, developers in Dallas, Texas, and Denver, Colorado may be directed to use a resource in South Central US. If there's a collaborative effort with an external third party, they could be assigned to a lab that isn't used by internal developers.
-
-You may also use a lab for a specific project within Azure DevOps Projects. Then, you apply security through a specified Azure Active Directory group, which allows access to both sets of resources. The virtual network assigned to the lab can be another boundary to consolidate users.
-
-### How can we prevent the deletion of resources within a lab?
-We recommend that you set proper permissions at the lab level so that only authorized users can delete resources or change lab policies. Developers should be placed within the **DevTest Labs Users** group. The lead developer or the infrastructure lead should be the **DevTest Labs Owner**. We recommend you have only two lab owners. This policy extends towards the code repository to avoid corruption. Lab users have rights to use resources but can't update lab policies. See the following article that lists the roles and rights that each built-in group has within a lab: [Add owners and users in Azure DevTest Labs](devtest-lab-add-devtest-user.md).
-
-### How do I share a direct link to my lab?
-
-1. In the [Azure portal](https://portal.azure.com), go to the lab.
-2. Copy the **lab URL** from your browser, and then share it with your lab users.
-
-> [!NOTE]
-> If a lab user is an external user who has a Microsoft account, but who is not a member of your organization's Active Directory instance, the user might see an error message when they try to access the shared link. If an external user sees an error message, ask the user to first select their name in the upper-right corner of the Azure portal. Then, in the Directory section of the menu, the user can select the directory where the lab exists.
-
-## Virtual machines
-
-### Why can't I see VMs on the Virtual Machines page that I see in DevTest Labs?
-When you create a VM in DevTest Labs, you're given permission to access that VM. You can view the VM both on the labs page and on the **Virtual Machines** page. Users assigned to the **DevTest Labs Owner** role can see all VMs that were created in the lab on the lab's **All Virtual Machines** page. However, users who have the **DevTest Labs User** role are not automatically granted read access to VM resources that other users have created. So, those VMs are not displayed on the **Virtual Machines** page.
--
-### How do I create multiple VMs from the same template at once?
-You have a few options for simultaneously creating multiple VMs from the same template:
-
-- You can use the [Azure DevOps Tasks extension](https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks).
-- You can [generate a Resource Manager template](devtest-lab-add-vm.md#save-azure-resource-manager-template) while you're creating a VM, and [deploy the Resource Manager template from Windows PowerShell](../azure-resource-manager/templates/deploy-powershell.md).
-- You can also specify more than one instance of a machine to be created during virtual machine creation. To learn more about creating multiple instances of virtual machines, see the doc on [creating a lab virtual machine](devtest-lab-add-vm.md).
-
-### How do I move my existing Azure VMs into my DevTest Labs lab?
-To copy your existing VMs to DevTest Labs:
-
-1. Copy the VHD file of your existing VM by using a [Windows PowerShell script](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/Scripts/CopyVirtualMachines/CopyAzVHDFromVMToLab.ps1).
-2. Create the [custom image](devtest-lab-create-template.md) inside your DevTest Labs lab.
-3. Create a VM in the lab from your custom image.
-
-### Can I attach multiple disks to my VMs?
-
-Yes, you can attach multiple disks to your VMs.
-
-### Are Gen 2 images supported by DevTest Labs?
-Yes. The DevTest Labs service supports [Gen 2 images](../virtual-machines/generation-2.md). However, if both Gen 1 and Gen 2 versions are available for an image, DevTest Labs shows only the Gen 1 version of the image when creating a VM. You see the Gen 2 image only if no Gen 1 version of it is available.
-
-### If I want to use a Windows OS image for my testing, do I have to purchase an MSDN subscription?
-To use Windows client OS images (Windows 7 or a later version) for your development or testing in Azure, take one of the following steps:
-
-- [Buy an MSDN subscription](https://www.visualstudio.com/products/how-to-buy-vs).
-- If you have an Enterprise Agreement, create an Azure subscription with the [Enterprise Dev/Test offer](https://azure.microsoft.com/offers/ms-azr-0148p).
-
-For more information about the Azure credits for each MSDN offering, see [Monthly Azure credit for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/).
--
-### How do I automate the process of deleting all the VMs in my lab?
-As a lab owner, you can delete VMs from your lab in the Azure portal. You also can delete all the VMs in your lab by using a PowerShell script. In the following example, under the **values to change** comment, modify the parameter values. You can retrieve the `subscriptionId`, `labResourceGroup`, and `labName` values from the lab pane in the Azure portal.
-
-```powershell
-# Delete all the VMs in a lab.
-
-# Values to change:
-$subscriptionId = "<Enter Azure subscription ID here>"
-$labResourceGroup = "<Enter lab's resource group here>"
-$labName = "<Enter lab name here>"
-
-# Sign in to your Azure account.
-Connect-AzAccount
-
-# Select the Azure subscription that has the lab. This step is optional
-# if you have only one subscription.
-Select-AzSubscription -SubscriptionId $subscriptionId
-
-# Get the lab that has the VMs that you want to delete.
-$lab = Get-AzResource -ResourceId ('/subscriptions/' + $subscriptionId + '/resourceGroups/' + $labResourceGroup + '/providers/Microsoft.DevTestLab/labs/' + $labName)
-
-# Get the VMs from that lab.
-$labVMs = Get-AzResource | Where-Object {
- $_.ResourceType -eq 'microsoft.devtestlab/labs/virtualmachines' -and
- $_.Name -like "$($lab.Name)/*"}
-
-# Delete the VMs.
-foreach($labVM in $labVMs)
-{
- Remove-AzResource -ResourceId $labVM.ResourceId -Force
-}
-```
-
-## Environments
-
-### How can I use Resource Manager templates in my DevTest Labs Environment?
-You deploy your Resource Manager templates into a DevTest Labs environment by following the steps in the [Environments feature in DevTest Labs](devtest-lab-test-env.md) article. Basically, you check your Resource Manager templates into a Git repository (either Azure Repos or GitHub), and add a [private repository for your templates](devtest-lab-test-env.md) to the lab. This scenario may not be useful if you're using DevTest Labs to host development machines, but may be useful if you're building a staging environment that is representative of production.
-
-It's also worth noting that the virtual machines per lab or per user limits only apply to machines natively created in the lab itself, and not to machines created by any environments (Resource Manager templates).
-
-## Custom Images
-
-### How can I set up an easily repeatable process to bring my custom organizational images into a DevTest Labs environment?
-See this [video on the Image Factory pattern](https://sec.ch9.ms/ch9/8e8a/9ea0b8d4-b803-4f23-bca4-4808d9368e8a/dtlimagefactory_mid.mp4). This is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.
-
-For detailed information on creating an image factory, see [Create a custom image factory in Azure DevTest Labs](image-factory-create.md).
-
-### What is the difference between a custom image and a formula?
-A custom image is a managed image. A formula is an image that you can configure with additional settings, and then save and reproduce. A custom image might be preferable if you want to quickly create several environments by using the same basic, immutable image. A formula might be better if you want to reproduce the configuration of your VM with the latest bits, as part of a virtual network or subnet, or as a VM of a specific size. For a more in-depth explanation, see [Comparing custom images and formulas in DevTest Labs](devtest-lab-comparing-vm-base-image-types.md).
-
-### When should I use a formula vs. custom image?
-Typically, the deciding factor in this scenario is cost and reuse. If many users or labs require an image with a lot of software on top of the base image, you can reduce costs by creating a custom image. The image is created once, which reduces both the setup time of the virtual machine and the cost incurred from the virtual machine running while setup occurs.
-
-However, an additional factor to note is the frequency of changes to your software package. If you run daily builds and require that software to be on your users' virtual machines, consider using a formula instead of a custom image.
-
-For a more in-depth explanation, see [Comparing custom images and formulas](devtest-lab-comparing-vm-base-image-types.md) in DevTest Labs.
-
-### How do I automate the process of uploading VHD files to create custom images?
-
-To automate uploading VHD files to create custom images, you have two options:
-
-- Use [AzCopy](../storage/common/storage-use-azcopy-v10.md) to copy or upload VHD files to the storage account that's associated with the lab.
-- Use [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). Storage Explorer is a standalone app that runs on Windows, OS X, and Linux.
-
-To find the destination storage account that's associated with your lab:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. On the left menu, select **Resource Groups**.
-3. Find and select the resource group that's associated with your lab.
-4. Under **Overview**, select one of the storage accounts.
-5. Select **Blobs**.
-6. Look for uploads in the list. If none exists, return to step 4 and try another storage account.
-7. Use the **URL** as the destination in your AzCopy command.
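-
-With the destination URL in hand, an AzCopy v10 upload is a one-line sketch. The local path, storage account, container, and SAS token below are placeholders:
-
-```powershell
-# Upload a local VHD to the lab's storage account with AzCopy v10.
-# Append a SAS token with write permission to the destination URL.
-azcopy copy "C:\images\custom.vhd" "https://<storage account>.blob.core.windows.net/uploads/custom.vhd?<SAS token>"
-```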
-
-
-### When should I use an Azure Marketplace image vs. my own custom organizational image?
-Azure Marketplace should be used by default unless you have specific concerns or organizational requirements. Some common examples include:
-
-- Complex software setup that requires an application to be included as part of the base image.
-- Installation and setup of an application could take many hours, which isn't an efficient use of compute time to be added on an Azure Marketplace image.
-- Developers and testers require access to a virtual machine quickly, and want to minimize the setup time of a new virtual machine.
-- Compliance or regulatory conditions (for example, security policies) that must be in place for all machines.
-- Using custom images shouldn't be considered lightly. They introduce extra complexity, as you now must manage VHD files for those underlying base images. You also need to routinely patch those base images with software updates. These updates include new operating system (OS) updates, and any updates or configuration changes needed for the software package itself.
-
-## Artifacts
-
-### What are artifacts?
-Artifacts are customizable elements that you can use to deploy your latest bits or deploy your dev tools to a VM. Attach artifacts to your VM when you create the VM. After the VM is provisioned, the artifacts deploy and configure your VM. Various pre-existing artifacts are available in our [public GitHub repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts). You can also [author your own artifacts](devtest-lab-artifact-author.md).
-
-### My artifact failed during VM creation. How do I troubleshoot it?
-To learn how to get logs for your failed artifact, see [How to diagnose artifact failures in DevTest Labs](devtest-lab-troubleshoot-artifact-failure.md).
-
-### When should an organization use a public artifact repository vs. private artifact repository in DevTest Labs?
-The [public artifact repository](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts) provides an initial set of software packages that are most commonly used. It helps with rapid deployment without having to invest time to reproduce common developer tools and add-ins. You can choose to deploy your own private repository. You can use a public and a private repository in parallel. You may also choose to disable the public repository. The criteria to deploy a private repository should be driven by the following questions and considerations:
-
-- Does the organization have a requirement to have corporate licensed software as part of their DevTest Labs offering? If the answer is yes, then a private repository should be created.
-- Does the organization develop custom software that provides a specific operation, which is required as part of the overall provisioning process? If the answer is yes, then a private repository should be deployed.
-- If the organization's governance policy requires isolation, and external repositories aren't under direct configuration management by the organization, a private artifact repository should be deployed. As part of this process, an initial copy of the public repository can be copied and integrated with the private repository. Then, the public repository can be disabled so that no one within the organization can access it anymore. This approach forces all users within the organization to use only a single repository that is approved by the organization, and minimizes configuration drift.
-
-### Should an organization plan for a single repository or allow multiple repositories?
-As part of your organization's overall governance and configuration management strategy, we recommend that you use a centralized repository. When you use multiple repositories, they may become silos of unmanaged software over time. With a central repository, multiple teams can consume artifacts from this repository for their projects. It enforces standardization and security, eases management, and eliminates duplication of effort. As part of the centralization, the following actions are recommended practices for long-term management and sustainability:
-
-- Associate the Azure Repos repository with the same Azure Active Directory tenant that the Azure subscription uses for authentication and authorization.
-- Create a centrally managed group named `All DevTest Labs Developers` in Azure Active Directory. Any developer who contributes to artifact development should be placed in this group.
-- The same Azure Active Directory group can be used to provide access to the Azure Repos repository and to the lab.
-- In Azure Repos, use branching or forking to separate an in-development repository from the primary production repository. Content is added to the main branch only with a pull request after a proper code review. Once the code reviewer approves the change, a lead developer, who is responsible for maintenance of the main branch, merges the updated code.
-
-## CI/CD integration
-
-### Does DevTest Labs integrate with my CI/CD toolchain?
-If you're using Azure DevOps, you can use a [DevTest Labs Tasks extension](https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks) to automate your release pipeline in DevTest Labs. Some of the tasks that you can do with this extension include:
-
-- Create and deploy a VM automatically. You also can configure the VM with the latest build by using Azure File Copy or PowerShell Azure DevOps Services tasks.
-- Automatically capture the state of a VM after testing to reproduce a bug on the same VM for further investigation.
-- Delete the VM at the end of the release pipeline, when it's no longer needed.
-
-The following blog posts offer guidance and information about using the Azure DevOps Services extension:
-
-- [DevTest Labs and the Azure DevOps extension](integrate-environments-devops-pipeline.md)
-- [Deploy a new VM in an existing DevTest Labs lab from Azure DevOps Services](https://www.visualstudiogeeks.com/blog/DevOps/Deploy-New-VM-To-Existing-AzureDevTestLab-From-VSTS)
-- [Using Azure DevOps Services release management for continuous deployments to DevTest Labs](https://www.visualstudiogeeks.com/blog/DevOps/Use-VSTS-ReleaseManagement-to-Deploy-and-Test-in-AzureDevTestLabs)
-
-For other continuous integration (CI)/continuous delivery (CD) toolchains, you can achieve the same scenarios by deploying [Azure Resource Manager templates](https://azure.microsoft.com/resources/templates/) by using [Azure PowerShell cmdlets](../azure-resource-manager/templates/deploy-powershell.md) and [.NET SDKs](https://www.nuget.org/packages/Microsoft.Azure.Management.DevTestLabs/). You also can use [REST APIs for DevTest Labs](https://aka.ms/dtlrestapis) to integrate with your toolchain.
-
-## Networking
-
-### When should I create a new virtual network for my DevTest Labs environment vs. using an existing virtual network?
-If your VMs need to interact with existing infrastructure, then consider using an existing virtual network inside your DevTest Labs environment. If you use ExpressRoute, you may want to minimize the number of virtual networks and subnets so that you don't fragment the IP address space that gets assigned for use in the subscriptions.
-
-Consider using the virtual network peering pattern here ([Hub-Spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)) too. This approach enables vnet/subnet communication across subscriptions. Otherwise, each DevTest Labs environment could have its own virtual network.
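-
-As a sketch of the hub-spoke approach, a peering from the lab's virtual network to a hub network can be created with the Az PowerShell module. The network names, resource groups, and the remote network ID are placeholders, and a matching peering must also be created in the reverse direction from the hub network:
-
-```powershell
-# Peer the lab (spoke) virtual network to a hub virtual network.
-# All names and IDs are placeholders for your own environment.
-$labVnet = Get-AzVirtualNetwork -Name "lab-vnet" -ResourceGroupName "lab-rg"
-Add-AzVirtualNetworkPeering -Name "lab-to-hub" `
-    -VirtualNetwork $labVnet `
-    -RemoteVirtualNetworkId "/subscriptions/<subscription Id>/resourceGroups/hub-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet"
-```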
-
-There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md) on the number of virtual networks per subscription. The default amount is 50, though this limit can be raised to 100.
-
-### When should I use a shared IP vs. public IP vs. private IP?
-
-If you use a site-to-site VPN or Express Route, consider using private IPs so that your machines are accessible via your internal network, and inaccessible over public internet.
-
-> [!NOTE]
-> Lab owners can change this subnet policy to ensure that no one accidentally creates public IP addresses for their VMs. The subscription owner should create a subscription policy preventing public IPs from being created.
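-
-The policy rule portion of such a subscription-level Azure Policy definition could look like the following sketch:
-
-```json
-{
-  "if": {
-    "field": "type",
-    "equals": "Microsoft.Network/publicIPAddresses"
-  },
-  "then": {
-    "effect": "deny"
-  }
-}
-```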
-
-When using shared public IPs, the virtual machines in a lab share a public IP address. This approach can be helpful when you need to avoid breaching the limits on public IP addresses for a given subscription.
-
-### How do I ensure that development and test virtual machines are unable to reach the public internet? Are there any recommended patterns to set up network configuration?
-
-Yes. There are two aspects to consider: inbound and outbound traffic.
-
-- **Inbound traffic** - If the virtual machine doesn't have a public IP address, then it can't be reached from the internet. A common approach is to ensure that a subscription-level policy is set, such that no user can create a public IP address.
-- **Outbound traffic** - If you want to prevent virtual machines from accessing the public internet directly and force traffic through a corporate firewall, then you can route traffic on-premises via ExpressRoute or VPN, by using forced routing.
-
-> [!NOTE]
-> If you have a proxy server that blocks traffic without proxy settings, do not forget to add exceptions to the lab's artifact storage account.
-
-You could also use network security groups for virtual machines or subnets. This step adds an additional layer of protection to allow or block traffic.
-
-## Troubleshooting
-
-### Why isn't my existing virtual network saving properly?
-One possibility is that your virtual network name contains periods. If so, try removing the periods or replacing them with hyphens. Then, try again to save the virtual network.
-
-### Why do I get a "Parent resource not found" error when I provision a VM from PowerShell?
-When one resource is a parent to another resource, the parent resource must exist before you create the child resource. If the parent resource doesn't exist, you see a **ParentResourceNotFound** message. If you don't specify a dependency on the parent resource, the child resource might be deployed before the parent.
-
-VMs are child resources under a lab in a resource group. When you use Resource Manager templates to deploy VMs by using PowerShell, the resource group name provided in the PowerShell script should be the resource group name of the lab. For more information, see [Troubleshoot common Azure deployment errors](../azure-resource-manager/templates/common-deployment-errors.md).
-
-### Where can I find more error information if a VM deployment fails?
-VM deployment errors are captured in activity logs. You can find lab VM activity logs under **Audit logs** or **Virtual machine diagnostics** on the resource menu on the lab's VM page (the page appears after you select the VM from the My virtual machines list).
-
-Sometimes, the deployment error occurs before VM deployment begins. An example is when the subscription limit for a resource that was created with the VM is exceeded. In this case, the error details are captured in the lab-level activity logs. Activity logs are located at the bottom of the **Configuration and policies** settings. For more information about using activity logs in Azure, see [View activity logs to audit actions on resources](../azure-resource-manager/management/view-activity-logs.md).
-
-### Why do I get "location is not available for resource type" error when trying to create a lab?
-You may see an error message similar to the following one when you try to create a lab:
-
-```
-The provided location 'australiacentral' is not available for resource type 'Microsoft.KeyVault/vaults'. List of available regions for the resource type is 'northcentralus,eastus,northeurope,westeurope,eastasia,southeastasia,eastus2,centralus,southcentralus,westus,japaneast,japanwest,australiaeast,australiasoutheast,brazilsouth,centralindia,southindia,westindia,canadacentral,canadaeast,uksouth,ukwest,westcentralus,westus2,koreacentral,koreasouth,francecentral,southafricanorth
-```
-
-You can resolve this error by taking one of the following steps:
-
-#### Option 1
-Check availability of the resource type in Azure regions on the [Products available by region](https://azure.microsoft.com/global-infrastructure/services/) page. If the resource type isn't available in a certain region, DevTest Labs doesn't support creation of a lab in that region. Select another region when creating your lab.
-
-#### Option 2
-If the resource type is available in your region, check whether it's registered with your subscription. Registration can be done by a subscription owner, as shown in [this article](../azure-resource-manager/management/resource-providers-and-types.md).
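-
-Checking and registering the resource provider from PowerShell is a short sketch. `Microsoft.KeyVault` matches the error message above; substitute the provider named in your own error:
-
-```powershell
-# Check the registration state of the provider in the current subscription,
-# then register it. Registration requires sufficient subscription rights.
-Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
-Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
-```
-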
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-get-started.md
A lab in Azure DevTest Labs acts as a great container for transient activities l
- [Policies](devtest-lab-set-lab-policy.md) ensure trainees only get the number of resources, such as virtual machines, that they need.
- Pre-configured and created machines are [claimed](devtest-lab-add-claimable-vm.md) with single action from trainee.
-- Labs are shared with trainees by accessing [URL for the lab](devtest-lab-faq.md#how-do-i-share-a-direct-link-to-my-lab).
+- Labs are shared with trainees by accessing [URL for the lab](/azure/devtest-labs/devtest-lab-faq#how-do-i-share-a-direct-link-to-my-lab).
- [Expiration dates](devtest-lab-add-vm.md#steps-to-add-a-vm-to-a-lab-in-azure-devtest-labs) on virtual machines ensure that machines are deleted after they are no longer needed.
-- It's easy to [delete a lab](devtest-lab-delete-lab-vm.md#delete-a-lab) and all [related resources](devtest-lab-faq.md#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) when the training is over.
+- It's easy to [delete a lab](devtest-lab-delete-lab-vm.md#delete-a-lab) and all [related resources](/azure/devtest-labs/devtest-lab-faq#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) when the training is over.
For more information, see [Use Azure DevTest Labs for training](devtest-lab-training-lab.md).
A **proof of concept** deployment focuses on a concentrated effort from a single
Read the following articles:
- [DevTest Labs concepts](devtest-lab-concepts.md)
-- [DevTest Labs FAQ](devtest-lab-faq.md)
+- [DevTest Labs FAQ](devtest-lab-faq.yml)
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
However, an additional factor to note is the frequency of changes to your softwa
How can I set up an easily repeatable process to bring my custom organizational images into a DevTest Labs environment?

### Answer
-See [this video on Image Factory pattern](./devtest-lab-faq.md#blog-post). This scenario is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.
+See [this video on Image Factory pattern](/azure/devtest-labs/devtest-lab-faq#blog-post). This scenario is an advanced scenario, and the scripts provided are sample scripts only. If any changes are required, you need to manage and maintain the scripts used in your environment.
Using DevTest Labs to create a custom image pipeline in Azure Pipelines:
-- [Introduction: Get VMs ready in minutes by setting up an image factory in Azure DevTest Labs](./devtest-lab-faq.md#blog-post)
-- [Image Factory – Part 2! Setup Azure Pipelines and Factory Lab to Create VMs](./devtest-lab-faq.md#blog-post)
-- [Image Factory – Part 3: Save Custom Images and Distribute to Multiple Labs](./devtest-lab-faq.md#blog-post)
-- [Video: Custom Image Factory with Azure DevTest Labs](./devtest-lab-faq.md#blog-post)
+- [Introduction: Get VMs ready in minutes by setting up an image factory in Azure DevTest Labs](/azure/devtest-labs/devtest-lab-faq#blog-post)
+- [Image Factory – Part 2! Setup Azure Pipelines and Factory Lab to Create VMs](/azure/devtest-labs/devtest-lab-faq#blog-post)
+- [Image Factory – Part 3: Save Custom Images and Distribute to Multiple Labs](/azure/devtest-labs/devtest-lab-faq#blog-post)
+- [Video: Custom Image Factory with Azure DevTest Labs](/azure/devtest-labs/devtest-lab-faq#blog-post)
## Patterns to set up network configuration
devtest-labs Devtest Lab Manage Formulas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-manage-formulas.md
To delete a formula, follow these steps:
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]

## Related blog posts
-* [Custom images or formulas?](devtest-lab-faq.md#what-is-the-difference-between-a-custom-image-and-a-formula)
+* [Custom images or formulas?](/azure/devtest-labs/devtest-lab-faq#what-is-the-difference-between-a-custom-image-and-a-formula)
## Next steps

Once you have created a formula for use when creating a VM, the next step is to [add a VM to your lab](devtest-lab-add-vm.md).
devtest-labs Devtest Lab Test Env https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-test-env.md
In this article, you learn about various Azure DevTest Labs features used to mee
| | |
| [Configure Azure Marketplace images](devtest-lab-configure-marketplace-images.md) |Learn how you can allow Azure Marketplace images, making available for selection only the images you want for the testers.|
| [Create a custom image](devtest-lab-create-template.md) |Create a custom image by pre-installing the software you need so that testers can quickly create a VM using the custom image.|
- | [Learn about image factory](./devtest-lab-faq.md#blog-post) |Watch a video that describes how to set up and use an image factory.|
+ | [Learn about image factory](/azure/devtest-labs/devtest-lab-faq#blog-post) |Watch a video that describes how to set up and use an image factory.|
3. **Create reusable templates for test machines**
In this article, you learn about various Azure DevTest Labs features used to mee
| Task | What you learn |
| | |
| [Define lab policies](devtest-lab-set-lab-policy.md) |Control costs by setting policies in the lab. |
- | [Delete all the lab VMs using a PowerShell script](devtest-lab-faq.md#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when testing is complete.|
+ | [Delete all the lab VMs using a PowerShell script](/azure/devtest-labs/devtest-lab-faq#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when testing is complete.|
1. **Add a virtual network to a Lab**
In this article, you learn about various Azure DevTest Labs features used to mee
6. **Share the lab with each tester**
- Labs can be directly accessed using a link that you share with your testers. They don't even have to have an Azure account, as long as they have a [Microsoft account](devtest-lab-faq.md#what-is-a-microsoft-account). Testers cannot see VMs created by other testers.
+ Labs can be directly accessed using a link that you share with your testers. They don't even have to have an Azure account, as long as they have a [Microsoft account](/azure/devtest-labs/devtest-lab-faq#what-is-a-microsoft-account). Testers cannot see VMs created by other testers.
Learn more by clicking on the links in the following table:
In this article, you learn about various Azure DevTest Labs features used to mee
| | |
| [Add a tester to a lab in Azure DevTest Labs](devtest-lab-add-devtest-user.md) |Use the Azure portal to add testers to your lab.|
| [Add testers to the lab using a PowerShell script](devtest-lab-add-devtest-user.md#add-an-external-user-to-a-lab-using-powershell) |Use PowerShell to automate adding testers to your lab. |
- | [Get a link to the lab](devtest-lab-faq.md#how-do-i-share-a-direct-link-to-my-lab) |Learn how testers can directly access a lab via a hyperlink.|
+ | [Get a link to the lab](/azure/devtest-labs/devtest-lab-faq#how-do-i-share-a-direct-link-to-my-lab) |Learn how testers can directly access a lab via a hyperlink.|
7. **Automate lab creation for more teams**
In this article, you learn about various Azure DevTest Labs features used to mee
| Task | What you learn |
| | |
- | [Create a lab using a Resource Manager template](devtest-lab-faq.md#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
+ | [Create a lab using a Resource Manager template](/azure/devtest-labs/devtest-lab-faq#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]
devtest-labs Devtest Lab Training Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-training-lab.md
In this article, you learn about various Azure DevTest Labs features that can be
| Task | What you learn |
| | |
| [Define lab policies](devtest-lab-set-lab-policy.md) |Control costs by setting policies in the lab. |
- | [Delete all the lab VMs using a PowerShell script](devtest-lab-faq.md#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when the training is complete. |
+ | [Delete all the lab VMs using a PowerShell script](/azure/devtest-labs/devtest-lab-faq#how-do-i-automate-the-process-of-deleting-all-the-vms-in-my-lab) |Delete all the labs in one operation when the training is complete. |
5. **Share the lab with each trainee**
- Labs can be directly accessed using a link that you share with your trainees. Your trainees don't even have to have an Azure account, as long as they have a [Microsoft account](devtest-lab-faq.md#what-is-a-microsoft-account). Trainees cannot see VMs created by other trainees.
+ Labs can be directly accessed using a link that you share with your trainees. Your trainees don't even have to have an Azure account, as long as they have a [Microsoft account](/azure/devtest-labs/devtest-lab-faq#what-is-a-microsoft-account). Trainees cannot see VMs created by other trainees.
Learn more by clicking on the links in the following table:
In this article, you learn about various Azure DevTest Labs features that can be
| | |
| [Add a trainee to a lab in Azure DevTest Labs](devtest-lab-add-devtest-user.md) |Use the Azure portal to add trainees to your training lab. |
| [Add trainees to the lab using a PowerShell script](devtest-lab-add-devtest-user.md#add-an-external-user-to-a-lab-using-powershell) |Use PowerShell to automate adding trainees to your training lab. |
- | [Get a link to the lab](devtest-lab-faq.md#how-do-i-share-a-direct-link-to-my-lab) |Learn how a lab can be directly accessed via a hyperlink. |
+ | [Get a link to the lab](/azure/devtest-labs/devtest-lab-faq#how-do-i-share-a-direct-link-to-my-lab) |Learn how a lab can be directly accessed via a hyperlink. |
6. **Reuse the lab again and again**

   You can automate lab creation, including custom settings, by creating a Resource Manager template and using it to create identical labs again and again.
In this article, you learn about various Azure DevTest Labs features that can be
| Task | What you learn |
| | |
- | [Create a lab using a Resource Manager template](devtest-lab-faq.md#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
+ | [Create a lab using a Resource Manager template](/azure/devtest-labs/devtest-lab-faq#how-do-i-create-a-lab-from-a-resource-manager-template) |Create labs in Azure DevTest Labs using Resource Manager templates. |
[!INCLUDE [devtest-lab-try-it-out](../../includes/devtest-lab-try-it-out.md)]
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/extend-devtest-labs-azure-functions.md
The last step in this walkthrough is to test the Azure function.
Azure Functions can help extend the functionality of DevTest Labs beyond what's already built-in and help customers meet their unique requirements for their teams. This pattern can be extended and expanded further to cover even more. To learn more about DevTest Labs, see the following articles:
- [DevTest Labs Enterprise Reference Architecture](devtest-lab-reference-architecture.md)
-- [Frequently Asked Questions](devtest-lab-faq.md)
+- [Frequently Asked Questions](devtest-lab-faq.yml)
- [Scaling up DevTest Labs](devtest-lab-guidance-scale.md)
- [Automating DevTest Labs with PowerShell](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/Modules/Library/Tests)
devtest-labs Import Virtual Machines From Another Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/import-virtual-machines-from-another-lab.md
POST https://management.azure.com/subscriptions/<DestinationSubscriptionID>/reso
See the following articles:
- [Set policies for a lab](devtest-lab-set-lab-policy.md)
-- [Frequently asked questions](devtest-lab-faq.md)
+- [Frequently asked questions](devtest-lab-faq.yml)
devtest-labs Personal Data Delete Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/personal-data-delete-export.md
The exported data can be manipulated and visualized using tools, like SQL Server
See the following articles:
- [Set policies for a lab](devtest-lab-set-lab-policy.md)
-- [Frequently asked questions](devtest-lab-faq.md)
+- [Frequently asked questions](devtest-lab-faq.yml)
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/resource-group-control.md
How to use this API:
See the following articles:
- [Set policies for a lab](devtest-lab-set-lab-policy.md)
-- [Frequently asked questions](devtest-lab-faq.md)
+- [Frequently asked questions](devtest-lab-faq.yml)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-apis-sdks.md
Update calls for twins and relationships use [JSON Patch](http://jsonpatch.com/)
The following list provides additional detail and general guidelines for using the APIs and SDKs.
-* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [How-to: Make requests with Postman](how-to-use-postman.md).
+* You can use an HTTP REST-testing tool like Postman to make direct calls to the Azure Digital Twins APIs. For more information about this process, see [How-to: Make API requests with Postman](how-to-use-postman.md).
* To use the SDK, instantiate the `DigitalTwinsClient` class. The constructor requires credentials that can be obtained with a variety of authentication methods in the `Azure.Identity` package. For more on `Azure.Identity`, see its [namespace documentation](/dotnet/api/azure.identity?view=azure-dotnet&preserve-view=true).
* You may find the `InteractiveBrowserCredential` useful while getting started, but there are several other options, including credentials for [managed identity](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true), which you will likely use to authenticate [Azure functions set up with MSI](../app-service/overview-managed-identity.md?tabs=dotnet) against Azure Digital Twins. For more about `InteractiveBrowserCredential`, see its [class documentation](/dotnet/api/azure.identity.interactivebrowsercredential?view=azure-dotnet&preserve-view=true).
* Requests to the Azure Digital Twins APIs require a user or service principal that is a part of the same [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) tenant where the Azure Digital Twins instance resides. To prevent malicious scanning of Azure Digital Twins endpoints, requests with access tokens from outside the originating tenant will be returned a "404 Sub-Domain not found" error message. This error will be returned *even if* the user or service principal was given an Azure Digital Twins Data Owner or Azure Digital Twins Data Reader role through [Azure AD B2B](../active-directory/external-identities/what-is-b2b.md) collaboration. For information on how to achieve access across multiple tenants, see [How-to: Write app authentication code](how-to-authenticate-client.md#authenticate-across-tenants).
From here, you can view the metrics for your instance and create custom views.
## Next steps

See how to make direct requests to the APIs using Postman:
-* [How-to: Make requests with Postman](how-to-use-postman.md)
+* [How-to: Make API requests with Postman](how-to-use-postman.md)
Or, practice using the .NET SDK by creating a client app with this tutorial:

* [Tutorial: Code a client app](tutorial-code.md)
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-security.md
Azure supports two types of managed identities: system-assigned and user-assigne
You can use a system-assigned managed identity for your Azure Digital Instance to authenticate to a [custom-defined endpoint](concepts-route-events.md#create-an-endpoint). Azure Digital Twins supports system-assigned identity-based authentication to endpoints for [Event Hub](../event-hubs/event-hubs-about.md) and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) destinations, and to an [Azure Storage Container](../storage/blobs/storage-blobs-introduction.md) endpoint for [dead-letter events](concepts-route-events.md#dead-letter-events). [Event Grid](../event-grid/overview.md) endpoints are currently not supported for managed identities.
-For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [How-to: Route events with a managed identity](./how-to-route-with-managed-identity.md).
+For instructions on how to enable a system-managed identity for Azure Digital Twins and use it to route events, see [How-to: Route events with a managed identity](how-to-route-with-managed-identity.md).
## Private network access with Azure Private Link (preview)
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
Once you have created the endpoint resources, you can use them for an Azure Digi
1. Finish creating your endpoint by selecting _Save_.

>[!IMPORTANT]
-> In order to successfully use identity-based authentication for your endpoint, you'll need to create a managed identity for your instance by following the steps in [How-to: Route events with a managed identity](./how-to-route-with-managed-identity.md).
+> In order to successfully use identity-based authentication for your endpoint, you'll need to create a managed identity for your instance by following the steps in [How-to: Route events with a managed identity](how-to-route-with-managed-identity.md).
After creating your endpoint, you can verify that the endpoint was successfully created by checking the notification icon in the top Azure portal bar:
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
- Title: Make requests with Postman
+ Title: Make API requests with Postman
-description: Learn how to configure and use Postman to test the Azure Digital Twins APIs.
+description: Learn how to configure and use Postman to call the Azure Digital Twins APIs.
Previously updated : 11/10/2020 Last updated : 6/16/2021+ # How to use Postman to send requests to the Azure Digital Twins APIs
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/partner-events-overview.md
Title: Azure Event Grid - Partner Events description: Send events from third-party Event Grid SaaS and PaaS partners directly to Azure services with Azure Event Grid. Previously updated : 11/10/2020 Last updated : 06/15/2021 # Partner Events in Azure Event Grid (preview)
You may want to use the Partner Events if you've one or more of the following re
## Available third-party event publishers

A third-party event publisher must go through an [onboarding process](partner-onboarding-overview.md) before a subscriber can start consuming its events.
-If you're a subscriber and would like a third-party service to expose its events through Event Grid,
+> [!NOTE]
+> If you would like a third-party service to expose its events through Event Grid, submit the idea on the [User Voice portal](https://feedback.azure.com/forums/909934-azure-event-grid).
### Auth0

**Auth0** is the first partner publisher available. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auth0](auth0-how-to.md).
-If you would like a third-party service to expose its events through Event Grid, submit the idea on the [User Voice portal](https://feedback.azure.com/forums/909934-azure-event-grid).
## Resources managed by event publishers

Event publishers create and manage the following resources:
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
While not required, for best compatibility with this feature, the recommended lo
<{Log Level}> {Timestamp} {Message Text}
```
-`{Timestamp}` should be formatted as `yyyy-MM-dd hh:mm:ss.fff zzz`, and `{Log Level}` should follow the table below, which derives its severity levels from the [Severity code in the Syslog standard](https://wikipedia.org/wiki/Syslog#Severity_level).
+`{Timestamp}` should be formatted as `yyyy-MM-dd HH:mm:ss.fff zzz`, and `{Log Level}` should follow the table below, which derives its severity levels from the [Severity code in the Syslog standard](https://wikipedia.org/wiki/Syslog#Severity_level).
| Value | Severity |
|-|-|
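The recommended layout above can be produced with a short C helper. This is an illustrative sketch only: the `format_log_line` name is not part of any IoT Edge SDK, milliseconds are fixed at `.000` for brevity, and `strftime`'s `%z` prints the UTC offset without a colon.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

// Format a log line following the recommended
// "<{Log Level}> {Timestamp} {Message Text}" layout,
// with the timestamp as yyyy-MM-dd HH:mm:ss.fff zzz.
void format_log_line(char *buf, size_t buflen, int level, const char *message)
{
    char ts[48];
    time_t now = time(NULL);
    struct tm tm_now;
    localtime_r(&now, &tm_now);
    // Milliseconds are hard-coded to .000 to keep this sketch short.
    strftime(ts, sizeof(ts), "%Y-%m-%d %H:%M:%S.000 %z", &tm_now);
    snprintf(buf, buflen, "<%d> %s %s", level, ts, message);
}
```

A module would call this with the Syslog-derived severity value from the table, for example `format_log_line(buf, sizeof buf, 6, "module started")` for an informational message.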
iot-pnp Howto Convert To Pnp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/howto-convert-to-pnp.md
+
+ Title: Convert an existing device to use IoT Plug and Play | Microsoft Docs
+description: This article describes how to convert your existing device code to work with IoT Plug and Play by creating a device model and then sending the model ID when the device connects.
++ Last updated : 05/14/2021+++++
+# How to convert an existing device to be an IoT Plug and Play device
+
+This article outlines the steps you should follow to convert an existing device to an IoT Plug and Play device. It describes how to create the model that every IoT Plug and Play device requires, and the necessary code changes to enable the device to function as an IoT Plug and Play device.
+
+For the code samples, this article shows C code that uses an MQTT library to connect to an IoT hub. You can apply the changes described in this article to devices implemented with other languages and SDKs.
+
+To convert your existing device to be an IoT Plug and Play device:
+
+1. Review your device code to understand the telemetry, properties, and commands it implements.
+1. Create a model that describes the telemetry, properties, and commands your device implements.
+1. Modify the device code to announce the model ID when it connects to your service.
+
+## Review your device code
+
+Before you create a model for your device, you need to understand the existing capabilities of your device:
+
+- The telemetry the device sends on a regular basis.
+- The read-only and writable properties the device synchronizes with your service.
+- The commands invoked from the service that the device responds to.
+
+For example, review the following device code snippets that implement various device capabilities. These examples are based on the sample in the [PnPMQTTWin32-Before](https://github.com/Azure-Samples/IoTMQTTSample/tree/master/src/Windows/PnPMQTTWin32-Before) repository:
+
+The following snippet shows the device sending temperature telemetry:
+
+```c
+#define TOPIC "devices/" DEVICEID "/messages/events/"
+
+// ...
+
+void Thermostat_SendCurrentTemperature()
+{
+ char msg[] = "{\"temperature\":25.6}";
+ int msgId = rand();
+ int rc = mosquitto_publish(mosq, &msgId, TOPIC, sizeof(msg) - 1, msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+ {
+ printf("Error: %s\r\n", mosquitto_strerror(rc));
+ }
+}
+```
+
+The name of the telemetry field is `temperature` and its type is float or double. The model definition for this telemetry type looks like the following JSON. To learn more, see [Design a model](#design-a-model) below:
+
+```json
+{
+ "@type": [
+ "Telemetry"
+ ],
+ "name": "temperature",
+ "displayName": "Temperature",
+ "description": "Temperature in degrees Celsius.",
+ "schema": "double"
+}
+```
+
+The following snippet shows the device reporting a property value:
+
+```c
+#define DEVICETWIN_MESSAGE_PATCH "$iothub/twin/PATCH/properties/reported/?$rid=patch_temp"
+
+static void SendMaxTemperatureSinceReboot()
+{
+ char msg[] = "{\"maxTempSinceLastReboot\": 42.500}";
+ int msgId = rand();
+ int rc = mosquitto_publish(mosq, &msgId, DEVICETWIN_MESSAGE_PATCH, sizeof(msg) - 1, msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+ {
+ printf("Error: %s\r\n", mosquitto_strerror(rc));
+ }
+}
+```
+
+The name of the property is `maxTempSinceLastReboot` and its type is float or double. This property is reported by the device; the device never receives an update for this value from the service. The model definition for this property looks like the following JSON. To learn more, see [Design a model](#design-a-model) below:
+
+```json
+{
+ "@type": [
+ "Property"
+ ],
+ "name": "maxTempSinceLastReboot",
+ "schema": "double",
+ "displayName": "Max temperature since last reboot.",
+ "description": "Returns the max temperature since last device reboot."
+}
+```
+
+The following snippet shows the device responding to messages from the service:
+
+```c
+void message_callback(struct mosquitto* mosq, void* obj, const struct mosquitto_message* message)
+{
+ printf("Message received: %s payload: %s \r\n", message->topic, (char*)message->payload);
+
+ if (strncmp(message->topic, "$iothub/methods/POST/getMaxMinReport/?$rid=1",37) == 0)
+ {
+ char* pch;
+ char* context;
+ int msgId = 0;
+ pch = strtok_s((char*)message->topic, "=",&context);
+ while (pch != NULL)
+ {
+ pch = strtok_s(NULL, "=", &context);
+ if (pch != NULL) {
+ char * pEnd;
+ msgId = strtol(pch,&pEnd,16 );
+ }
+ }
+ char topic[64];
+ sprintf_s(topic, "$iothub/methods/res/200/?$rid=%d", msgId);
+ char msg[] = "{\"maxTemp\":83.51,\"minTemp\":77.68}";
+ int rc = mosquitto_publish(mosq, &msgId, topic, sizeof(msg) - 1, msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+ {
+ printf("Error: %s\r\n", mosquitto_strerror(rc));
+ }
+ delete pch;
+ }
+
+ if (strncmp(message->topic, "$iothub/twin/PATCH/properties/desired/?$version=1", 38) == 0)
+ {
+ char* pch;
+ char* context;
+ int version = 0;
+ pch = strtok_s((char*)message->topic, "=", &context);
+ while (pch != NULL)
+ {
+ pch = strtok_s(NULL, "=", &context);
+ if (pch != NULL) {
+ char* pEnd;
+ version = strtol(pch, &pEnd, 10);
+ }
+ }
+ // To do: Parse payload and extract target value
+ char msg[128];
+ int value = 46;
+ sprintf_s(msg, "{\"targetTemperature\":{\"value\":%d,\"ac\":200,\"av\":%d,\"ad\":\"success\"}}", value, version);
+ int rc = mosquitto_publish(mosq, &version, DEVICETWIN_MESSAGE_PATCH, strlen(msg), msg, 1, true);
+ if (rc != MOSQ_ERR_SUCCESS)
+ {
+ printf("Error: %s\r\n", mosquitto_strerror(rc));
+ }
+ delete pch;
+ }
+}
+```
+
+The `$iothub/methods/POST/getMaxMinReport/` topic receives a request for a command called `getMaxMinReport` from the service. The request can include a payload with command parameters. The device sends a response with a payload that includes `maxTemp` and `minTemp` values.
+
+The `$iothub/twin/PATCH/properties/desired/` topic receives property updates from the service. This example assumes the property update is for the `targetTemperature` property. It responds with an acknowledgment that looks like `{\"targetTemperature\":{\"value\":46,\"ac\":200,\"av\":12,\"ad\":\"success\"}}`.
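The acknowledgment shape described above (`value`, `ac` status code, `av` version, `ad` description) can be built with a small helper. This sketch is illustrative only; the `build_ack_payload` name is not part of any SDK:

```c
#include <stdio.h>
#include <string.h>

// Build the IoT Plug and Play writable-property acknowledgment payload
// for targetTemperature: "value" echoes the applied value, "ac" is the
// status code, "av" the property version, "ad" a human-readable description.
void build_ack_payload(char *buf, size_t buflen, int value, int version)
{
    snprintf(buf,
             buflen,
             "{\"targetTemperature\":{\"value\":%d,\"ac\":200,\"av\":%d,\"ad\":\"success\"}}",
             value,
             version);
}
```

For example, `build_ack_payload(buf, sizeof buf, 46, 12)` produces exactly the acknowledgment string shown above.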
+
+In summary, the sample implements the following capabilities:
+
+| Name | Capability type | Details |
+| - | -- | - |
+| temperature | Telemetry | Assume the data type is double |
+| maxTempSinceLastReboot | Property | Assume the data type is double |
+| targetTemperature | Writable property | Data type is integer |
+| getMaxMinReport | Command | Returns JSON with `maxTemp` and `minTemp` fields of type double |
+
+## Design a model
+
+Every IoT Plug and Play device has a model that describes the features and capabilities of the device. The model uses the [Digital Twin Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) to describe the device capabilities.
+
+For a simple model that maps the existing capabilities of your device, use the *Telemetry*, *Property*, and *Command* DTDL elements.
+
+A DTDL model for the sample described in the previous section looks like the following example:
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:ConvertSample;1",
+ "@type": "Interface",
+ "displayName": "Simple device",
+ "description": "Example that shows model for simple device converted to act as an IoT Plug and Play device.",
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "Temperature"
+ ],
+ "name": "temperature",
+ "displayName": "Temperature",
+ "description": "Temperature in degrees Celsius.",
+ "schema": "double",
+ "unit": "degreeCelsius"
+ },
+ {
+ "@type": [
+ "Property",
+ "Temperature"
+ ],
+ "name": "targetTemperature",
+ "schema": "double",
+ "displayName": "Target Temperature",
+ "description": "Allows to remotely specify the desired target temperature.",
+ "unit": "degreeCelsius",
+ "writable": true
+ },
+ {
+ "@type": [
+ "Property",
+ "Temperature"
+ ],
+ "name": "maxTempSinceLastReboot",
+ "schema": "double",
+ "unit": "degreeCelsius",
+ "displayName": "Max temperature since last reboot.",
+ "description": "Returns the max temperature since last device reboot."
+ },
+ {
+ "@type": "Command",
+ "name": "getMaxMinReport",
+ "displayName": "Get Max-Min report.",
+ "description": "This command returns the max and min temperature.",
+ "request": {
+ },
+ "response": {
+ "name": "tempReport",
+ "displayName": "Temperature Report",
+ "schema": {
+ "@type": "Object",
+ "fields": [
+ {
+ "name": "maxTemp",
+ "displayName": "Max temperature",
+ "schema": "double"
+ },
+ {
+ "name": "minTemp",
+ "displayName": "Min temperature",
+ "schema": "double"
+ }
+ ]
+ }
+ }
+ }
+ ]
+}
+```
+
+In this model:
+
+- The `name` and `schema` values map to the data the device sends and receives.
+- All the capabilities are grouped in a single interface.
+- The `@type` fields identify the DTDL types such as **Property** and **Command**.
+- Fields such as `unit`, `displayName`, and `description` provide extra information for the service to use. For example, IoT Central uses these values when it displays data on device dashboards.
+
+To learn more, see [IoT Plug and Play conventions](concepts-convention.md) and [IoT Plug and Play modeling guide](concepts-modeling-guide.md).
+
+## Update the code
+
+If your device is already working with IoT Hub or IoT Central, you don't need to make any changes to the implementation of its telemetry, property, and command capabilities. To make the device follow the IoT Plug and Play conventions, modify the way that the device connects to your service so that it announces the ID of the model you created. The service can then use the model to understand the device capabilities. For example, IoT Central can use the model ID to automatically retrieve the model from a repository and generate a device template for your device.
+
+IoT devices connect to your IoT service either through the Device Provisioning Service (DPS) or directly with a connection string.
+
+If your device uses DPS to connect, include the model ID in the payload you send when you register the device. For the example model shown previously, the payload looks like:
+
+```json
+{
+ "modelId" : "dtmi:com:example:ConvertSample;1"
+}
+```
+
+To learn more, see [Runtime Registration - Register Device](/rest/api/iot-dps/runtimeregistration/registerdevice).
+
+If your device uses DPS to connect or connects directly with a connection string, include the model ID when your code connects to IoT Hub. For example:
+
+```c
+#define USERNAME IOTHUBNAME ".azure-devices.net/" DEVICEID "/?api-version=2020-09-30&model-id=dtmi:com:example:ConvertSample;1"
+
+// ...
+
+mosquitto_username_pw_set(mosq, USERNAME, PWD);
+
+// ...
+
+rc = mosquitto_connect(mosq, HOST, PORT, 10);
+```
+
+## Next steps
+
+Now that you know how to convert an existing device to be an IoT Plug and Play device, a suggested next step is to [Install and use the DTDL authoring tools](howto-use-dtdl-authoring-tools.md) to help you build a DTDL model.
key-vault Overview Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview-throttling.md
Key Vault was originally created with the limits specified in [Azure Key Vault s
1. If you use Key Vault to store credentials for a service, check if that service supports Azure AD Authentication to authenticate directly. This reduces the load on Key Vault, improves reliability and simplifies your code since Key Vault can now use the Azure AD token. Many services have moved to using Azure AD Auth. See the current list at [Services that support managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources).
1. Consider staggering your load/deployment over a longer period of time to stay under the current RPS limits.
1. If your app comprises multiple nodes that need to read the same secret(s), then consider using a fan out pattern, where one entity reads the secret from Key Vault, and fans out to all nodes. Cache the retrieved secrets only in memory.
-If you find that the above still does not meet your needs, please fill out the below table and contact us to determine what additional capacity can be added (example put below for illustrative purposes only).
-| Vault name | Vault Region | Object type (Secret, Key, or Cert) | Operation(s)* | Key Type | Key Length or Curve | HSM key?| Steady state RPS needed | Peak RPS needed |
-|--|--|--|--|--|--|--|--|--|
-| https://mykeyvault.vault.azure.net/ | | Key | Sign | EC | P-256 | No | 200 | 1000 |
-
-\* For a full list of possible values, see [Azure Key Vault operations](/rest/api/keyvault/key-operations).
-
-If additional capacity is approved, please note the following as result of the capacity increases:
-1. Data consistency model changes. Once a vault is allow listed with additional throughput capacity, the Key Vault service data consistency guarantee changes (necessary to meet higher volume RPS since the underlying Azure Storage service cannot keep up). In a nutshell:
- 1. **Without allow listing**: The Key Vault service will reflect the results of a write operation (eg. SecretSet, CreateKey) immediately in subsequent calls (eg. SecretGet, KeySign).
- 1. **With allow listing**: The Key Vault service will reflect the results of a write operation (eg. SecretSet, CreateKey) within 60 seconds in subsequent calls (eg. SecretGet, KeySign).
-1. Client code must honor back-off policy for 429 retries. The client code calling the Key Vault service must not immediately retry Key Vault requests when it receives a 429 response code. The Azure Key Vault throttling guidance published here recommends applying exponential backoff when receiving a 429 Http response code.
-
-If you have a valid business case for higher throttle limits, please contact us.
## How to throttle your app in response to service limits
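The exponential back-off recommended for 429 responses can be sketched as follows. This is a hedged illustration: `call` is a stand-in for any Key Vault request, the delay constants are arbitrary, and real code should also honor a `Retry-After` header when the service returns one.

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff while it signals throttling (429)."""
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        # Exponential backoff with a little jitter, per the throttling guidance.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    return call()  # final attempt; the caller handles a persistent 429

# Simulated service: throttled twice, then successful.
responses = iter([(429, None), (429, None), (200, "secret")])
status, body = call_with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 secret
```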
lab-services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/capacity-limits.md
Once you submit the support request, we will review the request. If necessary, w
## Next steps
See the following articles:
- [Administrator Guide - VM sizing](administrator-guide.md#vm-sizing).
-- [Frequently asked questions](classroom-labs-faq.md).
+- [Frequently asked questions](classroom-labs-faq.yml).
lab-services Classroom Labs Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/classroom-labs-faq.md
- Title: Labs in Azure Lab Services — FAQ | Microsoft Docs
-description: This article provides answers to frequently asked questions (FAQ) about labs in Azure Lab Services.
- Previously updated : 06/26/2020--
-# Labs in Azure Lab Services — Frequently asked questions (FAQ)
-Get answers to some of the most common questions about labs in Azure Lab Services.
-
-## Quotas
-
-### Is the quota per user or per week or per entire duration of the lab?
-The quota you set for a lab is for each student for the entire duration of the lab. The [scheduled running time of VMs](how-to-create-schedules.md) doesn't count against the quota allotted to a user. The quota is for the time outside of scheduled hours that a student spends on VMs. For more information on quotas, see [Set quotas for users](how-to-configure-student-usage.md#set-quotas-for-users).
-
-### If educator turns on a student VM, does that affect the student quota?
-No. When an educator turns on a student VM, it doesn't affect the quota allotted to the student.
-
-## Schedules
-
-### Do all VMs in the lab start automatically when a schedule is set?
-No. Only the VMs that are assigned to users are started automatically on a schedule. VMs that aren't assigned to a user aren't started automatically. This behavior is by design.
-
-## Lab accounts
-
-### Why am I not able to create a lab because of unavailability of the address range?
-
-Labs can create lab VMs within an IP address range you specify when creating your lab account in the Azure portal. When an address range is provided, each lab created afterward is allotted 512 IP addresses for its lab VMs. The address range for the lab account must be large enough to accommodate all the labs you intend to create under the lab account.
-
-For example, if you have a /19 block such as 10.0.0.0/19, the address range accommodates 8,192 IP addresses, or 16 labs (8192/512 = 16). In this case, creating the 17th lab fails.
-
-### What port ranges should I open on my organization's firewall setting to connect to Lab virtual machines via RDP/SSH?
-
-The ports are: 49152–65535. Labs sit behind a load balancer. Each lab has a single public IP address, and each virtual machine in the lab has a unique port.
-
-You can also see the private IP address of each virtual machine on the **Virtual machine pool** tab of the home page for lab in the Azure portal. If you republish a lab, the public IP address of the lab will not change, but the private IP and port number of each virtual machine in the lab can change. You can learn more in the article: [Firewall settings for Azure Lab Services](how-to-configure-firewall-settings.md).
-
-### What public IP address range should I open on my organization's firewall settings to connect to Lab virtual machines via RDP/SSH?
-See [Azure IP Ranges and Service Tags — Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519), which provides the public IP address ranges for Azure data centers. You can open the IP addresses for the regions where your lab accounts are located.
-
-## Virtual machine images
-
-### As a lab creator, why can't I enable additional image options in the virtual machine images dropdown when creating a new lab?
-
-When an administrator adds you as a lab creator to a lab account, you're given the permissions to create labs. But, you don't have the permissions to edit any settings inside the lab account, including the list of enabled virtual machine images. To enable additional images, contact your lab account administrator to do it for you, or ask the administrator to add you as a Contributor role to the lab account. The Contributor role will give you the permissions to edit the virtual machine image list in the lab account.
-
-### Can I attach additional disks to a virtual machine?
-No. It's not possible to attach additional disks to a VM in a classroom lab.
-
-## Users
-
-### How many users can be in a classroom lab?
-You can add up to 400 users to a classroom lab.
-
-## Blog post
-Subscribe to the [Azure Lab Services blog](https://aka.ms/azlabs-blog).
-
-## Update notifications
-Subscribe to [Lab Services updates](https://azure.microsoft.com/updates/?product=lab-services) to stay informed about new features in Lab Services.
-
-## General
-### What if my question isn't answered here?
-If your question isn't listed here, let us know, so we can help you find an answer.
-
-- Post a question at the end of this FAQ.
-- To reach a wider audience, post a question on the [Azure Lab Services — Tech community forum](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices).
-- For feature requests, submit your requests and ideas to [Azure Lab Services — User Voice](https://feedback.azure.com/forums/320373-lab-services?category_id=352774).
-
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/execute-python-script.md
Previously updated : 01/02/2021 Last updated : 06/15/2021 # Execute Python Script module
The Execute Python Script module contains sample Python code that you can use as
Any file contained in the uploaded zipped archive can be used during pipeline execution. If the archive includes a directory structure, the structure is preserved.
- > [!WARNING]
- > **Don't** use **app** as the name of folder or your script, since **app** is a reserved word for built-in services. But you can use other namespaces like `app123`.
+ > [!IMPORTANT]
+ > Use unique, meaningful names for the files in your script bundle, because some common words (such as `test` and `app`) are reserved for built-in services.
Following is a script bundle example, which contains a python script file and a txt file:
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-data-encryption.md
The OS disk for each compute node stored in Azure Storage is encrypted with Micr
Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. The disk is encrypted by default for workspaces with the `hbi_workspace` parameter set to `TRUE`. This environment is short-lived only for the duration of your run, and encryption support is limited to system-managed keys only.
-The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. The local temporary disk on compute instance is encrypted with Microsoft managed keys for workspaces with the `hbi_workspace` parameter set to `TRUE`.
+The OS disk for a compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. The local temporary disk on a compute instance is encrypted with Microsoft-managed keys for workspaces with the `hbi_workspace` parameter set to `TRUE`. Customer-managed key encryption isn't supported for the OS and temporary disks.
### Azure Databricks
machine-learning Dsvm Tools Deep Learning Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
Deep learning frameworks on the DSVM are listed below.
| Category | Value | |--|--|
-| Version(s) supported | 2.4 |
+| Version(s) supported | 2.5 |
| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 | | How is it configured / installed on the DSVM? | Installed in Python, conda environment 'py38_tensorflow' | | How to run it | Terminal: Activate the correct environment, and then run Python. <br/> * Jupyter: Connect to [Jupyter](provision-vm.md) or [JupyterHub](dsvm-ubuntu-intro.md#how-to-access-the-ubuntu-data-science-virtual-machine), and then open the TensorFlow directory for samples. |
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-custom-container.md
+
+ Title: Deploy a custom container as a managed online endpoint
+
+description: Learn how to use a custom container to use open-source servers in Azure Machine Learning
++++++ Last updated : 06/16/2021++++
+# Deploy a TensorFlow model served with TF Serving using a custom container in a managed online endpoint (preview)
+
+Learn how to deploy a custom container as a managed online endpoint in Azure Machine Learning.
+
+Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication.
++
+> [!WARNING]
+> Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
+
+## Prerequisites
+
+* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the 2.0 CLI (preview)](how-to-configure-cli.md).
+
+* You must have an Azure resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+
+* You must have an Azure Machine Learning workspace. You'll have such a workspace if you configured your ML extension per the above article.
+
+* If you've not already set the defaults for Azure CLI, you should save your default settings. To avoid having to repeatedly pass in the values, run:
+
+ ```azurecli
+ az account set --subscription <subscription id>
+ az configure --defaults workspace=<azureml workspace name> group=<resource group>
+ ```
+
+* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
+
+## Download source code
+
+To follow along with this tutorial, download the source code below.
+
+```azurecli-interactive
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/cli
+```
+
+## Initialize environment variables
+
+Define environment variables:
++
+## Download a TensorFlow model
+
+Download and unzip a model that divides an input by two and adds 2 to the result:
++
+## Run a TF Serving image locally to test that it works
+
+Use docker to run your image locally for testing:
++
+### Check that you can send liveness and scoring requests to the image
+
+First, check that the container is "alive," meaning that the process inside the container is still running. You should get a 200 (OK) response.
++
+Then, check that you can get predictions about unlabeled data:
++
+### Stop the image
+
+Now that you've tested locally, stop the image:
++
+## Create a YAML file for your endpoint
+
+You can configure your cloud deployment using YAML. Take a look at the sample YAML for this endpoint:
++
+There are a few important concepts to notice in this YAML:
+
+### Readiness route vs. liveness route
+
+An HTTP server can optionally define paths for both _liveness_ and _readiness_. A liveness route is used to check whether the server is running. A readiness route is used to check whether the server is ready to do some work. In machine learning inference, a server could respond 200 OK to a liveness request before loading a model. The server could respond 200 OK to a readiness request only after the model has been loaded into memory.
+
+Review the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) for more information about liveness and readiness probes.
+
+Notice that this deployment uses the same path for both liveness and readiness, since TF Serving only defines a liveness route.
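The liveness/readiness distinction can be sketched with two plain handler functions. This is an illustrative model only, not TF Serving's actual implementation; the class name and status codes are assumptions for the sketch.

```python
# Sketch: liveness answers as soon as the process runs; readiness returns 200
# only once the model is in memory. Handlers return an HTTP status code.
class InferenceServer:
    def __init__(self):
        self.model = None

    def load_model(self):
        self.model = object()  # stand-in for reading model weights from disk

    def liveness(self):
        return 200             # process is up, even if the model isn't loaded

    def readiness(self):
        return 200 if self.model is not None else 503

server = InferenceServer()
print(server.liveness(), server.readiness())  # 200 503
server.load_model()
print(server.liveness(), server.readiness())  # 200 200
```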
+
+### Locating the mounted model
+
+When you deploy a model as a real-time endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting enables you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: `/var/azureml-app/azureml-models/foo/1`
+
+So, for example, if you have the following directory structure on your local machine:
+
+```
+azureml-examples
+ cli
+ endpoints
+ online
+ custom-container
+ half_plus_two
+ tfserving-endpoint.yml
+```
+
+and `tfserving-endpoint.yml` contains:
+
+```yaml
+model:
+ name: tfserving-mounted
+ version: 1
+ local_path: ./half_plus_two
+```
+
+then your model will be located at the following location in your endpoint:
+
+```
+var
+ azureml-app
+ azureml-models
+ tfserving-mounted
+ 1
+ half_plus_two
+```
+```
+
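The mounting convention can be expressed as a small helper. This is an illustrative sketch of the default path rule described above; `mounted_model_path` is a hypothetical name, not an Azure Machine Learning API.

```python
def mounted_model_path(name: str, version: int) -> str:
    # Default location where a registered model is mounted inside the container.
    return f"/var/azureml-app/azureml-models/{name}/{version}"

print(mounted_model_path("tfserving-mounted", 1))
# /var/azureml-app/azureml-models/tfserving-mounted/1
```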
+### Create the endpoint
+
+Now that you understand how the YAML is constructed, create your endpoint. This command can take a few minutes to complete.
++
+### Invoke the endpoint
+
+Once your deployment completes, see if you can make a scoring request to the deployed endpoint.
++
+### Delete endpoint and model
+
+Now that you've successfully scored with your endpoint, you can delete it:
++
+## Next steps
+
+- [Safe rollout for online endpoints (preview)](how-to-safely-rollout-managed-endpoints.md)
+- [Troubleshooting managed online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
+- [TorchServe sample](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-torchserve.sh)
machine-learning How To Deploy Custom Docker Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-custom-docker-image.md
- Title: Deploy models with custom Docker image-
-description: Learn how to use a custom Docker base image to deploy your Azure Machine Learning models. While Azure Machine Learning provides a default base image for you, you can also use your own base image.
------ Previously updated : 11/16/2020----
-# Deploy a model using a custom Docker base image
-
-Learn how to use a custom Docker base image when deploying trained models with Azure Machine Learning.
-
-Azure Machine Learning will use a default base Docker image if none is specified. You can find the specific Docker image used with `azureml.core.runconfig.DEFAULT_CPU_IMAGE`. You can also use Azure Machine Learning __environments__ to select a specific base image, or use a custom one that you provide.
-
-A base image is used as the starting point when an image is created for a deployment. It provides the underlying operating system and components. The deployment process then adds additional components, such as your model, conda environment, and other assets, to the image.
-
-Typically, you create a custom base image when you want to use Docker to manage your dependencies, maintain tighter control over component versions or save time during deployment. You might also want to install software required by your model, where the installation process takes a long time. Installing the software when creating the base image means that you don't have to install it for each deployment.
-
-> [!IMPORTANT]
-> When you deploy a model, you cannot override core components such as the web server or IoT Edge components. These components provide a known working environment that is tested and supported by Microsoft.
-
-> [!WARNING]
-> Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
-
-This document is broken into two sections:
-
-* Create a custom base image: Provides information to admins and DevOps on creating a custom image and configuring authentication to an Azure Container Registry using the Azure CLI and Machine Learning CLI.
-* Deploy a model using a custom base image: Provides information to Data Scientists and DevOps / ML Engineers on using custom images when deploying a trained model from the Python SDK or ML CLI.
-
-## Prerequisites
-
-* An Azure Machine Learning workspace. For more information, see the [Create a workspace](how-to-manage-workspace.md) article.
-* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
-* The [Azure CLI](/cli/azure/install-azure-cli).
-* The [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md).
-* An [Azure Container Registry](../container-registry/index.yml) or other Docker registry that is accessible on the internet.
-* The steps in this document assume that you are familiar with creating and using an __inference configuration__ object as part of model deployment. For more information, see [Where to deploy and how](how-to-deploy-and-where.md).
-
-## Create a custom base image
-
-The information in this section assumes that you are using an Azure Container Registry to store Docker images. Use the following checklist when planning to create custom images for Azure Machine Learning:
-
-* Will you use the Azure Container Registry created for the Azure Machine Learning workspace, or a standalone Azure Container Registry?
-
- When using images stored in the __container registry for the workspace__, you do not need to authenticate to the registry. Authentication is handled by the workspace.
-
- > [!WARNING]
- > The Azure Container Registry for your workspace is __created the first time you train or deploy a model__ using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.
-
- When using images stored in a __standalone container registry__, you will need to configure a service principal that has at least read access. You then provide the service principal ID (username) and password to anyone that uses images from the registry. The exception is if you make the container registry publicly accessible.
-
- For information on creating a private Azure Container Registry, see [Create a private container registry](../container-registry/container-registry-get-started-azure-cli.md).
-
- For information on using service principals with Azure Container Registry, see [Azure Container Registry authentication with service principals](../container-registry/container-registry-auth-service-principal.md).
-
-* Azure Container Registry and image information: Provide the image name to anyone that needs to use it. For example, an image named `myimage`, stored in a registry named `myregistry`, is referenced as `myregistry.azurecr.io/myimage` when using the image for model deployment
-
-### Image requirements
-
-Azure Machine Learning only supports Docker images that provide the following software:
-* Ubuntu 16.04 or greater.
-* Conda 4.5.# or greater.
-* Python 3.6+.
-
-To use Datasets, please install the libfuse-dev package. Also make sure to install any user space packages you may need.
-
-Azure ML maintains a set of CPU and GPU base images published to Microsoft Container Registry that you can optionally leverage (or reference) instead of creating your own custom image. To see the Dockerfiles for those images, refer to the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repository.
-
-For GPU images, Azure ML currently offers both cuda9 and cuda10 base images. The major dependencies installed in these base images are:
-
-| Dependencies | IntelMPI CPU | OpenMPI CPU | IntelMPI GPU | OpenMPI GPU |
-| | | | | |
-| miniconda | ==4.5.11 | ==4.5.11 | ==4.5.11 | ==4.5.11 |
-| mpi | intelmpi==2018.3.222 |openmpi==3.1.2 |intelmpi==2018.3.222| openmpi==3.1.2 |
-| cuda | - | - | 9.0/10.0 | 9.0/10.0/10.1 |
-| cudnn | - | - | 7.4/7.5 | 7.4/7.5 |
-| nccl | - | - | 2.4 | 2.4 |
-| git | 2.7.4 | 2.7.4 | 2.7.4 | 2.7.4 |
-
-The CPU images are built from ubuntu16.04. The GPU images for cuda9 are built from nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04. The GPU images for cuda10 are built from nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04.
-<a id="getname"></a>
-
-> [!IMPORTANT]
-> When using custom Docker images, it is recommended that you pin package versions in order to better ensure reproducibility.
-
-### Get container registry information
-
-In this section, learn how to get the name of the Azure Container Registry for your Azure Machine Learning workspace.
-
-> [!WARNING]
-> The Azure Container Registry for your workspace is __created the first time you train or deploy a model__ using the workspace. If you've created a new workspace, but not trained or created a model, no Azure Container Registry will exist for the workspace.
-
-If you've already trained or deployed models using Azure Machine Learning, a container registry was created for your workspace. To find the name of this container registry, use the following steps:
-
-1. Open a new shell or command-prompt and use the following command to authenticate to your Azure subscription:
-
- ```azurecli-interactive
- az login
- ```
-
- Follow the prompts to authenticate to the subscription.
-
- [!INCLUDE [select-subscription](../../includes/machine-learning-cli-subscription.md)]
-
-2. Use the following command to list the container registry for the workspace. Replace `<myworkspace>` with your Azure Machine Learning workspace name. Replace `<resourcegroup>` with the Azure resource group that contains your workspace:
-
- ```azurecli-interactive
- az ml workspace show -w <myworkspace> -g <resourcegroup> --query containerRegistry
- ```
-
- [!INCLUDE [install extension](../../includes/machine-learning-service-install-extension.md)]
-
- The information returned is similar to the following text:
-
- ```text
- /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.ContainerRegistry/registries/<registry_name>
- ```
-
- The `<registry_name>` value is the name of the Azure Container Registry for your workspace.
-
-### Build a custom base image
-
-The steps in this section walk-through creating a custom Docker image in your Azure Container Registry. For sample dockerfiles, see the [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers) GitHub repo).
-
-1. Create a new text file named `Dockerfile`, and use the following text as the contents:
-
- ```text
- FROM ubuntu:16.04
-
- ARG CONDA_VERSION=4.9.2
- ARG PYTHON_VERSION=3.7
- ARG AZUREML_SDK_VERSION=1.27.0
- ARG INFERENCE_SCHEMA_VERSION=1.1.0
-
- ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
- ENV PATH /opt/miniconda/bin:$PATH
- ENV DEBIAN_FRONTEND=noninteractive
-
- RUN apt-get update --fix-missing && \
- apt-get install -y wget bzip2 && \
- apt-get install -y fuse && \
- apt-get clean -y && \
- rm -rf /var/lib/apt/lists/*
-
- RUN useradd --create-home dockeruser
- WORKDIR /home/dockeruser
- USER dockeruser
-
- RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-${CONDA_VERSION}-Linux-x86_64.sh -O ~/miniconda.sh && \
- /bin/bash ~/miniconda.sh -b -p ~/miniconda && \
- rm ~/miniconda.sh && \
- ~/miniconda/bin/conda clean -tipsy
- ENV PATH="/home/dockeruser/miniconda/bin/:${PATH}"
-
- RUN conda install -y conda=${CONDA_VERSION} python=${PYTHON_VERSION} && \
- pip install azureml-defaults==${AZUREML_SDK_VERSION} inference-schema==${INFERENCE_SCHEMA_VERSION} &&\
- conda clean -aqy && \
- rm -rf ~/miniconda/pkgs && \
- find ~/miniconda/ -type d -name __pycache__ -prune -exec rm -rf {} \;
- ```
-
-2. From a shell or command-prompt, use the following to authenticate to the Azure Container Registry. Replace the `<registry_name>` with the name of the container registry you want to store the image in:
-
- ```azurecli-interactive
- az acr login --name <registry_name>
- ```
-
-3. To upload the Dockerfile, and build it, use the following command. Replace `<registry_name>` with the name of the container registry you want to store the image in:
-
- ```azurecli-interactive
- az acr build --image myimage:v1 --registry <registry_name> --file Dockerfile .
- ```
-
- > [!TIP]
- > In this example, a tag of `:v1` is applied to the image. If no tag is provided, a tag of `:latest` is applied.
-
- During the build process, information is streamed back to the command line. If the build is successful, you receive a message similar to the following text:
-
- ```text
- Run ID: cda was successful after 2m56s
- ```
-
-For more information on building images with an Azure Container Registry, see [Build and run a container image using Azure Container Registry Tasks](../container-registry/container-registry-quickstart-task-cli.md)
-
-For more information on uploading existing images to an Azure Container Registry, see [Push your first image to a private Docker container registry](../container-registry/container-registry-get-started-docker-cli.md).
-
-## Use a custom base image
-
-To use a custom image, you need the following information:
-
-* The __image name__. For example, `mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda:latest` is the path to a simple Docker Image provided by Microsoft.
-
- > [!IMPORTANT]
- > For custom images that you've created, be sure to include any tags that were used with the image. For example, if your image was created with a specific tag, such as `:v1`. If you did not use a specific tag when creating the image, a tag of `:latest` was applied.
-
-* If the image is in a __private repository__, you need the following information:
-
- * The registry __address__. For example, `myregistry.azureecr.io`.
- * A service principal __username__ and __password__ that has read access to the registry.
-
- If you do not have this information, speak to the administrator for the Azure Container Registry that contains your image.
-
-### Publicly available base images
-
-Microsoft provides several docker images on a publicly accessible repository, which can be used with the steps in this section:
-
-| Image | Description |
-| -- | -- |
-| `mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda` | Core image for Azure Machine Learning |
-| `mcr.microsoft.com/azureml/onnxruntime:latest` | Contains ONNX Runtime for CPU inferencing |
-| `mcr.microsoft.com/azureml/onnxruntime:latest-cuda` | Contains the ONNX Runtime and CUDA for GPU |
-| `mcr.microsoft.com/azureml/onnxruntime:latest-tensorrt` | Contains ONNX Runtime and TensorRT for GPU |
-| `mcr.microsoft.com/azureml/onnxruntime:latest-openvino-vadm` | Contains ONNX Runtime and OpenVINO for Intel<sup></sup> Vision Accelerator Design based on Movidius<sup>TM</sup> MyriadX VPUs |
-| `mcr.microsoft.com/azureml/onnxruntime:latest-openvino-myriad` | Contains ONNX Runtime and OpenVINO for Intel<sup></sup> Movidius<sup>TM</sup> USB sticks |
-
-For more information about the ONNX Runtime base images see the [ONNX Runtime dockerfile section](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/README.md) in the GitHub repo.
-
-> [!TIP]
-> Since these images are publicly available, you do not need to provide an address, username or password when using them.
-
-For more information, see [Azure Machine Learning containers](https://github.com/Azure/AzureML-Containers) repository on GitHub.
-
-### Use an image with the Azure Machine Learning SDK
-
-To use an image stored in the **Azure Container Registry for your workspace**, or a **container registry that is publicly accessible**, set the following [Environment](/python/api/azureml-core/azureml.core.environment.environment) attributes:
-
-+ `docker.enabled=True`
-+ `docker.base_image`: Set to the registry and path to the image.
-
-```python
-from azureml.core.environment import Environment
-# Create the environment
-myenv = Environment(name="myenv")
-# Enable Docker and reference an image
-myenv.docker.enabled = True
-myenv.docker.base_image = "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda:latest"
-```
-
-To use an image from a __private container registry__ that is not in your workspace, you must use `docker.base_image_registry` to specify the address of the repository and a user name and password:
-
-```python
-# Set the container registry information
-myenv.docker.base_image_registry.address = "myregistry.azurecr.io"
-myenv.docker.base_image_registry.username = "username"
-myenv.docker.base_image_registry.password = "password"
-
-myenv.inferencing_stack_version = "latest" # This will install the inference specific apt packages.
-
-# Define the packages needed by the model and scripts
-from azureml.core.conda_dependencies import CondaDependencies
-conda_dep = CondaDependencies()
-# you must list azureml-defaults as a pip dependency
-conda_dep.add_pip_package("azureml-defaults")
-myenv.python.conda_dependencies=conda_dep
-```
-
-You must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. You must also set the inferencing_stack_version property on the environment to "latest"; this installs the specific apt packages needed by the web service.
-
-After defining the environment, use it with an [InferenceConfig](/python/api/azureml-core/azureml.core.model.inferenceconfig) object to define the inference environment in which the model and web service will run.
-
-```python
-from azureml.core.model import InferenceConfig
-# Use environment in InferenceConfig
-inference_config = InferenceConfig(entry_script="score.py",
- environment=myenv)
-```
-
-At this point, you can continue with deployment. For example, the following code snippet would deploy a web service locally using the inference configuration and custom image:
-
-```python
-from azureml.core.webservice import LocalWebservice, Webservice
-
-deployment_config = LocalWebservice.deploy_configuration(port=8890)
-service = Model.deploy(ws, "myservice", [model], inference_config, deployment_config)
-service.wait_for_deployment(show_output = True)
-print(service.state)
-```
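Once the local service is up, you can exercise it end to end by posting JSON to its scoring endpoint (also available programmatically as `service.scoring_uri`). The sketch below uses only the Python standard library; the port matches the `LocalWebservice` configuration above, while the `/score` path, the `{"data": ...}` body shape, and the feature values are assumptions that must match your own entry script.

```python
import json
import urllib.request

def build_scoring_request(scoring_uri, input_rows):
    """Package input rows as the JSON body a typical score.py run() function expects."""
    body = json.dumps({"data": input_rows}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri, data=body, headers={"Content-Type": "application/json"}
    )

# Port 8890 matches the LocalWebservice configuration above; the /score path and
# the feature values are placeholders for whatever input schema your entry script defines.
request = build_scoring_request("http://localhost:8890/score", [[1.0, 2.0, 3.0]])

# With the local service running, send the request and read the prediction:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```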
-
-For more information on deployment, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
-
-For more information on customizing your Python environment, see [Create and manage environments for training and deployment](how-to-use-environments.md).
-
-### Use an image with the Machine Learning CLI
-
-> [!IMPORTANT]
-> Currently the Machine Learning CLI can use images from the Azure Container Registry for your workspace or publicly accessible repositories. It cannot use images from standalone private registries.
-
-Before deploying a model using the Machine Learning CLI, create an [environment](/python/api/azureml-core/azureml.core.environment.environment) that uses the custom image. Then create an inference configuration file that references the environment. You can also define the environment directly in the inference configuration file. The following JSON document demonstrates how to reference an image in a public container registry. In this example, the environment is defined inline:
-
-```json
-{
- "entryScript": "score.py",
- "environment": {
- "docker": {
- "arguments": [],
- "baseDockerfile": null,
- "baseImage": "mcr.microsoft.com/azureml/o16n-sample-user-base/ubuntu-miniconda:latest",
- "enabled": false,
- "sharedVolumes": true,
- "shmSize": null
- },
- "environmentVariables": {
- "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
- },
- "name": "my-deploy-env",
- "python": {
- "baseCondaEnvironment": null,
- "condaDependencies": {
- "channels": [
- "conda-forge"
- ],
- "dependencies": [
- "python=3.6.2",
- {
- "pip": [
- "azureml-defaults",
- "azureml-telemetry",
- "scikit-learn",
- "inference-schema[numpy-support]"
- ]
- }
- ],
- "name": "project_environment"
- },
- "condaDependenciesFile": null,
- "interpreterPath": "python",
- "userManagedDependencies": false
- },
- "version": "1"
- }
-}
-```
-
-This file is used with the `az ml model deploy` command. The `--ic` parameter is used to specify the inference configuration file.
-
-```azurecli
-az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json --ct akscomputetarget
-```
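Because the inference configuration is plain JSON, a small pre-flight check can catch the two requirements called out earlier (an entry script and the `azureml-defaults` pip package) before you invoke `az ml model deploy`. The validator below is an illustrative sketch using only the standard library, not part of the ML CLI:

```python
import json

def validate_inference_config(text):
    """Return a list of problems found in an inference configuration document."""
    problems = []
    config = json.loads(text)
    if not config.get("entryScript"):
        problems.append("entryScript is missing")
    # Walk the inline environment definition down to the pip section.
    deps = (config.get("environment", {})
                  .get("python", {})
                  .get("condaDependencies", {})
                  .get("dependencies", []))
    pip_packages = []
    for dep in deps:
        if isinstance(dep, dict) and "pip" in dep:
            pip_packages = dep["pip"]
    if not any(pkg.startswith("azureml-defaults") for pkg in pip_packages):
        problems.append("azureml-defaults must be listed as a pip dependency")
    return problems

sample = """{"entryScript": "score.py",
             "environment": {"python": {"condaDependencies": {
                 "dependencies": ["python=3.6.2", {"pip": ["azureml-defaults"]}]}}}}"""
print(validate_inference_config(sample))  # -> [] (no problems found)
```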
-
-For more information on deploying a model using the ML CLI, see the "model registration, profiling, and deployment" section of the [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md#model-registration-profiling-deployment) article.
-
-## Next steps
-
-* Learn more about [Where to deploy and how](how-to-deploy-and-where.md).
-* Learn how to [Train and deploy machine learning models using Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning).
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-designer-import-data.md
Previously updated : 11/13/2020 Last updated : 06/13/2021
If you register a file dataset, the output port type of the dataset is **AnyDire
### Limitations - Currently, you can only visualize tabular datasets in the designer. If you register a file dataset outside the designer, you cannot visualize it in the designer canvas.-- Your dataset is stored in virtual network (VNet). If you want to visualize, you need to enable workspace managed identity of the datastore.
- 1. Go to the related datastore and click **Update Credentials**
+- Currently, the designer only supports previewing outputs that are stored in **Azure Blob storage**. You can check and change your output datastore in the **Output settings** under the **Parameters** tab in the right panel of the module.
+- If your data is stored in a virtual network (VNet) and you want to preview it, you need to enable the workspace managed identity for the datastore.
+ 1. Go to the related datastore and click **Update authentication**
:::image type="content" source="./media/resource-known-issues/datastore-update-credential.png" alt-text="Update Credentials"::: 1. Select **Yes** to enable workspace managed identity. :::image type="content" source="./media/resource-known-issues/enable-workspace-managed-identity.png" alt-text="Enable Workspace Managed Identity":::
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-train-mlflow-projects.md
Previously updated : 05/25/2021 Last updated : 06/16/2021
-# Train ML models with MLflow Projects and Azure Machine Learning (preview)
+# Train ML models with MLflow Projects and Azure Machine Learning
-In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking, or migrate your runs to the cloud, for example to an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
[MLflow Projects](https://mlflow.org/docs/latest/projects.html) allow you to organize and describe your code so that other data scientists (or automated tools) can run it. MLflow Projects with Azure Machine Learning enable you to track and manage your training runs in your workspace.
In this article, learn how to enable MLflow's tracking URI and logging API, coll
[Learn more about the MLflow and Azure Machine Learning integration.](how-to-use-mlflow.md).
->[!NOTE]
-> As an open source library, MLflow changes frequently. As such, the functionality made available via the Azure Machine Learning and MLflow integration should be considered as a preview, and not fully supported by Microsoft.
- > [!TIP] > The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
In this article, learn how to enable MLflow's tracking URI and logging API, coll
See [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md) for additional MLflow and Azure Machine Learning functionality integrations.
-If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
+If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning](how-to-train-mlflow-projects.md).
> [!TIP] > The information in this document is primarily for data scientists and developers who want to monitor the model training process. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
marketplace Azure Vm Create Certification Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-create-certification-faq.md
When you publish your virtual machine (VM) image to Azure Marketplace, the Azure
This article explains common error messages during VM image publishing, along with related solutions. > [!NOTE]
-> If you have questions about this article or suggestions for improvement, contact [Partner Center support](https://aka.ms/marketplacepublishersupport).
+> If you have questions about this article or suggestions for improvement, contact [Partner Center support](https://go.microsoft.com/fwlink/?linkid=2165533).
## VM extension failure
For more information, please visit [VM Extension](../virtual-machines/extensions
- [Configure VM offer properties](azure-vm-create-properties.md) - [Active marketplace rewards](marketplace-rewards.md)-- If you have questions or feedback for improvement, contact [Partner Center support](https://aka.ms/marketplacepublishersupport)
+- If you have questions or feedback for improvement, contact [Partner Center support](https://go.microsoft.com/fwlink/?linkid=2165533)
marketplace Azure Vm Image Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-image-test.md
In this example, curl will be used to make a POST API call to Azure Active Di
## Next steps -- Sign in to [Partner Center](https://partner.microsoft.com/).
+- Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
marketplace Business Applications Isv Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/business-applications-isv-program.md
Participation in this program requires you to review and accept the [Business Ap
> [!NOTE] > This step requires an *Owner* or *Manager* role in Partner Center for your account to sign legal agreements.
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard).
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507).
1. Select **Settings** (gear icon) > **Account Settings**. 1. Select **Agreements**. 1. Select the version link and view the agreement.
Set up billing information for the Business Applications ISV Connect Program.
> [!NOTE] > This step requires an *Owner* or *Manager* role in Partner Center for your account to update billing information.
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard).
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507).
1. Select **Settings** (gear icon) > **Account Settings**. 1. Under **Organization profile**, select **Billing profile** and then the **Developer** tab. 1. Review the primary contact and billing information that is populated from your legal entity.
marketplace Cloud Partner Portal Api Go Live https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-partner-portal-api-go-live.md
Last updated 07/14/2020
-# Go Live
+# Go live API
> [!NOTE] > The Cloud Partner Portal APIs are integrated with and will continue working in Partner Center. The transition introduces small changes. Review the changes listed in [Cloud Partner Portal API Reference](./cloud-partner-portal-api-overview.md) to ensure your code continues working after transitioning to Partner Center. CPP APIs should only be used for existing products that were already integrated before transition to Partner Center; new products should use Partner Center submission APIs.
marketplace Cloud Partner Portal Migration Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-partner-portal-migration-faq.md
The offers you created in Cloud Partner Portal are available in Partner Center u
### Access the right program in Partner Center
-1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) with the same credentials used to sign into the Cloud Partner Portal. The navigation pane on the left displays options associated with the programs you are enrolled in.
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002) with the same credentials used to sign into the Cloud Partner Portal. The navigation pane on the left displays options associated with the programs you are enrolled in.
Example: assume you have access to three programs: the MPN program, the Referrals program, and the Commercial Marketplace program. When you sign into Partner Center, you will see these three programs on the navigation pane.
If you are part of multiple accounts, in Partner center you will see an account
## How do I create new offers?
-Access the Commercial marketplace program in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) to create new offers. On the Overview page, select **+ New offer**.
+Access the Commercial marketplace program in [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002) to create new offers. On the Overview page, select **+ New offer**.
[![Screenshot shows the Partner Center Overview menu.](media/cpp-pc-faq/new-offer.png "Shows the Partner Center Overview menu")](media/cpp-pc-faq/new-offer.png#lightbox) ## I can't sign in and need to open a support ticket
-If you can't sign in to your account, you can open a [support ticket](https://partner.microsoft.com/support/v2/?stage=1) here.
+If you can't sign in to your account, you can open a [support ticket](https://go.microsoft.com/fwlink/?linkid=2165533).
## Where are instructions for using Partner Center?
You'll notice some branding changes. For example, *SKUs* are branded as *Plans*
Also, the information you previously provided in the **Marketplace** or **Storefront Details** (Consulting Service, Power BI app) pages in the Cloud Partner Portal is now collected on the **Offer listing** page in Partner Center:
-[![Screenshot shows the Partner Center Offer listing page.](media/cpp-pc-faq/offer-listing.png](media/cpp-pc-faq/offer-listing.png#lightbox)
+[ ![Screenshot shows the Partner Center Offer listing page.](./media/cpp-pc-faq/offer-listing.png) ](./media/cpp-pc-faq/offer-listing.png#lightbox)
The information you previously provided for SKUs in a single page in the Cloud Partner Portal may now be collected throughout several pages in Partner Center:
marketplace Cloud Solution Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/cloud-solution-providers.md
If you've authorized a partner in the CSP program and that partner has already r
If a partner in the CSP program has not sold your product to their customers and you'd like to remove the CSP after your offer has been published, use the following instructions:
-1. Go to the [Support request page](https://aka.ms/marketplacepublishersupport). The first few dropdown menus are automatically filled in for you.
+1. Go to the [Support request page](https://go.microsoft.com/fwlink/?linkid=2165533). The first few dropdown menus are automatically filled in for you.
> [!NOTE] > Don't change the pre-populated dropdown menu selections.
Use this section to navigate between the three CSP reseller options.
If your offer is currently **Option 1: Any partner in the CSP program** and you'd like to navigate to either of the other two options, use the following instructions to create a request:
-1. Go to the [Support request page](https://aka.ms/marketplacepublishersupport). The first few dropdown menus are automatically filled in for you.
+1. Go to the [Support request page](https://go.microsoft.com/fwlink/?linkid=2165533). The first few dropdown menus are automatically filled in for you.
> [!NOTE] > Don't change the pre-populated dropdown menu selections.
If your offer is currently **Option 1: Any partner in the CSP program** and you'
If your offer is currently **Option 2: Specific partners in the CSP program I select** and you'd like to navigate to **Option one: Any partner in the CSP program**, use the following instructions to create a request:
-1. Go to the [Support request page](https://aka.ms/marketplacepublishersupport). The first few dropdown menus are automatically filled in for you.
+1. Go to the [Support request page](https://go.microsoft.com/fwlink/?linkid=2165533). The first few dropdown menus are automatically filled in for you.
> [!NOTE] > Don't change the pre-populated dropdown menu selections.
If your offer is currently **Option 2: Specific partners in the CSP program I se
If your offer is currently **Option 2: Specific partners in the CSP program I select** and you'd like to navigate to **Option 3: No partners in the CSP program**, you'll only be able to navigate to that option if the partners in the CSP program you'd previously authorized have not resold your offer to end customers. Use the following instructions to create a request:
-1. Go to the [Support request page](https://aka.ms/marketplacepublishersupport). The first few dropdown menus are automatically filled in for you.
+1. Go to the [Support request page](https://go.microsoft.com/fwlink/?linkid=2165533). The first few dropdown menus are automatically filled in for you.
> [!NOTE] > Don't change the pre-populated dropdown menu selections.
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-solution-migration.md
After you've enrolled in the commercial marketplace, prepare to migrate your s
Follow these steps before importing your solutions from OCP GTM:
-1. Visit your company's [publisher list](https://partner.microsoft.com/dashboard/account/v3/publishers/list). It includes the account owner, managers, and developers who have publishing access. Learn more about [Partner Center user roles](user-roles.md).
-2. Ask one of the listed contacts to [add users](https://partner.microsoft.com/dashboard/account/usermanagement) to the commercial marketplace as *managers* or *developers*, since only these roles can edit and publish solutions.
+1. Visit your company's [publisher list](https://go.microsoft.com/fwlink/?linkid=2165704). It includes the account owner, managers, and developers who have publishing access. Learn more about [Partner Center user roles](user-roles.md).
+2. Ask one of the listed contacts to [add users](https://go.microsoft.com/fwlink/?linkid=2166003) to the commercial marketplace as *managers* or *developers*, since only these roles can edit and publish solutions.
3. Work with your developers to move your solutions from your OCP GTM account to the commercial marketplace. 4. Decide which of the following you want to do: 1. If you have a solution in OCP GTM that you want to migrate to Partner Center - *to retain referral pipeline, collateral, co-sell status and incentives* - there are two scenarios for you to choose from:
Follow these steps before importing your solutions from OCP GTM:
## Begin the migration of your solutions from OCP GTM
-1. Begin the migration [here](https://partner.microsoft.com/solutions/migration#).
+1. Begin the migration [here](https://go.microsoft.com/fwlink/?linkid=2165807).
2. Select the **Overview** page, then **Click here to get started**. :::image type="content" source="media/co-sell-migrate/welcome-overveiw.png" alt-text="Displays overview page":::
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
For **How do you want potential customers to interact with this listing offer?**
- **Enable app license management through Microsoft** – Manage your app licenses through Microsoft. To let customers run your app's base functionality without a license and run premium features after they've purchased a license, select the **Allow customers to install my app even if licenses are not assigned** box. If you select this second box, you need to configure your solution package to not require a license.
- > You cannot change this setting after you publish your offer. To learn more about this setting, see [Third-party app license management through Microsoft](third-party-license.md).
+ > You cannot change this setting after you publish your offer. To learn more about this setting, see [ISV app license management](isv-app-license.md).
- **Get it now (free)** – List your offer to customers for free. - **Free trial (listing)** – List your offer to customers with a link to a free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft.
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-operations-validation.md
Last updated 05/19/2021
# Dynamics 365 for Operations functional validation
-Publishing a Dynamics 365 for Operations offer in [Partner Center](https://partner.microsoft.com/dashboard/home) requires two functional validations:
+Publishing a Dynamics 365 for Operations offer in [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002) requires two functional validations:
- Upload a demonstration video of the Dynamics 365 environment that shows basic functionality. - Present screenshots that demonstrate the solution's [Lifecycle Services](https://lcs.dynamics.com/) (LCS) environment.
Publishing a Dynamics 365 for Operations offer in [Partner Center](https://partn
There are two options for functional validation: - Hold a 30-minute conference call with us during Pacific Standard time (PST) business hours to demonstrate and record the [LCS](https://lcs.dynamics.com/) environment and solution, or-- In Partner Center, go to [Commercial Marketplace](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) > **Overview** and upload a demo video URL and LCS screenshots on the offer's Supplemental Content tab.
+- In Partner Center, go to [Commercial Marketplace](https://go.microsoft.com/fwlink/?linkid=2165290) and upload a demo video URL and LCS screenshots on the offer's Supplemental Content tab.
The Microsoft certification team reviews the video and files, then either approves the solution or emails you about next steps.
To schedule a final review call, contact [appsourceCRM@microsoft.com](mailto:app
3. Upload to Partner Center. 1. Create a text document that includes the demo video address and screenshots, or save the screenshots as separate JPG files.
- 2. Add the text and images to a .zip file in [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview) on the offer's **Supplemental content** tab.
+ 2. Add the text and images to a .zip file in [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) on the offer's **Supplemental content** tab.
[![Shows the project library window](media/dynamics-365-operations/supplemental-content.png)](media/dynamics-365-operations/supplemental-content.png#lightbox)
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/isv-app-license.md
+
+ Title: ISV app license management - Microsoft AppSource and Azure Marketplace
+description: Learn about managing ISV app licenses through Microsoft.
++++++ Last updated : 04/30/2021++
+# ISV app license management
+
+> [!IMPORTANT]
+> This capability is currently in Public Preview.
+
+Applies to the following offer type:
+
+- Dynamics 365 for Customer Engagement & Power Apps
+
+_ISV app license management_ enables independent software vendors (ISVs) who build solutions using the Dynamics 365 suite of products to manage and enforce licenses for their solutions using systems provided by Microsoft. By adopting this approach, you can:
+
+- Enable your customers to assign and unassign your solution's licenses using familiar tools such as Microsoft 365 Admin Center, which they use to manage Office and Dynamics licenses.
+- Have the Power Platform enforce your licenses at runtime to ensure that only licensed users can access your solution.
+- Save yourself the effort of building and maintaining your own license management and enforcement system.
++
+> [!NOTE]
+> ISV app license management is only available to ISVs participating in the ISV Connect program. Microsoft is not involved in the sale of licenses.
+
+## Prerequisites
+
+To manage your ISV app licenses, you must meet the following prerequisites.
+
+1. Have a valid [Microsoft Partner Network account](/partner-center/mpn-create-a-partner-center-account).
+1. Be signed up for the commercial marketplace program. For more information, see [Create a commercial marketplace account in Partner Center](create-account.md).
+1. Be signed up for the [ISV Connect program](https://partner.microsoft.com/solutions/business-applications/isv-overview). For more information, see [Microsoft Business Applications Independent Software Vendor (ISV) Connect Program onboarding guide](business-applications-isv-program.md).
+1. Your development team must have the development environments and tools required to create Dataverse solutions. Your Dataverse solution must include model-driven applications (currently, these are the only solution components supported through the license management feature).
+
+## High-level process
+
+This table illustrates the high-level process to manage ISV app licenses:
+
+| Step | Details |
+| | - |
+| Step 1: Create offer | The ISV creates an offer in Partner Center and chooses to manage licenses for this offer through Microsoft. This includes defining one or more licensing plans for the offer. For more information, see [Create a Dynamics 365 for Customer Engagement & Power Apps offer on Microsoft AppSource](dynamics-365-customer-engage-offer-setup.md). |
+| Step 2: Update package | The ISV creates a solution package for the offer that includes license plan information as metadata, and uploads it to Partner Center for publication to Microsoft AppSource. To learn more, see [Adding license metadata to your solution](/powerapps/developer/data-platform/appendix-add-license-information-to-your-solution). |
+| Step 3: Purchase licenses | Customers discover the ISV's offer in AppSource or directly on the ISV's website. Customers purchase licenses for the plans they want directly from the ISV (these offers are not purchasable through AppSource at this time). |
+| Step 4: Register deal | The ISV registers the purchase with Microsoft in Partner Center. As part of [deal registration](/partner-center/csp-commercial-marketplace-licensing#register-isv-connect-deal-in-deal-registration), the ISV will specify the type and quantity of each licensing plan purchased by the customer. |
+| Step 5: Manage licenses | The license plans will appear in Microsoft 365 Admin Center for the customer to [assign to users or groups](/microsoft-365/commerce/licenses/manage-third-party-app-licenses) in their organization. The customer can also install the application in their tenant via the Power Platform Admin Center. |
+| Step 6: Perform license check | When a user within the customer's organization tries to run an application, Microsoft checks to ensure that the user has a license before permitting them to run it. If they don't have a license, the user sees a message explaining that they need to contact an administrator for a license. |
+| Step 7: View reports | ISVs can view information on provisioned and assigned licenses over a period of time and by geography. |
+|||
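The license check in step 6 can be pictured as a simple gate over the licenses an admin has assigned. The sketch below is a purely illustrative model of that behavior (the user, plan, and solution names are invented); the real enforcement happens inside the Power Platform, not in ISV code.

```python
# Hypothetical model of the step 6 license check. Actual enforcement is
# performed by the Power Platform at runtime, not by ISV code.
def check_license(user, assigned_licenses, solution):
    """Return (allowed, message) for a user trying to run an ISV solution."""
    if solution in assigned_licenses.get(user, set()):
        return True, "Access granted."
    return False, "A license is required. Contact your administrator."

# Example tenant state after an admin assigns a plan in Microsoft 365 Admin Center.
assigned = {"alice@contoso.com": {"contoso-crm-premium"}}
print(check_license("alice@contoso.com", assigned, "contoso-crm-premium"))
# (True, 'Access granted.')
print(check_license("bob@contoso.com", assigned, "contoso-crm-premium"))
# (False, 'A license is required. Contact your administrator.')
```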
+
+## Enabling app license management through Microsoft
+
+When you create an offer, two check boxes on the **Offer setup** tab are used to enable ISV app license management.
+
+### Enable app license management through Microsoft check box
+
+Here's how it works:
+
+- After you select the **Enable app license management through Microsoft** box, you can define licensing plans for your offer.
+- Customers will see a **Get it now** button on the offer listing page in AppSource. Customers can select this button to contact you to purchase licenses for the app.
+
+### Allow customers to install my app even if licenses are not assigned check box
+
+After you select the first box, the **Allow customers to install my app even if licenses are not assigned** box appears. This option is useful if you are employing a "freemium" licensing strategy whereby you want to offer some basic features of your solution for free to all users and charge for premium features. Conversely, if you want to ensure that only tenants who currently own licenses for your product can download it from AppSource, then don't select this option.
+
+> [!NOTE]
+> If you choose this option, you need to configure your solution package to not require a license.
+
+Here's how it works:
+
+- All AppSource users see the **Get it now** button on the offer listing page along with the **Contact me** button and will be permitted to download and install your offer.
+- If you do not select this option, AppSource checks that the user's tenant has at least one license for your solution before showing the **Get it now** button. If there is no license in the user's tenant, only the **Contact Me** button is shown.
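Taken together, the two check boxes and the tenant's license state determine which buttons the listing shows. The truth-table sketch below is illustrative only; the function and parameter names are invented, and it models the rules described above rather than actual AppSource code.

```python
def appsource_buttons(license_mgmt_enabled, allow_unlicensed_install, tenant_has_license):
    """Model which buttons the offer listing shows, per the rules above."""
    buttons = ["Contact me"]
    if not license_mgmt_enabled:
        return buttons  # License management off: contact-only listing.
    if allow_unlicensed_install or tenant_has_license:
        buttons.insert(0, "Get it now")
    return buttons

# Freemium-style offer: every user can download and install.
print(appsource_buttons(True, True, False))   # ['Get it now', 'Contact me']
# Strict offer viewed from an unlicensed tenant: contact only.
print(appsource_buttons(True, False, False))  # ['Contact me']
```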
+
+For details about configuring an offer, see [How to create a Dynamics 365 for Customer Engagement & Power App offer](dynamics-365-customer-engage-offer-setup.md).
+
+## Offer listing page on AppSource
+
+After your offer is published, the options you chose will drive which buttons appear to a user. This screenshot shows an offer listing page on AppSource with the **Get it now** and **Contact me** buttons.
++
+***Figure 1: Offer listing page on Microsoft AppSource***
+
+## Next steps
+
+- [Plan a Dynamics 365 offer](marketplace-dynamics-365.md)
+- [How to create a Dynamics 365 for Customer Engagement & Power App offer](dynamics-365-customer-engage-offer-setup.md)
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-dynamics-365.md
These are the available licensing options for Dynamics 365 offers:
| Contact me | Collect customer contact information by connecting your Customer Relationship Management (CRM) system. The customer will be asked for permission to share their information. These customer details, along with the offer name, ID, and marketplace source where they found your offer, will be sent to the CRM system that you've configured. For more information about configuring your CRM, see the **Customer leads** section of your offer type's **Offer setup** page. | | Free trial (listing) | Offer your customers a one-, three- or six-month free trial. Offer listing free trials are created, managed, and configured by your service and do not have subscriptions managed by Microsoft. | | Get it now (free) | List your offer to customers for free. |
-| Get it now | Enables you to manage your third-party licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 for Customer Engagement & Power Apps</li></ul><br>For more information about this option, see [Third-party app license management through Microsoft](third-party-license.md). |
+| Get it now | Enables you to manage your ISV app licenses in Partner Center.<br>Currently available to the following offer type only:<ul><li>Dynamics 365 for Customer Engagement & Power Apps</li></ul><br>For more information about this option, see [ISV app license management](isv-app-license.md). |
||| ## Test drive
After you've considered the planning items described above, select one of the fo
| [Dynamics 365 for Business Central](dynamics-365-business-central-offer-setup.md) | | | [Dynamics 365 for Customer Engagement & Power Apps](dynamics-365-customer-engage-offer-setup.md) | First review these additional [publishing processes and guidelines](/dynamics365/customer-engagement/developer/publish-app-appsource). | | [Power BI](./power-bi-app-offer-setup.md) | First review these additional [publishing processes and guidelines](/power-bi/developer/office-store). |
-|||
+|||
marketplace Marketplace Faq Publisher Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-faq-publisher-guide.md
No, there's no cost to publish offers in our commercial marketplace. We keep a s
To create offers in the commercial marketplace, your organization needs to be a Microsoft partner by agreeing to the Microsoft Partner Agreement and accepting the Publisher agreement.
-To sign up to be a commercial marketplace publisher, go to [Partner Center](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/azureisv).
+To sign up to be a commercial marketplace publisher, go to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165614).
### How can customers engage with my offers in the commercial marketplace?
You can also use our [Private Marketplace](/marketplace/create-manage-private-az
### How do I get support assistance for the commercial marketplace?
-To contact our marketplace publisher support team, you can [submit a support ticket](https://aka.ms/marketplacepublishersupport) from within Partner Center.
+To contact our marketplace publisher support team, you can [submit a support ticket](https://go.microsoft.com/fwlink/?linkid=2165533) from within Partner Center.
You can also [join our active community forum](https://www.microsoftpartnercommunity.com/t5/Microsoft-AppSource-and-Azure/bd-p/2222) to learn about best practices and share information.
marketplace Marketplace Rewards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-rewards.md
To check your eligibility for the Marketplace Rewards program, see the [Marketpl
Your steps to get started are easy: 1. Publish an offer in either Microsoft AppSource or Azure Marketplace.
-1. To see your list of benefits, go to the [Marketplace Rewards](https://partner.microsoft.com/dashboard/mpn/program/commercialmarketplace) page in Partner Center, and select the **Sales and Marketing benefits** tab.
+1. To see your list of benefits, go to the [Marketplace Rewards](https://go.microsoft.com/fwlink/?linkid=2165388) page in Partner Center, and select the **Sales and Marketing benefits** tab.
1. To activate sales and marketing benefit, you must first assign a company marketing contact. This contact will receive follow-up communications about your Marketplace Rewards. 1. To add or update your marketing contact information, go to the top of the Sales and Marketing benefits tab on Marketplace Rewards page, then select **Add, update, or change**. Next, do the following:
Your steps to get started are easy:
1. Information about how to use Azure sponsorship benefits will be shared via email as you unlock these benefits. >[!NOTE]
->If your offer has been live for more than four weeks and you have not received a message, check in Partner Center to find who in your organization owns the offer. They should have the communication and next steps. If you cannot determine the owner, or if the owner has left your company, open a [support ticket](https://aka.ms/marketplacepublishersupport).
+>If your offer has been live for more than four weeks and you have not received a message, check in Partner Center to find who in your organization owns the offer. They should have the communication and next steps. If you cannot determine the owner, or if the owner has left your company, open a [support ticket](https://go.microsoft.com/fwlink/?linkid=2165533).
The scope of the activities available to you expands as you grow your offerings in the marketplace. All listings receive a base level of optimization recommendations and promotion as part of a self-serve email of resources and best practices.
marketplace Pc Saas Fulfillment Api V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/pc-saas-fulfillment-api-v2.md
Forbidden. The authorization token is invalid, expired, or was not provided. Th
This error is often a symptom of not performing the [SaaS registration](pc-saas-registration.md) correctly. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Activate a subscription
This error is often a symptom of not performing the [SaaS registration](pc-saas-
Code: 404 Not found. The SaaS subscription is in an *Unsubscribed* state.
-Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Code: 500
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Get list of all subscriptions
Forbidden. The authorization token is unavailable, invalid, or expired.
This error is often a symptom of not performing the [SaaS registration](pc-saas-registration.md) correctly. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Get subscription
Code: 404
Not found. SaaS subscription with the specified `subscriptionId` cannot be found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### List available plans
Forbidden. The authorization token is invalid, expired, or was not provided. Th
This error is often a symptom of not performing the [SaaS registration](pc-saas-registration.md) correctly. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Change the plan on the subscription
Code: 404
Not found. The SaaS subscription with `subscriptionId` is not found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
>[!NOTE] >Either the plan or quantity of seats can be changed at one time, not both.
Code: 404
Not found. The SaaS subscription with `subscriptionId` is not found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
>[!Note] >Only a plan or quantity can be changed at one time, not both.
Code: 404
Not found. The SaaS subscription with `subscriptionId` is not found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
### Operations APIs
Code: 404
Not found. The SaaS subscription with `subscriptionId` is not found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Get operation status
Not found.
* Operation with `operationId` is not found. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
#### Update the status of an operation
Code: 409
Conflict. For example, a newer update is already fulfilled. Code: 500
-Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://partner.microsoft.com/support/v2/?stage=1).
+Internal server error. Retry the API call. If the error persists, contact [Microsoft support](https://go.microsoft.com/fwlink/?linkid=2165533).
## Implementing a webhook on the SaaS service
media-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/faq.md
- Title: Azure Live Video Analytics on IoT Edge FAQ
-description: This article answers commonly asked questions about Azure Live Video Analytics on IoT Edge.
- Previously updated : 12/01/2020--
-# Azure Live Video Analytics on IoT Edge FAQ
--
-This article answers commonly asked questions about Live Video Analytics on Azure IoT Edge.
-
-## General
-
-**What system variables can I use in the graph topology definition?**
-
-| Variable | Description |
-| | |
-| System.DateTime | Represents a Coordinated Universal Time (UTC) date-time instance in an ISO8601 file-compliant format, in the following format:<br>*yyyyMMddTHHmmssZ* |
-| System.PreciseDateTime | Represents a Coordinated Universal Time (UTC) date-time instance in an ISO8601 file-compliant format with milliseconds, in the following format:<br>*yyyyMMddTHHmmss.fffZ* |
-| System.GraphTopologyName | Represents a media graph topology, and holds the blueprint of a graph. |
-| System.GraphInstanceName | Represents a media graph instance, holds parameter values, and references the topology. |
-
-## Configuration and deployment
-
-**Can I deploy the media edge module to a Windows 10 device?**
-
-Yes. For more information, see [Linux containers on Windows 10](/virtualization/windowscontainers/deploy-containers/linux-containers).
-
-## Capture from IP camera and RTSP settings
-
-**Do I need to use a special SDK on my device to send in a video stream?**
-
-No, Live Video Analytics on IoT Edge supports capturing media by using RTSP (Real-Time Streaming Protocol) for video streaming, which is supported on most IP cameras.
-
-**Can I push media to Live Video Analytics on IoT Edge by using Real-Time Messaging Protocol (RTMP) or Smooth Streaming Protocol (such as a Media Services Live Event)?**
-
-No, Live Video Analytics supports only RTSP for capturing video from IP cameras. Any camera that supports RTSP streaming over TCP/HTTP should work.
-
-**Can I reset or update the RTSP source URL in a graph instance?**
-
-Yes, when the graph instance is in *inactive* state.
-
-**Is an RTSP simulator available to use during testing and development?**
-
-Yes, an [RTSP simulator](https://github.com/Azure/live-video-analytics/tree/master/utilities/rtspsim-live555) edge module is available for use in the quickstarts and tutorials to support the learning process. This module is provided as best-effort and might not always be available. We recommend strongly that you *not* use the simulator for more than a few hours. You should invest in testing with your actual RTSP source before you plan a production deployment.
-
-**Do you support ONVIF discovery of IP cameras at the edge?**
-
-No, we don't support Open Network Video Interface Forum (ONVIF) discovery of devices on the edge.
-
-## Streaming and playback
-
-**Can I play back assets recorded to Azure Media Services from the edge by using streaming technologies such as HLS or DASH?**
-
-Yes. You can stream recorded assets like any other asset in Azure Media Services. To stream the content, you must have a streaming endpoint created and in the running state. Using the standard Streaming Locator creation process will give you access to an Apple HTTP Live Streaming (HLS) or Dynamic Adaptive Streaming over HTTP (DASH, also known as MPEG-DASH) manifest for streaming to any capable player framework. For more information about creating and publishing HLS or DASH manifests, see [dynamic packaging](../latest/encode-dynamic-packaging-concept.md).
-
-**Can I use the standard content protection and DRM features of Media Services on an archived asset?**
-
-Yes. All the standard dynamic encryption content protection and digital rights management (DRM) features are available for use on assets that are recorded from a media graph.
-
-**What players can I use to view content from the recorded assets?**
-
-All standard players that support compliant HLS version 3 or version 4 are supported. In addition, any player that's capable of compliant MPEG-DASH playback is also supported.
-
-Recommended players for testing include:
-
-* [Azure Media Player](../latest/player-use-azure-media-player-how-to.md)
-* [HLS.js](https://hls-js.netlify.app/demo/)
-* [Video.js](https://videojs.com/)
-* [Dash.js](https://github.com/Dash-Industry-Forum/dash.js/wiki)
-* [Shaka Player](https://github.com/google/shaka-player)
-* [ExoPlayer](https://github.com/google/ExoPlayer)
-* [Apple native HTTP Live Streaming](https://developer.apple.com/streaming/)
-* Edge, Chrome, or Safari built-in HTML5 video player
-* Commercial players that support HLS or DASH playback
-
-**What are the limits on streaming a media graph asset?**
-
-Streaming a live or recorded asset from a media graph uses the same high-scale infrastructure and streaming endpoint that Media Services supports for on-demand and live streaming for Media & Entertainment, Over the Top (OTT), and broadcast customers. This means that you can quickly and easily enable Azure Content Delivery Network, Verizon, or Akamai to deliver your content to an audience as small as a few viewers or up to millions, depending on your scenario.
-
-You can deliver content by using either Apple HLS or MPEG-DASH.
-
-## Design your AI model
-
-**I have multiple AI models wrapped in a Docker container. How should I use them with Live Video Analytics?**
-
-Solutions vary depending on the communication protocol that's used by the inferencing server to communicate with Live Video Analytics. The following sections describe how each protocol works.
-
-*Use the HTTP protocol*:
-
-* Single container (single lvaExtension):
-
- In your inferencing server, you can use a single port but different endpoints for different AI models. For example, for a Python sample you can use different `route`s per model, as shown here:
-
- ```python
- @app.route('/score/face_detection', methods=['POST'])
- def score_face_detection():
-     # Your code specific to the face detection model goes here.
-     ...
-
- @app.route('/score/vehicle_detection', methods=['POST'])
- def score_vehicle_detection():
-     # Your code specific to the vehicle detection model goes here.
-     ...
- ```
-
- And then, in your Live Video Analytics deployment, when you instantiate graphs, set the inference server URL for each instance, as shown here:
-
- 1st instance: inference server URL=`http://lvaExtension:44000/score/face_detection`<br/>
- 2nd instance: inference server URL=`http://lvaExtension:44000/score/vehicle_detection`
-
- > [!NOTE]
- > Alternatively, you can expose your AI models on different ports and call them when you instantiate graphs.
-
-* Multiple containers:
-
- Each container is deployed with a different name. Previously, in the Live Video Analytics documentation set, we showed you how to deploy an extension named *lvaExtension*. Now you can develop two different containers, each with the same HTTP interface, which means they have the same `/score` endpoint. Deploy these two containers with different names, and ensure that both are listening on *different ports*.
-
- For example, one container named `lvaExtension1` is listening for the port `44000`, and a second container named `lvaExtension2` is listening for the port `44001`.
-
- In your Live Video Analytics topology, you instantiate two graphs with different inference URLs, as shown here:
-
- First instance: inference server URL = `http://lvaExtension1:44000/score`
- Second instance: inference server URL = `http://lvaExtension2:44001/score`
-
-*Use the gRPC protocol*:
-
-* With Live Video Analytics module 1.0, when you use a general-purpose remote procedure call (gRPC) protocol, the only way to use multiple AI models is to have the gRPC server expose them via different ports. In [this code example](https://github.com/Azure/live-video-analytics/blob/master/MediaGraph/topologies/grpcExtensionOpenVINO/2.0/topology.json), a single port, 44000, exposes all the yolo models. In theory, the yolo gRPC server could be rewritten to expose some models at port 44000 and others at port 45000.
-
-* With Live Video Analytics module 2.0, a new property is added to the gRPC extension node. This property, **extensionConfiguration**, is an optional string that can be used as a part of the gRPC contract. When you have multiple AI models packaged in a single inference server, you don't need to expose a node for every AI model. Instead, for a graph instance, you, as the extension provider, can define how to select the different AI models by using the **extensionConfiguration** property. During execution, Live Video Analytics passes this string to the inferencing server, which can use it to invoke the desired AI model.
-
-**I'm building a gRPC server around an AI model, and I want to be able to support its use by multiple cameras or graph instances. How should I build my server?**
-
- First, be sure that your server can either handle more than one request at a time or work in parallel threads.
-
-For example, a default number of parallel channels has been set in the following [Live Video Analytics gRPC sample](https://github.com/Azure/live-video-analytics/blob/master/utilities/video-analysis/notebooks/Yolo/yolov3/yolov3-grpc-icpu-onnx/lvaextension/server/server.py):
-
-```python
-from concurrent import futures
-import grpc
-
-server = grpc.server(futures.ThreadPoolExecutor(max_workers=3))
-```
-
-In the preceding gRPC server instantiation, the server can open only three channels at a time per camera, or per graph topology instance. Don't try to connect more than three instances to the server. If you do try to open more than three channels, requests will be pending until an existing channel drops.
-
-The preceding gRPC server implementation is used in our Python samples. As a developer, you can implement your own server or use the preceding default implementation to increase the worker number, which you set to the number of cameras to use for video feeds.
-
-To set up and use multiple cameras, you can instantiate multiple graph topology instances, each pointing to the same or a different inference server (for example, the server mentioned in the preceding paragraph).
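As a rough sketch (not the official sample), sizing the worker pool to the number of camera feeds might look like this; the helper name `pool_for_cameras` is hypothetical:

```python
from concurrent import futures

def pool_for_cameras(num_cameras):
    # One parallel channel per camera / graph topology instance,
    # mirroring the max_workers setting in the sample above.
    return futures.ThreadPoolExecutor(max_workers=num_cameras)
```

You would pass this executor to `grpc.server(...)` in place of the default three-worker pool.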
-
-**I want to be able to receive multiple frames from upstream before I make an inferencing decision. How can I enable that?**
-
-Our current [default samples](https://github.com/Azure/live-video-analytics/tree/master/utilities/video-analysis) work in a *stateless* mode. They don't keep the state of the previous calls or even who called. This means that multiple topology instances might call the same inference server, but the server can't distinguish who is calling or the state per caller.
-
-*Use the HTTP protocol*:
-
-To keep the state, each caller, or graph topology instance, calls the inferencing server by using an HTTP query parameter that's unique to the caller. For example, the inference server URL addresses for each instance are shown here:
-
-1st topology instance= `http://lvaExtension:44000/score?id=1`<br/>
-2nd topology instance= `http://lvaExtension:44000/score?id=2`
-
-…
-
-On the server side, the score route knows who is calling. If ID=1, then it can keep the state separately for that caller or graph topology instance. You can then keep the received video frames in a buffer. For example, use an array, or a dictionary with a DateTime key, and the value is the frame. You can then define the server to process (infer) after *x* number of frames are received.
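A minimal sketch of that stateful scoring logic, with the web-framework plumbing omitted; the function and constant names are hypothetical, and the buffer size stands in for the *x* frames mentioned above:

```python
from collections import defaultdict, deque

FRAMES_BEFORE_INFERENCE = 5  # "x" frames to collect before inferring

# One bounded frame buffer per caller (graph topology instance).
buffers = defaultdict(lambda: deque(maxlen=FRAMES_BEFORE_INFERENCE))

def score(caller_id, frame):
    # caller_id comes from the ?id= query parameter on the request URL.
    buffers[caller_id].append(frame)
    if len(buffers[caller_id]) < FRAMES_BEFORE_INFERENCE:
        return 204  # accepted; still buffering frames for this caller
    # Inference over buffers[caller_id] would run here.
    return 200
```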
-
-*Use the gRPC protocol*:
-
-With a gRPC extension, each session is for a single camera feed, so there's no need to provide an ID. Now, with the extensionConfiguration property, you can store the video frames in a buffer and define the server to process (infer) after *x* number of frames are received.
-
-**Do all ProcessMediaStreams on a particular container run the same AI model?**
-
-No. Start or stop calls from the end user in a graph instance constitute a session, or perhaps there's a camera disconnect or reconnect. The goal is to persist one session if the camera is streaming video.
-
-* Two cameras sending video for processing creates two sessions.
-* One camera going to a graph that has two gRPC extension nodes creates two sessions.
-
-Each session is a full duplex connection between Live Video Analytics and the gRPC server, and each session can have a different model or pipeline.
-
-> [!NOTE]
-> In case of a camera disconnect or reconnect, with the camera going offline for a period beyond tolerance limits, Live Video Analytics will open a new session with the gRPC server. There's no requirement for the server to track the state across these sessions.
-
-Live Video Analytics also adds support for multiple gRPC extensions for a single camera in a graph instance. You can use these gRPC extensions to carry out AI processing sequentially, in parallel, or as a combination of both.
-
-> [!NOTE]
-> Having multiple extensions run in parallel will affect your hardware resources. Keep this in mind as you're choosing the hardware that suits your computational needs.
-
-**What is the maximum number of simultaneous ProcessMediaStreams?**
-
-Live Video Analytics applies no limits to this number.
-
-**How can I decide whether my inferencing server should use CPU or GPU or any other hardware accelerator?**
-
-Your decision depends on the complexity of the developed AI model and how you want to use the CPU and hardware accelerators. As you're developing the AI model, you can specify what resources the model should use and what actions it should perform.
-
-**How do I store images with bounding boxes post-processing?**
-
-Today, we are providing bounding box coordinates as inference messages only. You can build a custom MJPEG streamer that can use these messages and overlay the bounding boxes on the video frames.
-
-## gRPC compatibility
-
-**How will I know what the mandatory fields for the media stream descriptor are?**
-
-Any field that you don't supply a value to is given a [default value, as specified by gRPC](https://developers.google.com/protocol-buffers/docs/proto3#default).
-
-Live Video Analytics uses the *proto3* version of the protocol buffer language. All the protocol buffer data that's used by Live Video Analytics contracts is available in the [protocol buffer files](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc).
-
-**How can I ensure that I'm using the latest protocol buffer files?**
-
-You can obtain the latest protocol buffer files on the [contract files site](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc). Whenever we update the contract files, they'll be in this location. There's no immediate plan to update the protocol files, so look for the package name at the top of the files to know the version. It should read:
-
-```
-microsoft.azure.media.live_video_analytics.extensibility.grpc.v1
-```
-
-Any updates to these files will increment the "v-value" at the end of the name.
-
-> [!NOTE]
-> Because Live Video Analytics uses the proto3 version of the language, the fields are optional, and the version is backward and forward compatible.
-
-**What gRPC features are available for me to use with Live Video Analytics? Which features are mandatory and which are optional?**
-
-You can use any server-side gRPC features, provided that the Protocol Buffers (Protobuf) contract is fulfilled.
-
-## Monitoring and metrics
-
-**Can I monitor the media graph on the edge by using Azure Event Grid?**
-
-Yes. You can consume Prometheus metrics and publish them to your event grid.
-
-**Can I use Azure Monitor to view the health, metrics, and performance of my media graphs in the cloud or on the edge?**
-
-Yes, we support this approach. To learn more, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).
-
-**Are there any tools to make it easier to monitor the Media Services IoT Edge module?**
-
-Visual Studio Code supports the Azure IoT Tools extension, with which you can easily monitor the LVAEdge module endpoints. You can use this tool to quickly start monitoring your IoT hub built-in endpoint for "events" and view the inference messages that are routed from the edge device to the cloud.
-
-In addition, you can use this extension to edit the module twin for the LVAEdge module to modify the media graph settings.
-
-For more information, see the [monitoring and logging](monitoring-logging.md) article.
-
-## Billing and availability
-
-**How is Live Video Analytics on IoT Edge billed?**
-
-For billing details, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-
-## Next steps
-
-[Quickstart: Get started with Live Video Analytics on IoT Edge](get-started-detect-motion-emit-events-quickstart.md)
media-services Monitoring Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/monitoring-logging.md
The module will now write debug logs in a binary format to the device storage pa
## FAQ
-If you have questions, see the [monitoring and metrics FAQ](faq.md#monitoring-and-metrics).
+If you have questions, see the [monitoring and metrics FAQ](/azure/media-services/live-video-analytics-edge/faq#monitoring-and-metrics).
## Next steps
media-services Media Services Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-frequently-asked-questions.md
- Title: Azure Media Services frequently asked questions
-description: This article gives answers to the Frequently asked questions about Azure Media Services.
------ Previously updated : 03/10/2021--
-# Media Services v2 frequently asked questions
--
-This article addresses frequently asked questions raised by the Azure Media Services (AMS) user community.
-
-## General AMS FAQs
-
-Q: How do you stream to Apple iOS devices?
-
-A: Add the "(format=m3u8-aapl)" path to the "/Manifest" portion of the URL to tell the streaming origin server to return HLS content for consumption on Apple iOS native devices. For details, see [delivering content](media-services-deliver-content-overview.md).
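As an illustrative sketch, appending the HLS format tag to a manifest URL could look like this; the function name and example URL are hypothetical:

```python
def hls_url(manifest_url):
    # Append the HLS format tag to the "/Manifest" portion of the URL,
    # as described in the answer above.
    return manifest_url + "(format=m3u8-aapl)"
```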
-
-Q: How do you scale indexing?
-
-A: The reserved units are the same for Encoding and Indexing tasks. Follow the instructions in [How to Scale Encoding Reserved Units](media-services-scale-media-processing-overview.md). **Note** that Indexer performance is not affected by the Reserved Unit type.
-
-Q: I uploaded, encoded, and published a video. What would be the reason the video does not play when I try to stream it?
-
-A: One of the most common reasons is that the streaming endpoint from which you are trying to play back is not in the **Running** state.
-
-Q: Can I do compositing on a live stream?
-
-A: Compositing on live streams is currently not offered in Azure Media Services, so you would need to pre-compose on your computer.
-
-Q: Can I use Azure CDN with Live Streaming?
-
-A: Media Services supports integration with Azure CDN (for more information, see [How to Manage Streaming Endpoints in a Media Services Account](media-services-portal-manage-streaming-endpoints.md)). You can use live streaming with CDN. Azure Media Services provides Smooth Streaming, HLS, and MPEG-DASH outputs. All these formats use HTTP for transferring data and benefit from HTTP caching. In live streaming, the actual video/audio data is divided into fragments, and these individual fragments get cached in the CDN. Only the manifest data needs to be refreshed, and the CDN refreshes it periodically.
-
-Q: Does Azure Media services support storing images?
-
-A: If you are just looking to store JPEG or PNG images, you should keep those in Azure Blob Storage. There is no benefit to putting them in your Media Services account unless you want to keep them associated with your video or audio assets, or if you might need to use the images as overlays in the video encoder. Media Encoder Standard supports overlaying images on top of videos, and lists JPEG and PNG as supported input formats. For more information, see [Creating Overlays](media-services-advanced-encoding-with-mes.md#overlay).
-
-Q: How can I copy assets from one Media Services account to another?
-
-A: To copy assets from one Media Services account to another using .NET, use [IAsset.Copy](https://github.com/Azure/azure-sdk-for-media-services-extensions/blob/dev/MediaServices.Client.Extensions/IAssetExtensions.cs#L354) extension method available in the [Azure Media Services .NET SDK Extensions](https://github.com/Azure/azure-sdk-for-media-services-extensions/) repository. For more information, see [this](https://social.msdn.microsoft.com/Forums/azure/28912d5d-6733-41c1-b27d-5d5dff2695ca/migrate-media-services-across-subscription?forum=MediaServices) forum thread.
-
-Q: What are the supported characters for naming files when working with AMS?
-
-A: Media Services uses the value of the IAssetFile.Name property when building URLs for the streaming content (for example, http://{AMSAccount}.origin.mediaservices.windows.net/{GUID}/{IAssetFile.Name}/streamingParameters.) For this reason, percent-encoding is not allowed. The value of the **Name** property cannot have any of the following [percent-encoding-reserved characters](https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters): !*'();:@&=+$,/?%#[]". Also, there can only be one '.' for the file name extension.
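A quick sketch of that naming rule as a validation helper (the function name is hypothetical; the character list is the one quoted above):

```python
# Percent-encoding-reserved characters that are not allowed in
# IAssetFile.Name, per the answer above.
RESERVED = set("!*'();:@&=+$,/?%#[]\"")

def is_valid_asset_file_name(name):
    # No reserved characters, and at most one '.' (the extension separator).
    return not (set(name) & RESERVED) and name.count('.') <= 1
```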
-
-Q: How to connect using REST?
-
-A: For information on how to connect to the AMS API, see [Access the Azure Media Services API with Azure AD authentication](media-services-use-aad-auth-to-access-ams-api.md).
-
-Q: How can I rotate a video during the encoding process?
-
-A: The [Media Encoder Standard](media-services-dotnet-encode-with-media-encoder-standard.md) supports rotation by angles of 90/180/270. The default behavior is "Auto", where it tries to detect the rotation metadata in the incoming MP4/MOV file and compensate for it. Include the following **Sources** element to one of the json presets defined [here](media-services-mes-presets-overview.md):
-
-```json
-"Version": 1.0,
-"Sources": [
-{
- "Streams": [],
- "Filters": {
- "Rotation": "90"
- }
-}
-],
-"Codecs": [
-
-...
-```
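For reference, here is the same fragment assembled into a syntactically complete JSON skeleton. This is an illustrative sketch only: the codec entries elided above are left as an empty array rather than guessed.

```json
{
  "Version": 1.0,
  "Sources": [
    {
      "Streams": [],
      "Filters": {
        "Rotation": "90"
      }
    }
  ],
  "Codecs": []
}
```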
--
-## Media Services learning paths
-
-## Provide feedback
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/common-questions-server-migration.md
This article answers common questions about the Azure Migrate: Server Migration
- Questions about [discovery, assessment, and dependency visualization](common-questions-discovery-assessment.md) - Get questions answered in the [Azure Migrate forum](https://aka.ms/AzureMigrateForum)
-## Does Azure Migrate convert UEFI-based machines to BIOS-based machines and migrate them to Azure as Azure generation 1 VMs?
-Azure Migrate: Server Migration tool migrates all the UEFI-based machines to Azure as Azure generation 2 VMs. We no longer support the conversion of UEFI-based VMs to BIOS-based VMs. Note that all the BIOS-based machines are migrated to Azure as Azure generation 1 VMs only.
+## General questions
-## How can I migrate UEFI-based machines to Azure as Azure generation 1 VMs?
-Azure Migrate: Server Migration tool migrates UEFI-based machines to Azure as Azure generation 2 VMs. If you want to migrate them to Azure generation 1 VMs, convert the boot-type to BIOS before starting replication, and then use the Azure Migrate: Server Migration tool to migrate to Azure.
-
-## Which operating systems are supported for migration of UEFI-based machines to Azure?
-| **Operating systems supported for UEFI-based machines** | **Agentless VMware to Azure** | **Agentless Hyper-V to Azure** | **Agent-based VMware, physical and other clouds to Azure** |
-| --- | --- | --- | --- |
-| Windows Server 2019, 2016, 2012 R2, 2012 | Y | Y | Y |
-| Windows 10 Pro, Windows 10 Enterprise | Y | Y | Y |
-| SUSE Linux Enterprise Server 15 SP1 | Y | Y | Y |
-| SUSE Linux Enterprise Server 12 SP4 | Y | Y | Y |
-| Ubuntu Server 16.04, 18.04, 19.04, 19.10 | Y | Y | Y |
-| RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y<br> _RHEL 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
-| Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 6.x | Y<br>_Cent OS 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
-| Oracle Linux 7.7, 7.7-CI | Y | Y | Y |
+### What are the migration options in Azure Migrate: Server Migration?
-## Can I use the recovery services vault created by Azure Migrate for Disaster Recovery scenarios?
-We do not recommend using the recovery services vault created by Azure Migrate for Disaster Recovery scenarios. Doing so can result in start replication failures in Azure Migrate.
+The Azure Migrate: Server Migration tool offers two options for migrating your source servers and virtual machines to Azure: agentless migration and agent-based migration.
-## Where should I install the replication appliance for agent-based migrations?
+Regardless of the migration option chosen, the first step to migrate a server using Azure Migrate: Server Migration is to start replication for the server. This performs an initial replication of your VM/server data to Azure. After the initial replication is completed, an ongoing replication (ongoing delta-sync) is established to migrate incremental data to Azure. Once the operation reaches the delta-sync stage, you can choose to migrate to Azure at any time.
-The replication appliance should be installed on a dedicated machine. The replication appliance shouldn't be installed on a source machine that you want to replicate or on the Azure Migrate discovery and assessment appliance you may have installed before. Follow the [tutorial](./tutorial-migrate-physical-virtual-machines.md) for more details.
+Here are some considerations to keep in mind while deciding on the migration option.
-## How can I migrate my AWS EC2 instances to Azure?
+**Agentless migrations** do not require any software (agents) to be deployed on the source VMs/servers being migrated. The agentless option orchestrates replication by integrating with the functionality provided by the virtualization provider.
+Agentless replication options are available for [VMware VMs](./tutorial-migrate-vmware.md) and [Hyper-V VMs](./tutorial-migrate-hyper-v.md).
-Review this [article](./tutorial-migrate-aws-virtual-machines.md) to discover, assess, and migrate your AWS EC2 instances to Azure.
+**Agent-based migrations** require Azure Migrate software (agents) to be installed on the source VMs/machines to be migrated. The agent-based option doesn't rely on the virtualization platform for the replication functionality. Therefore, it can be used with any server running an x86/x64 architecture and a version of an operating system supported by the agent-based replication method.
-## Can I migrate AWS VMs running Amazon Linux Operating system?
+The agent-based migration option can be used for [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), VMs running on GCP, or VMs running on a different virtualization provider. Agent-based migration treats your machines as physical servers for migration.
-VMs running Amazon Linux cannot be migrated as-is as Amazon Linux OS is only supported on AWS.
-To migrate workloads running on Amazon Linux, you can spin up a CentOS/RHEL VM in Azure and migrate the workload running on the AWS Linux machine using a relevant workload migration approach. For example, depending on the workload, there may be workload-specific tools to aid the migration, such as for databases, or deployment tools in the case of web servers.
+While agentless migration offers more convenience and simplicity than the agent-based replication options for the supported scenarios (VMware and Hyper-V), you may want to consider using the agent-based scenario for the following use cases:
-## What geographies are supported for migration with Azure Migrate?
+- IOPS constrained environment: Agentless replication uses snapshots and consumes storage IOPS/bandwidth. We recommend the agent-based migration method if there are constraints on storage/IOPS in your environment.
+- If you don't have a vCenter Server, you can treat your VMware VMs as physical servers and use the agent-based migration workflow.
+
+To learn more, review this [article](./server-migrate-overview.md) to compare migration options for VMware migrations.
+
+### What geographies are supported for migration with Azure Migrate?
Review the supported geographies for [public](migrate-support-matrix.md#supported-geographies-public-cloud) and [government clouds](migrate-support-matrix.md#supported-geographies-azure-government).
-## Can we use the same Azure Migrate project to migrate to multiple regions?
+### Can I use the same Azure Migrate project to migrate to multiple regions?
While you can create assessments for multiple regions in an Azure Migrate project, one Azure Migrate project can be used to migrate servers to one Azure region only. You can create additional Azure Migrate projects for each region that you need to migrate to.
While you can create assessments for multiple regions in an Azure Migrate projec
- For agent-based migrations (VMware, physical servers, and servers from other clouds), the target region is locked once the "Create Resources" button is clicked on the portal while setting up the replication appliance.
- For agentless Hyper-V migrations, the target region is locked once the "Create Resources" button is clicked on the portal while setting up the Hyper-V replication provider.
-## Can we use the same Azure Migrate project to migrate to multiple subscriptions?
+### Can I use the same Azure Migrate project to migrate to multiple subscriptions?
Yes, you can migrate to multiple subscriptions (same Azure tenant) in the same target region for an Azure Migrate project. You can select the target subscription while enabling replication for a machine or a set of machines. The target region is locked post first replication for agentless VMware migrations and during the replication appliance and Hyper-V provider installation for agent-based migrations and agentless Hyper-V migrations respectively.
-## What are the migration options in Azure Migrate: Server Migration?
+### How is data transmitted from the on-premises environment to Azure? Is it encrypted before transmission?
-The Azure Migrate: Server Migration tool provides two options to perform migrations of your source Servers/VMs to Azure: agentless migration and agent-based migration.
+In the agentless replication case, the Azure Migrate appliance compresses and encrypts data before uploading. Data is transmitted over a secure HTTPS communication channel using TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it is persisted to the cloud (encryption-at-rest).
-Regardless of the migration option chosen, the first step to migrate a server using Azure Migration: Server Migration is to enable replication for the server. This performs an initial replication of your VM/server data to Azure. After the initial replication is completed, an ongoing replication (ongoing delta-sync) is established to migrate incremental data to Azure. Once the operation reaches the delta-sync stage, you can choose to migrate to Azure at any time.
+### Can I use the recovery services vault created by Azure Migrate for Disaster Recovery scenarios?
+We do not recommend using the recovery services vault created by Azure Migrate for Disaster Recovery scenarios. Doing so can result in start replication failures in Azure Migrate.
-Here are some considerations to keep in mind while deciding on the migration option.
+### What is the difference between the Test Migration and Migrate operations?
-**Agentless migrations** do not require any software (agents) to be deployed on the source VMs/servers being migrated. The agentless option orchestrates replication by integrating with the functionality provided by the virtualization provider.
-The Agentless replication options are available for [VMware VMs](./tutorial-migrate-vmware.md) and [Hyper-V VMs](./tutorial-migrate-hyper-v.md).
+Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you create test copies of replicating VMs in a sandbox environment in Azure. The sandbox environment is demarcated by a test virtual network you specify. The test migration operation is non-disruptive, with applications continuing to run at the source while letting you perform tests on a cloned copy in an isolated sandbox environment. You can perform multiple tests as needed to validate the migration, perform app testing, and address any issues before the actual migration.
-**Agent-based migrations** require Azure Migrate software (agents) to be installed on the source VMs/machines to be migrated. The agent-based option doesn't rely on the virtualization platform for the replication functionality and can, therefore, be used with any server running an x86/x64 architecture and a version of an operating system supported by the agent-based replication method.
-Agent-based migration option can be used for [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), VMs running on GCP, or VMs running on a different virtualization provider. The agent-based migration treats your machines as physical servers for the purpose of the migration.
+### Is there a Rollback option for Azure Migrate?
-While the Agentless Migration offers additional convenience and simplicity over the agent-based replication options for the supported scenarios (VMWare and Hyper-V), you may want to consider using the agent-based scenario for the following use cases:
+You can use the Test Migration option to validate your application functionality and performance in Azure. You can perform any number of test migrations and can execute the final migration after establishing confidence through the test migration operation.
+A test migration doesn't impact the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
+Note: Once you have performed a final migration to Azure and the on-premises source machine was shut down, you cannot perform a rollback from Azure to your on-premises environment.
-- IOPS constrained environment: Agentless replication uses snapshots and consumes storage IOPS/bandwidth. We recommend the agent-based migration method if there are constraints on storage/IOPS in your environment.
-- If you don't have a vCenter Server, you can treat your VMware VMs as physical servers and use the agent-based migration workflow.
+### Can I select the Virtual Network and subnet to use for test migrations?
-To learn more, review this [article](./server-migrate-overview.md) to compare migration options for VMware migrations.
+You can select a Virtual Network for test migrations. The subnet is automatically selected based on the following logic:
+
+- If a target subnet (other than default) was specified as an input while enabling replication, then Azure Migrate prioritizes using a subnet with the same name in the Virtual Network selected for the test migration.
+- If the subnet with the same name is not found, then Azure Migrate selects the first subnet available alphabetically that is not a Gateway/Application Gateway/Firewall/Bastion subnet.
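The subnet-selection logic described above can be sketched in a few lines. This is an illustrative sketch only; the function name, inputs, and reserved-subnet list are assumptions, not an Azure Migrate API:

```python
# Hypothetical sketch of the documented test-migration subnet selection.
# The reserved names below are an assumption about which subnets are skipped.
RESERVED_SUBNETS = {"GatewaySubnet", "AzureFirewallSubnet",
                    "AzureBastionSubnet", "ApplicationGatewaySubnet"}

def pick_test_subnet(requested_subnet, vnet_subnets):
    # 1. Prefer a subnet with the same name as the target subnet that was
    #    specified (other than default) while enabling replication.
    if requested_subnet and requested_subnet in vnet_subnets:
        return requested_subnet
    # 2. Otherwise pick the first subnet, alphabetically, that is not a
    #    Gateway/Application Gateway/Firewall/Bastion subnet.
    for name in sorted(vnet_subnets):
        if name not in RESERVED_SUBNETS:
            return name
    return None
```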
-## How does Agentless Migration work?
+### Why is the Test Migration button disabled for my Server?
-Azure Migrate: Server Migration provides agentless replication options for the migration of VMware virtual machines and Hyper-V virtual machines running Windows or Linux. The tool also provides an additional agent-based replication option for Windows and Linux servers that can be used to migrate physical servers, as well as x86/x64 virtual machines on VMware, Hyper-V, AWS, GCP, etc. The agent-based replication option requires the installation of agent software on the server/virtual machine that's being migrated, whereas in the agentless option no software needs to be installed on the virtual machines themselves, thus offering additional convenience and simplicity over the agent-based replication option.
+The test migration button could be in a disabled state in the following scenarios:
-The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of VMware virtual machines, the agentless replication mechanism uses VMware snapshots and VMware changed block tracking technology to replicate data from virtual machine disks. This mechanism is similar to the one used by many backup products. In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks.
+- You can't begin a test migration until an initial replication (IR) has been completed for the VM. The test migration button will be disabled until the IR process is completed. You can perform a test migration once your VM is in a delta-sync stage.
+- The button can be disabled if a test migration was already completed, but a test-migration cleanup was not performed for that VM. Perform a test migration cleanup and retry the operation.
-When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-When you perform the migrate operation on a replicating virtual machine, you have the option to shutdown the on-premise virtual machine and perform one final incremental replication to ensure zero data loss. On performing the migrate option, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+### What happens if I donΓÇÖt clean up my test migration?
-To get started, refer the [VMware agentless migration](./tutorial-migrate-vmware.md) and [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorials.
+Test migration simulates the actual migration by creating a test Azure VM using replicated data. The server will be deployed with a point-in-time copy of the replicated data to the target Resource Group (selected while enabling replication) with a "-test" suffix. Test migrations are intended for validating server functionality so that post-migration issues are minimized. If the test migration is not cleaned up after testing, the test virtual machine will continue to run in Azure and will incur charges. To clean up after a test migration, go to the replicating machines view in the Server Migration tool, and use the 'Cleanup test migration' action on the machine.
-## How does Agent-based Migration work?
-In addition to agentless migration options for VMware virtual machines and Hyper-V virtual machines, the Server Migration tool provides an agent-based migration option to migrate Windows and Linux servers running on physical servers, or running as x86/x64 virtual machines on VMware, Hyper-V, AWS, Google Cloud Platform, etc.
+### How do I know if my VM was successfully migrated?
-The agent-based migration method uses agent software installed on the server being migrated to replicate server data to Azure. The replication process uses an offload architecture in which the agent relays replication data to a dedicated replication server called the replication appliance or Configuration Server (or to a scale-out Process Server). [Learn more](./agent-based-migration-architecture.md) about how the agent-based migration option works.
+Once you have migrated your VM/server successfully, you can view and manage the VM from the Virtual Machines page. Connect to the migrated VM to validate.
+Alternatively, you can review the 'Job status' for the operation to check if the migration was successfully completed. If you see any errors, resolve them, and retry the migration operation.
-Note: The replication appliance is different from the Azure Migrate discovery appliance and must be installed on a separate/dedicated machine.
+### What happens if I donΓÇÖt stop replication after migration?
+
+When you stop replication, the Azure Migrate: Server Migration tool cleans up the managed disks in the subscription that were created for replication. If you do not stop replication after a migration, you will continue to incur charges for these disks. Stop replication will not impact the disks attached to machines that have already been migrated.
+
+### How can I migrate UEFI-based machines to Azure as Azure generation 1 VMs?
+Azure Migrate: Server Migration tool migrates UEFI-based machines to Azure as Azure generation 2 VMs. If you want to migrate them to Azure generation 1 VMs, convert the boot-type to BIOS before starting replication, and then use the Azure Migrate: Server Migration tool to migrate to Azure.
-## How do I gauge the bandwidth requirement for my migrations?
+### Does Azure Migrate convert UEFI-based machines to BIOS-based machines and migrate them to Azure as Azure generation 1 VMs?
+Azure Migrate: Server Migration tool migrates all the UEFI-based machines to Azure as Azure generation 2 VMs. We no longer support the conversion of UEFI-based VMs to BIOS-based VMs. All the BIOS-based machines are migrated to Azure as Azure generation 1 VMs only.
+
+### Which operating systems are supported for migration of UEFI-based machines to Azure?
+
+| **Operating systems supported for UEFI-based machines** | **Agentless VMware to Azure** | **Agentless Hyper-V to Azure** | **Agent-based VMware, physical and other clouds to Azure** |
+| --- | --- | --- | --- |
+| Windows Server 2019, 2016, 2012 R2, 2012 | Y | Y | Y |
+| Windows 10 Pro, Windows 10 Enterprise | Y | Y | Y |
+| SUSE Linux Enterprise Server 15 SP1 | Y | Y | Y |
+| SUSE Linux Enterprise Server 12 SP4 | Y | Y | Y |
+| Ubuntu Server 16.04, 18.04, 19.04, 19.10 | Y | Y | Y |
+| RHEL 8.1, 8.0, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x | Y<br> _RHEL 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
+| CentOS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 6.x | Y<br>_CentOS 8.x requires [manual preparation](./prepare-for-migration.md#linux-machines)_ | Y | Y |
+| Oracle Linux 7.7, 7.7-CI | Y | Y | Y |
+
+### Can I migrate Active Directory domain-controllers using Azure Migrate?
+
+The Server Migration tool is application agnostic and works for most applications. When you migrate a server using the Server Migration tool, all the applications installed on the server are migrated along with it. However, for some applications, migration methods other than server migration may be better suited. For Active Directory, in the case of hybrid environments where the on-premises site is connected to your Azure environment, you can extend your directory into Azure by adding extra domain controllers in Azure and setting up Active Directory replication. If you are migrating into an isolated environment in Azure that requires its own domain controllers (or are testing applications in a sandbox environment), you can migrate servers using the Server Migration tool.
+
+### Can I upgrade my OS while migrating?
+
+The Azure Migrate: Server Migration tool only supports like-for-like migrations currently. The tool doesn't support upgrading the OS version during migration. The migrated machine will have the same OS as the source machine.
+
+### Do I need VMware vCenter to migrate VMware VMs?
+
+To [migrate VMware VMs](server-migrate-overview.md) using VMware agent-based or agentless migration, ESXi hosts on which VMs are located must be managed by vCenter Server. If you don't have vCenter Server, you can migrate VMware VMs by migrating them as physical servers. [Learn more](migrate-support-matrix-physical-migration.md).
+
+### Can I consolidate multiple source VMs into one VM while migrating?
+
+Azure Migrate server migration capabilities support like-for-like migrations currently. We do not support consolidating servers or upgrading the operating system as part of the migration.
+
+### Will Windows Server 2008 and 2008 R2 be supported in Azure after migration?
+
+You can migrate your on-premises Windows Server 2008 and 2008 R2 servers to Azure virtual machines and get Extended Security Updates for three years after the End of Support dates at no extra charge above the cost of running the virtual machine. You can use the Azure Migrate: Server Migration tool to migrate your Windows Server 2008 and 2008 R2 workloads.
+
+### How do I migrate Windows Server 2003 running on VMware/Hyper-V to Azure?
+
+[Windows Server 2003 extended support](/troubleshoot/azure/virtual-machines/run-win-server-2003#microsoft-windows-server-2003-end-of-support) ended on July 14, 2015. The Azure support team continues to help in troubleshooting issues that concern running Windows Server 2003 on Azure. However, this support is limited to issues that don't require OS-level troubleshooting or patches.
+Migrating your applications to Azure instances running a newer version of Windows Server is the recommended approach to ensure that you are effectively using the flexibility and reliability of the Azure cloud.
+
+However, if you still choose to migrate your Windows Server 2003 to Azure, you can use the Azure Migrate: Server Migration tool if your Windows Server is a VM running on VMware or Hyper-V.
+Review this article to [prepare your Windows Server 2003 machines for migration](./prepare-windows-server-2003-migration.md).
+
+## Agentless VMware Migration
+### How does Agentless Migration work?
+
+Azure Migrate: Server Migration provides agentless replication options for the migration of VMware virtual machines and Hyper-V virtual machines running Windows or Linux. The tool also provides an agent-based replication option for Windows and Linux servers that can be used to migrate physical servers, and x86/x64 virtual machines on VMware, Hyper-V, AWS, GCP, etc. The agent-based replication option requires the installation of agent software on the server/virtual machine that's being migrated, whereas in the agentless option no software needs to be installed on the virtual machines themselves, thus offering more convenience and simplicity over the agent-based replication option.
+
+The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of VMware virtual machines, the agentless replication mechanism uses VMware snapshots and VMware changed block tracking technology to replicate data from virtual machine disks. This mechanism is similar to the one used by many backup products. In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks.
+
+When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks are replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM. In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. During the migrate operation, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
+
+To get started, refer to the [VMware agentless migration](./tutorial-migrate-vmware.md) and [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorials.
+
+### How do I gauge the bandwidth requirement for my migrations?
Bandwidth for replication of data to Azure depends on a range of factors and is a function of how fast the on-premises Azure Migrate appliance can read and replicate the data to Azure. Replication has two phases: initial replication and delta replication. When replication starts for a VM, an initial replication cycle occurs in which full copies of the disks are replicated. After the initial replication is completed, incremental replication cycles (delta cycles) are scheduled periodically to transfer any changes that have occurred since the previous replication cycle.
-### Agentless VMware VM migration
 You can work out the bandwidth requirement based on the volume of data to be moved in the wave and the time within which you would like initial replication to complete. Ideally, you'd want initial replication to have completed at least 3-4 days before the actual migration window, giving you sufficient time to perform a test migration beforehand and to keep downtime to a minimum during the window. You can estimate the bandwidth or time needed for agentless VMware VM migration using the following formula: Time to complete initial replication = {size of disks (or used size if available) * 0.7 (assuming a 30 percent compression average, a conservative estimate)} / bandwidth available for replication.
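The estimate above can be turned into a quick back-of-the-envelope calculation. This is only a sketch of the article's formula; the function name and unit conventions (decimal GB, megabits per second) are assumptions:

```python
def initial_replication_hours(disk_size_gb, bandwidth_mbps, compression=0.7):
    """Apply the formula: time = (disk size * 0.7) / available bandwidth.

    disk_size_gb   -- total (or used, if available) disk size in GB
    bandwidth_mbps -- bandwidth available for replication, in megabits/second
    compression    -- assumes a ~30 percent compression average (conservative)
    """
    effective_megabits = disk_size_gb * compression * 8000  # 1 GB = 8,000 Mb
    seconds = effective_megabits / bandwidth_mbps
    return seconds / 3600

# Example: 500 GB of disks over a 100 Mbps link
# → 500 * 0.7 * 8000 / 100 / 3600 ≈ 7.8 hours for initial replication.
```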
-### Agent-based VMware VM migration
-
-For an agent-based method of replication, the Deployment planner can help profile the environment for the data churn and help predict the necessary bandwidth requirement. To learn more, view this [article](./agent-based-migration-architecture.md#plan-vmware-deployment).
-
-## How do I throttle replication in using Azure Migrate appliance for agentless VMware replication?
+### How do I throttle replication when using the Azure Migrate appliance for agentless VMware replication?
You can throttle using NetQosPolicy. For example:
Register-ScheduledTask -TaskName $ThrottleBandwidthTask -Trigger $ThrottleBandwi
Register-ScheduledTask -TaskName $IncreaseBandwidthTask -Trigger $IncreaseBandwidthTrigger -User $User -Action $IncreaseBandwidthAction -RunLevel Highest -Force ```
-## How is the data transmitted from on-prem environment to Azure? Is it encrypted before transmission?
-
-The Azure Migrate appliance in the agentless replication case compresses data and encrypts before uploading. Data is transmitted over a secure communication channel over https and uses TLS 1.2 or later. Additionally, Azure Storage automatically encrypts your data when it is persisted it to the cloud (encryption-at-rest).
-
-## How does churn rate affect agentless replication?
+### How does churn rate affect agentless replication?
Because agentless replication folds in data, the *churn pattern* is more important than the *churn rate*. When a file is written again and again, the rate doesn't have much impact. However, a pattern in which every other sector is written causes high churn in the next cycle. Because we minimize the amount of data we transfer, we allow the data to fold as much as possible before we schedule the next cycle.
-## How frequently is a replication cycle scheduled?
+### How frequently is a replication cycle scheduled?
The formula to schedule the next replication cycle is (previous cycle time / 2) or one hour, whichever is higher. For example, if a VM takes four hours for a delta cycle, the next cycle is scheduled in two hours, and not in the next hour. The process is different immediately after initial replication, when the first delta cycle is scheduled immediately.
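The scheduling rule above reduces to a one-line expression. A minimal illustrative sketch (the function name is hypothetical, not Azure Migrate code):

```python
def next_cycle_delay_hours(previous_cycle_hours):
    # Next delta cycle = (previous cycle time / 2) or one hour,
    # whichever is higher.
    return max(previous_cycle_hours / 2, 1.0)

# A four-hour delta cycle schedules the next cycle in two hours;
# anything at two hours or faster settles at the one-hour floor.
```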
-## How do I migrate Windows Server 2003 running on VMware/Hyper-V to Azure?
+### I deployed two (or more) appliances to discover VMs in my vCenter Server. However, when I try to migrate the VMs, I only see VMs corresponding to one of the appliances.
-[Windows Server 2003 extended support](/troubleshoot/azure/virtual-machines/run-win-server-2003#microsoft-windows-server-2003-end-of-support) ended on July 14, 2015. The Azure support team continues to help in troubleshooting issues that concern running Windows Server 2003 on Azure. However, this support is limited to issues that don't require OS-level troubleshooting or patches.
-Migrating your applications to Azure instances running a newer version of Windows Server is the recommended approach to ensure that you are effectively leveraging the flexibility and reliability of the Azure cloud.
-
-However, if you still choose to migrate your Windows Server 2003 to Azure, you can use the Azure Migrate: Server Migration tool if your Windows Server is a VM running on VMware or Hyper-V
-Review this article to [prepare your Windows Server 2003 machines for migration](./prepare-windows-server-2003-migration.md).
+If multiple appliances are set up, there must be no overlap among the VMs on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario.
-## What is the difference between the Test Migration and Migrate operations?
-Test migration provides a way to test and validate migrations prior to the actual migration. Test migration works by letting you create test copies of replicating VMs in a sandbox environment in Azure. The sandbox environment is demarcated by a test virtual network you specify. The test migration operation is non-disruptive, with applications continuing to run at the source while letting you perform tests on a cloned copy in an isolated sandbox environment. You can perform multiple tests as needed to validate the migration, perform app testing, and address any issues before the actual migration.
+### How does agentless replication affect VMware servers?
-## Will Windows Server 2008 and 2008 R2 be supported in Azure after migration?
+Agentless replication results in some performance impact on VMware vCenter Server and VMware ESXi hosts. Because agentless replication uses snapshots, it consumes IOPS on storage, so some storage IOPS bandwidth is required. We don't recommend using agentless replication if you have constraints on storage or IOPS in your environment.
-You can migrate your on-premises Windows Server 2008 and 2008 R2 servers to Azure virtual machines and get Extended Security Updates for three years after the End of Support dates at no additional charge above the cost of running the virtual machine. You can use the Azure Migrate: Server Migration tool to migrate your Windows Server 2008 and 2008 R2 workloads.
-## Is there a Rollback option for Azure Migrate?
+## Agent-based Migration
-You can use the Test Migration option to validate your application functionality and performance in Azure. You can perform any number of test migrations and can execute the final migration after establishing confidence through the test migration operation.
-A test migration doesn't impact the on-premises machine, which remains operational and continues replicating until you perform the actual migration. If there were any errors during the test migration UAT, you can choose to postpone the final migration and keep your source VM/server running and replicating to Azure. You can reattempt the final migration once you resolve the errors.
-Note: Once you have performed a final migration to Azure and the on-premises source machine has been shut down, you cannot perform a rollback from Azure to your on-premises environment.
+### How can I migrate my AWS EC2 instances to Azure?
-## Can I select the Virtual Network and subnet to use for test migrations?
+Review the [article](./tutorial-migrate-aws-virtual-machines.md) to discover, assess, and migrate your AWS EC2 instances to Azure.
-You can select a Virtual Network for test migrations. The subnet is automatically selected based on the following logic:
+### How does agent-based Migration work?
-- If a target subnet (other than default) was specified as an input while enabling replication, then Azure Migrate prioritizes using a subnet with the same name in the Virtual Network selected for the test migration.
-- If the subnet with the same name is not found, then Azure Migrate selects the first subnet available alphabetically that is not a Gateway/Application Gateway/Firewall/Bastion subnet.
-
-## Why is the Test Migration button disabled for my Server?
+In addition to agentless migration options for VMware virtual machines and Hyper-V virtual machines, the Server Migration tool provides an agent-based migration option to migrate Windows and Linux servers running on physical servers, or running as x86/x64 virtual machines on VMware, Hyper-V, AWS, Google Cloud Platform, etc.
-The test migration button could be in a disabled state in the following scenarios:
+The agent-based migration method uses agent software installed on the server being migrated to replicate server data to Azure. The replication process uses an offload architecture in which the agent relays replication data to a dedicated replication server called the replication appliance or Configuration Server (or to a scale-out Process Server). [Learn more](./agent-based-migration-architecture.md) about how the agent-based migration option works.
-- You can't begin a test migration until an initial replication (IR) has been completed for the VM. The test migration button will be disabled until the IR process is completed. You can perform a test migration once your VM is in a delta-sync stage.
-- The button can be disabled if a test migration was already completed, but a test-migration cleanup was not performed for that VM. Perform a test migration cleanup and retry the operation.
+Note: The replication appliance is different from the Azure Migrate discovery appliance and must be installed on a separate/dedicated machine.
-## What happens if I don't clean up my test migration?
+### Where should I install the replication appliance for agent-based migrations?
-Test migration simulates the actual migration by creating a test Azure VM using replicated data. The server will be deployed with a point-in-time copy of the replicated data to the target Resource Group (selected while enabling replication) with a "-test" suffix. Test migrations are intended for validating server functionality so that post-migration issues are minimized. If the test migration is not cleaned up after testing, the test virtual machine will continue to run in Azure and will incur charges. To clean up after a test migration, go to the replicating machines view in the Server Migration tool, and use the 'cleanup test migration' action on the machine.
+The replication appliance should be installed on a dedicated machine. The replication appliance shouldn't be installed on a source machine that you want to replicate or on the Azure Migrate appliance (used for discovery and assessment) you may have installed before. Follow the [tutorial](./tutorial-migrate-physical-virtual-machines.md) for more details.
-## Can I migrate Active Directory domain-controllers using Azure Migrate?
+### Can I migrate AWS VMs running Amazon Linux Operating system?
-The Server Migration tool is application agnostic and works for most applications. When you migrate a server using the Server Migration tool, all the applications installed on the server are migrated along with it. However, for some applications, alternate migration methods other than server migration may be better suited for the migration. For Active Directory, in the case of hybrid environments where the on-premises site is connected to your Azure environment, you can extend your Directory into Azure by adding additional domain controllers in Azure and setting up Active Directory replication. If you are migrating into an isolated environment in Azure requiring its own domain controllers (or testing applications in a sandbox environment), you can migrate servers using the server migration tool.
+VMs running Amazon Linux cannot be migrated as-is, because the Amazon Linux OS is supported only on AWS.
+To migrate workloads running on Amazon Linux, you can spin up a CentOS/RHEL VM in Azure and migrate the workload running on the AWS Linux machine using a relevant workload migration approach. For example, depending on the workload, there may be workload-specific tools to aid the migration, such as for databases, or deployment tools in the case of web servers.
-## What happens if I donΓÇÖt stop replication after migration?
+### How do I gauge the bandwidth requirement for my migrations?
-When you stop replication, the Azure Migrate: Server Migration tool cleans up the managed disks in the subscription that were created for replication. If you do not stop replication after a migration, you will continue to incur charges for these disks. Stopping replication will not impact the disks attached to machines that have already been migrated.
+Bandwidth for replication of data to Azure depends on a range of factors and is a function of how fast the on-premises Azure Migrate appliance can read and replicate the data to Azure. Replication has two phases: initial replication and delta replication.
-## Do I need VMware vCenter to migrate VMware VMs?
+When replication starts for a VM, an initial replication cycle occurs in which full copies of the disks are replicated. After the initial replication is completed, incremental replication cycles (delta cycles) are scheduled periodically to transfer any changes that have occurred since the previous replication cycle.
-To [migrate VMware VMs](server-migrate-overview.md) using VMware agent-based or agentless migration, ESXi hosts on which VMs are located must be managed by vCenter Server. If you don't have vCenter Server, you can migrate VMware VMs by migrating them as physical servers. [Learn more](migrate-support-matrix-physical-migration.md).
+For an agent-based method of replication, the Deployment Planner can help profile the environment for the data churn and help predict the necessary bandwidth requirement. To learn more, view this [article](./agent-based-migration-architecture.md#plan-vmware-deployment).
-## Can I upgrade my OS while migrating?
+## Agentless Hyper-V Migration
-The Azure Migrate: Server Migration tool only supports like-for-like migrations currently. The tool doesn't support upgrading the OS version during migration. The migrated machine will have the same OS as the source machine.
+### How does Agentless Migration work?
-## I deployed two (or more) appliances to discover VMs in my vCenter Server. However, when I try to migrate the VMs, I only see VMs corresponding to one of the appliances.
+Azure Migrate: Server Migration provides agentless replication options for the migration of VMware virtual machines and Hyper-V virtual machines running Windows or Linux. The tool also provides an additional agent-based replication option for Windows and Linux servers that can be used to migrate physical servers, as well as x86/x64 virtual machines on VMware, Hyper-V, AWS, GCP, etc. The agent-based replication option requires the installation of agent software on the server/virtual machine that's being migrated, whereas in the agentless option no software needs to be installed on the virtual machines themselves, thus offering additional convenience and simplicity over the agent-based replication option.
-If there are multiple appliances set up, it is required there is no overlap among the VMs on the vCenter accounts provided. A discovery with such an overlap is an unsupported scenario.
+The agentless replication option works by using mechanisms provided by the virtualization provider (VMware, Hyper-V). In the case of Hyper-V virtual machines, the agentless replication mechanism uses VM snapshots and the change tracking capability of the Hyper-V replica to replicate data from virtual machine disks.
-## How do I know if my VM was successfully migrated?
+When replication is configured for a virtual machine, it first goes through an initial replication phase. During initial replication, a VM snapshot is taken, and a full copy of data from the snapshot disks is replicated to managed disks in your subscription. After initial replication for the VM is complete, the replication process transitions to an incremental replication (delta replication) phase. In the incremental replication phase, data changes that have occurred since the last completed replication cycle are periodically replicated and applied to the replica managed disks, thus keeping replication in sync with changes happening on the VM.
+
+In the case of VMware virtual machines, VMware changed block tracking technology is used to keep track of changes between replication cycles. At the start of the replication cycle, a VM snapshot is taken and changed block tracking is used to get the changes between the current snapshot and the last successfully replicated snapshot. That way, only data that has changed since the last completed replication cycle needs to be replicated to keep replication for the VM in sync. At the end of each replication cycle, the snapshot is released, and snapshot consolidation is performed for the virtual machine. Similarly, in the case of Hyper-V virtual machines, the Hyper-V replica change tracking engine is used to keep track of changes between consecutive replication cycles.
-Once you have migrated your VM/server successfully, you can view and manage the VM from the Virtual Machines page. Connect to the migrated VM to validate.
-Alternatively, you can review the 'Job status' for the operation to check if the migration was successfully completed. If you see any errors, resolve them, and retry the migration operation.
+When you perform the migrate operation on a replicating virtual machine, you have the option to shut down the on-premises virtual machine and perform one final incremental replication to ensure zero data loss. When you migrate, the replica managed disks corresponding to the virtual machine are used to create the virtual machine in Azure.
-## Can I consolidate multiple source VMs into one VM while migrating?
+To get started, refer to the [Hyper-V agentless migration](./tutorial-migrate-hyper-v.md) tutorial.
-Azure Migrate server migration capabilities support like-for-like migrations currently. We do not support consolidating servers or upgrading the operating system as part of the migration.
+### How do I gauge the bandwidth requirement for my migrations?
-## How does agentless replication affect VMware servers?
+Bandwidth for replication of data to Azure depends on a range of factors and is a function of how fast the on-premises Azure Migrate appliance can read and replicate the data to Azure. Replication has two phases: initial replication and delta replication.
-Agentless replication results in some performance impact on VMware vCenter Server and VMware ESXi hosts. Because agentless replication uses snapshots, it consumes IOPS on storage, so some IOPS storage bandwidth is required. We don't recommend using agentless replication if you have constraints on storage or IOPs in your environment.
+When replication starts for a VM, an initial replication cycle occurs in which full copies of the disks are replicated. After the initial replication is completed, incremental replication cycles (delta cycles) are scheduled periodically to transfer any changes that have occurred since the previous replication cycle.
+You can work out the bandwidth requirement based on the volume of data to be moved in the wave and the time within which you would like initial replication to complete. Ideally, initial replication should complete at least 3-4 days before the actual migration window, to give you sufficient time to perform a test migration beforehand and to keep downtime to a minimum during the window.
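As a rough back-of-the-envelope sketch of that calculation (the 500 GB volume and 72-hour window below are made-up example numbers, not guidance):

```shell
# Back-of-the-envelope bandwidth estimate for initial replication.
# Ignores compression and protocol overhead, so treat the result as a lower bound.
data_gb=500          # hypothetical amount of data to replicate in this wave
window_hours=72      # hypothetical window for initial replication to finish
mbps=$(( data_gb * 8 * 1024 / (window_hours * 3600) ))
echo "Sustained bandwidth needed: ~${mbps} Mbps"
```

For these example numbers, the estimate works out to roughly 15 Mbps of sustained throughput.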
## Next steps
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix.md
Azure Migrate: Server Migration | N/A | Migrate [VMware VMs](tutorial-migrate-vm
[DMS](../dms/dms-overview.md) | N/A | Migrate SQL Server, Oracle, MySQL, PostgreSQL, MongoDB.
[Lakeside](https://go.microsoft.com/fwlink/?linkid=2104908) | Assess virtual desktop infrastructure (VDI) | N/A
[Movere](https://www.movere.io/) | Assess VMware VMs, Hyper-V VMs, Xen VMs, physical servers, workstations (including VDI) and other cloud workloads. | N/A
-[RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMWare VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers and other cloud workloads
+[RackWare](https://go.microsoft.com/fwlink/?linkid=2102735) | N/A | Migrate VMware VMs, Hyper-V VMs, Xen VMs, KVM VMs, physical servers and other cloud workloads
[Turbonomic](https://go.microsoft.com/fwlink/?linkid=2094295) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads. | N/A
[UnifyCloud](https://go.microsoft.com/fwlink/?linkid=2097195) | Assess VMware VMs, Hyper-V VMs, physical servers and other cloud workloads, and SQL Server databases. | N/A
[Webapp Migration Assistant](https://appmigration.microsoft.com/) | Assess web apps | Migrate web apps.
migrate Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-network-connectivity.md
ms. Previously updated : 05/21/2021 Last updated : 06/15/2021
This is a non-exhaustive list of items that can be found in advanced or complex
- Custom gateway (NAT) solutions may impact how traffic is routed, including traffic from DNS queries. For more information, review the [troubleshooting guide for Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+## Common issues while using Azure Migrate with private endpoints
+In this section, we list some commonly occurring issues and suggest do-it-yourself troubleshooting steps to remediate them.
+
+### Appliance registration fails with the error ForbiddenToAccessKeyVault
+Azure Key Vault create or update operation failed for <_KeyVaultName_> due to the error <_ErrorMessage_>
+#### Possible causes:
+This issue can occur if the Azure account being used to register the appliance doesn't have the required permissions, or if the Azure Migrate appliance cannot access the Key Vault.
+#### Remediation:
+**Steps to troubleshoot Key Vault access issues:**
+1. Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
+2. Ensure that the user trying to register the appliance has access to the Key Vault and has an access policy assigned in the **Key Vault > Access Policy** section. [Learn more](/azure/key-vault/general/assign-access-policy-portal).
+- [Learn more](/azure/migrate/migrate-appliance#appliancevmware) about the required Azure roles and permissions.
+
+**Steps to troubleshoot connectivity issues to the Key Vault:**
+If you have enabled the appliance for private endpoint connectivity, use the following steps to troubleshoot network connectivity issues:
+- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the Key Vault private endpoint has been created) over a private link. The Key Vault private endpoint will be created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page.
+![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png)
+
+- Ensure that the appliance has network connectivity to the Key Vault over a private link. To validate the private link connectivity, perform a DNS resolution of the Key Vault resource endpoint from the on-premises server hosting the appliance and ensure that it resolves to a private IP address.
+- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for resources like the Key Vault created during the key generation step.
+
+ ![Azure Migrate server assessment properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-assessment-properties.png)
+- Select **Download DNS settings** to download the DNS mappings.
+
+ ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png)
+
+- Open the command line and run the following nslookup command to verify network connectivity to the Key Vault URL mentioned in the DNS settings file.
+
+ ```console
+ nslookup <your-key-vault-name>.vault.azure.net
+ ```
+
+ If you run the nslookup command to resolve the IP address of a key vault over a public endpoint, you will see a result that looks like this:
+
+ ```console
+ c:\ >nslookup <your-key-vault-name>.vault.azure.net
+
+ Non-authoritative answer:
+ Name:
+ Address: (public IP address)
+ Aliases: <your-key-vault-name>.vault.azure.net
+ ```
+
+ If you run the nslookup command to resolve the IP address of a key vault over a private endpoint, you will see a result that looks like this:
+
+ ```console
+ c:\ >nslookup <your-key-vault-name>.vault.azure.net
+
+ Non-authoritative answer:
+ Name:
+ Address: 10.12.4.20 (private IP address)
+ Aliases: <your-key-vault-name>.vault.azure.net
+ <your-key-vault-name>.privatelink.vaultcore.azure.net
+ ```
+
+ The nslookup command should resolve to a private IP address as mentioned above. The private IP address should match the one listed in the DNS settings file.
+
+If the DNS resolution is incorrect, follow these steps:
+1. Manually update the source environment DNS records by editing the DNS hosts file on the on-premises appliance with the DNS mappings and the associated private IP addresses. This option is recommended for testing.
+
+ ![DNS hosts file](./media/how-to-use-azure-migrate-with-private-endpoints/dns-hosts-file-1.png)
+
+2. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](/azure/private-link/private-endpoint-overview#dns-configuration).
+3. If the issue still persists, [refer to this section](/azure/migrate/troubleshoot-network-connectivity#validate-the-private-dns-zone) for further troubleshooting.
+
+After you've verified the connectivity, retry the registration process.
+
+### Start Discovery fails with the error AgentNotConnected
+The appliance could not initiate discovery as the on-premises agent is unable to communicate with the Azure Migrate service endpoint: <_URLname_> in Azure.
+
+![Agent not connected error](./media/how-to-use-azure-migrate-with-private-endpoints/agent-not-connected-error.png)
+
+#### Possible causes:
+This issue can occur if the appliance is unable to reach the service endpoint(s) mentioned in the error message.
+#### Remediation:
+Ensure that the appliance has connectivity either directly or via proxy and can resolve the service endpoint provided in the error message.
+
+If you have enabled the appliance for private endpoint connectivity, ensure that the appliance is connected to the Azure virtual network over a private link and can resolve the service endpoint(s) provided in the error message.
+
+**Steps to troubleshoot private link connectivity issues to Azure Migrate service endpoints:**
+
+If you have enabled the appliance for private endpoint connectivity, use the following steps to troubleshoot network connectivity issues:
+
+- Ensure that the appliance is either hosted in the same virtual network or is connected to the target Azure virtual network (where the private endpoints have been created) over a private link. Private endpoints for the Azure Migrate services are created in the virtual network selected during the project creation experience. You can verify the virtual network details in the **Azure Migrate > Properties** page.
+
+![Azure Migrate properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-properties-page.png)
+
+- Ensure that the appliance has network connectivity to the service endpoint URLs and other URLs, mentioned in the error message, over a private link connection. To validate private link connectivity, perform a DNS resolution of the URLs from the on-premises server hosting the appliance and ensure that it resolves to private IP addresses.
+- Go to **Azure Migrate: Discovery and assessment> Properties** to find the details of private endpoints for the service endpoints created during the key generation step.
+ ![Azure Migrate server assessment properties](./media/how-to-use-azure-migrate-with-private-endpoints/azure-migrate-server-assessment-properties.png)
+- Select **Download DNS settings** to download the DNS mappings.
+
+ ![Download DNS settings](./media/how-to-use-azure-migrate-with-private-endpoints/download-dns-settings.png)
+
+|**DNS mappings containing Private endpoint URLs** | **Details** |
+| | |
+|*.disc.privatelink.test.migration.windowsazure.com | Azure Migrate Discovery service endpoint
+|*.asm.privatelink.test.migration.windowsazure.com | Azure Migrate Assessment service endpoint
+|*.hub.privatelink.test.migration.windowsazure.com | Azure Migrate hub endpoint to receive data from other Microsoft or external [independent software vendor (ISV)](/azure/migrate/migrate-services-overview#isv-integration) offerings
+|*.vault.azure.net | Key Vault endpoint
+|*.blob.core.windows.net | Storage account endpoint for dependency and performance data
+
+In addition to the URLs above, the appliance needs access to the following URLs over the Internet, directly or via a proxy.
+
+| **Other public cloud URLs <br> (Public endpoint URLs)** | **Details** |
+| | |
+|*.portal.azure.com | Navigate to the Azure portal
+|*.windows.net <br/> *.msftauth.net <br/> *.msauth.net <br/> *.microsoft.com <br/> *.live.com <br/> *.office.com <br/> *.microsoftonline.com <br/> *.microsoftonline-p.com <br/> | Used for access control and identity management by Azure Active Directory
+|management.azure.com | For triggering Azure Resource Manager deployments
+|*.services.visualstudio.com (optional) | Upload appliance logs used for internal monitoring
+|aka.ms/* (optional) | Allow access to aka links; used to download and install the latest updates for appliance services
+|download.microsoft.com/download | Allow downloads from Microsoft download center
+
+- Open the command line and run the following nslookup command to verify private link connectivity to the URLs listed in the DNS settings file. Repeat this step for all URLs in the DNS settings file.
+
+ _**Illustration**: verifying private link connectivity to the discovery service endpoint_
+
+ ```console
+ nslookup 04b8c9c73f3d477e966c8d00f352889c-agent.cus.disc.privatelink.prod.migration.windowsazure.com
+ ```
+ If the request can reach the discovery service endpoint over a private endpoint, you will see a result that looks like this:
+
+ ```console
+ nslookup 04b8c9c73f3d477e966c8d00f352889c-agent.cus.disc.privatelink.prod.migration.windowsazure.com
+
+ Non-authoritative answer:
+ Name:
+ Address: 10.12.4.23 (private IP address)
+ Aliases: 04b8c9c73f3d477e966c8d00f352889c-agent.cus.disc.privatelink.prod.migration.windowsazure.com
+ prod.cus.discoverysrv.windowsazure.com
+ ```
+
+ The nslookup command should resolve to a private IP address as mentioned above. The private IP address should match the one listed in the DNS settings file.
+
+If the DNS resolution is incorrect, follow these steps:
+1. Manually update the source environment DNS records by editing the DNS hosts file on the on-premises appliance with the DNS mappings and the associated private IP addresses. This option is recommended for testing.
+
+ ![DNS hosts file](./media/how-to-use-azure-migrate-with-private-endpoints/dns-hosts-file-1.png)
+
+2. If you use a custom DNS server, review your custom DNS settings, and validate that the DNS configuration is correct. For guidance, see [private endpoint overview: DNS configuration](/azure/private-link/private-endpoint-overview#dns-configuration).
+3. If the issue still persists, [refer to this section](/azure/migrate/troubleshoot-network-connectivity#validate-the-private-dns-zone) for further troubleshooting.
+
+After you've verified the connectivity, retry the discovery process.
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-dump-restore.md
- Title: Migrate using dump and restore - Azure Database for MySQL
-description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.
---- Previously updated : 10/30/2020--
-# Migrate your MySQL database to Azure Database for MySQL using dump and restore
--
-This article explains two common ways to back up and restore databases in your Azure Database for MySQL:
-- Dump and restore from the command-line (using mysqldump)
-- Dump and restore using PHPMyAdmin
-
-You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. This guide provides guidance for the successful planning and execution of a MySQL migration to Azure.
-
-## Before you begin
-To step through this how-to guide, you need to have:
-- [Create Azure Database for MySQL server - Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
-- [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) command-line utility installed on a machine.
-- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to do dump and restore commands.
-
-> [!TIP]
-> If you are looking to migrate large databases (1 TB or larger), you may want to consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
--
-## Common use-cases for dump and restore
-
-Most common use-cases are:
-- **Moving from another managed service provider** - Most managed service providers may not provide access to the physical storage file for security reasons, so logical backup and restore is the only option to migrate.
-- **Migrating from an on-premises environment or virtual machine** - Azure Database for MySQL doesn't support restore of physical backups, which makes logical backup and restore the ONLY approach.
-- **Moving your backup storage from locally redundant to geo-redundant storage** - Azure Database for MySQL allows configuring locally redundant or geo-redundant storage for backup only during server create. Once the server is provisioned, you cannot change the backup storage redundancy option. In order to move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
-- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL supports only the InnoDB storage engine, and therefore does not support alternative storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migration to Azure Database for MySQL.
-
- For example, if you have a WordPress or WebApp using the MyISAM tables, first convert those tables by migrating into InnoDB format before restoring to Azure Database for MySQL. Use the clause `ENGINE=InnoDB` to set the engine used when creating a new table, then transfer the data into the compatible table before the restore.
-
- ```sql
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
- ```
-> [!Important]
-> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to Azure Database for MySQL configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL server, and is not supported.
-> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade`, before attempting migration into an Azure Database for MySQL.
-
-## Performance considerations
-To optimize performance, take notice of these considerations when dumping large databases:
-- Use the `exclude-triggers` option in mysqldump when dumping databases. Exclude triggers from dump files to avoid the trigger commands firing during the data restore.
-- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
-- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
-- Use the `order-by-primary` option in mysqldump when dumping databases, so that the data is scripted in primary key order.
-- Use the `disable-keys` option in mysqldump when dumping data, to disable foreign key constraints before load. Disabling foreign key checks provides performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Use partitioned tables when appropriate.
-- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal.
-- Use the `defer-table-indexes` option in mysqlpump when dumping databases, so that index creation happens after table data is loaded.
-- Use the `skip-definer` option in mysqlpump to omit definer and SQL SECURITY clauses from the create statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values.
-- Copy the backup files to an Azure blob store and perform the restore from there, which should be a lot faster than performing the restore across the Internet.
-
-## Create a database on the target Azure Database for MySQL server
-Create an empty database on the target Azure Database for MySQL server where you want to migrate the data. Use a tool such as MySQL Workbench or mysql.exe to create the database. The database can have the same name as the database that is contained the dumped data or you can create a database with a different name.
-
-To get connected, locate the connection information in the **Overview** of your Azure Database for MySQL.
--
-Add the connection information into your MySQL Workbench.
--
-## Preparing the target Azure Database for MySQL server for fast data loads
-To prepare the target Azure Database for MySQL server for faster data loads, the following server parameters and configuration needs to be changed.
-- max_allowed_packet – set to 1073741824 (i.e. 1GB) to prevent any overflow issue due to long rows.
-- slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.
-- query_store_capture_mode – set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
-- innodb_buffer_pool_size – Scale up the server to 32 vCore Memory Optimized SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for Azure Database for MySQL server.
-- innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
-- innodb_write_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
-- Scale up Storage tier – The IOPs for Azure Database for MySQL server increases progressively with the increase in storage tier. For faster loads, you may want to increase the storage tier to increase the IOPs provisioned. Please do remember the storage can only be scaled up, not down.
-
-Once the migration is completed, you can revert back the server parameters and compute tier configuration to its previous values.
-
-## Dump and restore using mysqldump utility
-
-### Create a backup file from the command-line using mysqldump
-To back up an existing MySQL database on the local on-premises server or in a virtual machine, run the following command:
-```bash
-$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
-```
-
-The parameters to provide are:
-- [uname] Your database username
-- [pass] The password for your database (note there is no space between -p and the password)
-- [dbname] The name of your database
-- [backupfile.sql] The filename for your database backup
-- [--opt] The mysqldump option
-
-For example, to back up a database named 'testdb' on your MySQL server with the username 'testuser' and with no password to a file testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database. Make sure that the username 'testuser' has at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the --single-transaction option is not used.
-
-```bash
-GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'testuser'@'hostname' IDENTIFIED BY 'password';
-```
-Now run mysqldump to create the backup of `testdb` database
-
-```bash
-$ mysqldump -u root -p testdb > testdb_backup.sql
-```
-To select specific tables in your database to back up, list the table names separated by spaces. For example, to back up only table1 and table2 tables from the 'testdb', follow this example:
-
-```bash
-$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
-```
-To back up more than one database at once, use the --database switch and list the database names separated by spaces.
-```bash
-$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
-```
-
-### Restore your MySQL database using command-line or MySQL Workbench
-Once you have created the target database, you can use the mysql command or MySQL Workbench to restore the data into the specific newly created database from the dump file.
-```bash
-mysql -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
-```
-In this example, restore the data into the newly created database on the target Azure Database for MySQL server.
-
-Here is an example for how to use this **mysql** for **Single Server** :
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
-```
-Here is an example for how to use this **mysql** for **Flexible Server** :
-
-```bash
-$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb < testdb_backup.sql
-```
--
-## Dump and restore using PHPMyAdmin
-Follow these steps to dump and restore a database using PHPMyadmin.
-
-> [!NOTE]
-> For single server, the username must be in this format , 'username@servername' but for flexible server you can just use 'username' If you use 'username@servername' for flexible server, the connection will fail.
-
-### Export with PHPMyadmin
-To export, you can use the common tool phpMyAdmin, which you may already have installed locally in your environment. To export your MySQL database using PHPMyAdmin:
-1. Open phpMyAdmin.
-2. Select your database. Click the database name in the list on the left.
-3. Click the **Export** link. A new page appears to view the dump of database.
-4. In the Export area, click the **Select All** link to choose the tables in your database.
-5. In the SQL options area, click the appropriate options.
-6. Click the **Save as file** option and the corresponding compression option and then click the **Go** button. A dialog box should appear prompting you to save the file locally.
-
-### Import using PHPMyAdmin
-Importing your database is similar to exporting. Do the following actions:
-1. Open phpMyAdmin.
-2. In the phpMyAdmin setup page, click **Add** to add your Azure Database for MySQL server. Provide the connection details and login information.
-3. Create an appropriately named database and select it on the left of the screen. To rewrite the existing database, click the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
-4. Click the **SQL** link to show the page where you can type in SQL commands, or upload your SQL file.
-5. Use the **browse** button to find the database file.
-6. Click the **Go** button to export the backup, execute the SQL commands, and re-create your database.
-
-## Known Issues
-For known issues, tips and tricks, we recommend you to look at our [techcommunity blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/tips-and-tricks-in-using-mysqldump-and-mysql-restore-to-azure/ba-p/916912).
-
-## Next steps
-- [Connect applications to Azure Database for MySQL](./howto-connection-string.md).
-- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-- If you are looking to migrate large databases with database sizes more than 1 TBs, you may want to consider using community tools like **mydumper/myloader** which supports parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
+
+ Title: Migrate using dump and restore - Azure Database for MySQL
+description: This article explains two common ways to back up and restore databases in your Azure Database for MySQL, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin.
++++ Last updated : 10/30/2020++
+# Migrate your MySQL database to Azure Database for MySQL using dump and restore
++
+This article explains two common ways to back up and restore databases in your Azure Database for MySQL:
+- Dump and restore from the command-line (using mysqldump)
+- Dump and restore using PHPMyAdmin
+
+You can also refer to the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide) for detailed information and use cases about migrating databases to Azure Database for MySQL. That guide helps you plan and execute a successful MySQL migration to Azure.
+
+## Before you begin
+To step through this how-to guide, you need:
+- An Azure Database for MySQL server, [created through the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md)
+- The [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) command-line utility installed on a machine
+- [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool to run dump and restore commands
+
+> [!TIP]
+> If you are looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
++
+## Common use-cases for dump and restore
+
+The most common use cases are:
+
+- **Moving from another managed service provider** - Most managed service providers don't provide access to the physical storage file for security reasons, so logical backup and restore is the only option for migration.
+- **Migrating from an on-premises environment or a virtual machine** - Azure Database for MySQL doesn't support restoring physical backups, which makes logical backup and restore the only approach.
+- **Moving your backup storage from locally redundant to geo-redundant storage** - Azure Database for MySQL lets you configure locally redundant or geo-redundant backup storage only during server creation. Once the server is provisioned, you can't change the backup storage redundancy option, so dump and restore is the only way to move your backups from locally redundant to geo-redundant storage.
+- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL supports only the InnoDB storage engine. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migrating to Azure Database for MySQL.
+
+ For example, if you have a WordPress site or web app that uses MyISAM tables, first convert those tables to InnoDB format before restoring to Azure Database for MySQL. Use the clause `ENGINE=InnoDB` to set the engine when creating a new table, then transfer the data into the compatible table before the restore.
+
+ ```sql
+ INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
+ ```
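As an alternative to copying rows into a new table, existing tables can also be converted in place with `ALTER TABLE ... ENGINE=InnoDB`. The sketch below only generates those statements; the table names are hypothetical, and in practice you would list them from `information_schema.tables` where `engine = 'MyISAM'`:

```shell
#!/bin/sh
# Generate ALTER TABLE statements that convert MyISAM tables to InnoDB in
# place. The table names below are hypothetical placeholders; query
# information_schema.tables on your own server for the real list.
STMTS=$(for t in wp_posts wp_comments; do
  printf 'ALTER TABLE `%s` ENGINE=InnoDB;\n' "$t"
done)
echo "$STMTS"
```

You would then feed the generated statements to the mysql client against the source database before taking the dump.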
+> [!Important]
+> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to Azure Database for MySQL configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL server, and is not supported.
+> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade`, before attempting migration into an Azure Database for MySQL.
+
+## Performance considerations
+To optimize performance, take notice of these considerations when dumping large databases:
+- Use the `skip-triggers` option in mysqldump when dumping databases (mysqlpump's equivalent is `exclude-triggers`), so that triggers are excluded from dump files and the trigger commands don't fire during the data restore.
+- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
+- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
+- Use the `order-by-primary` option in mysqldump when dumping databases, so that the data is scripted in primary key order.
+- Use the `disable-keys` option in mysqldump when dumping data, so that each table's load is wrapped in statements that disable and re-enable keys. Also disabling foreign key checks during the load provides performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
+- Use partitioned tables when appropriate.
+- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources using the metrics available in the Azure portal.
+- Use the `defer-table-indexes` option in mysqlpump when dumping databases, so that index creation happens after tables data is loaded.
+- Use the `skip-definer` option in mysqlpump to omit definer and SQL SECURITY clauses from the create statements for views and stored procedures. When you reload the dump file, it creates objects that use the default DEFINER and SQL SECURITY values.
+- Copy the backup files to an Azure blob/store and perform the restore from there, which should be a lot faster than performing the restore across the Internet.
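Several of the mysqldump recommendations above can be combined into a single invocation. A sketch that only composes and prints the command rather than running it (the user and database names are hypothetical placeholders):

```shell
#!/bin/sh
# Compose a mysqldump command that combines the performance options discussed
# above. Printed instead of executed; 'testuser' and 'testdb' are placeholders.
DUMP_OPTS="--single-transaction --quick --extended-insert --order-by-primary --disable-keys --skip-triggers"
CMD="mysqldump $DUMP_OPTS -u testuser -p testdb"
echo "$CMD > testdb_backup.sql"
```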
+
+## Create a database on the target Azure Database for MySQL server
+Create an empty database on the target Azure Database for MySQL server where you want to migrate the data. Use a tool such as MySQL Workbench or mysql.exe to create the database. The database can have the same name as the database that contained the dumped data, or you can create a database with a different name.
+
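With the mysql client, the empty target database can be created in one line. A sketch that prints the command for a Single Server target (the host, admin, and database names are hypothetical placeholders):

```shell
#!/bin/sh
# Print a mysql one-liner that creates the empty target database.
# Host, user, and database names are hypothetical placeholders.
HOST="mydemoserver.mysql.database.azure.com"
USER="myadmin@mydemoserver"   # Single Server format; use 'myadmin' on Flexible Server
CMD="mysql -h $HOST -u $USER -p -e \"CREATE DATABASE testdb;\""
echo "$CMD"
```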
+To get connected, locate the connection information in the **Overview** of your Azure Database for MySQL.
++
+Add the connection information into your MySQL Workbench.
++
+## Preparing the target Azure Database for MySQL server for fast data loads
+To prepare the target Azure Database for MySQL server for faster data loads, the following server parameters and configuration need to be changed:
+- max_allowed_packet – set to 1073741824 (that is, 1 GB) to prevent any overflow issues caused by long rows.
+- slow_query_log – set to OFF to turn off the slow query log. This eliminates the overhead caused by slow query logging during data loads.
+- query_store_capture_mode – set to NONE to turn off Query Store. This eliminates the overhead caused by Query Store's sampling activities.
+- innodb_buffer_pool_size – Scale up the server to the 32 vCore Memory Optimized SKU from the Pricing tier of the portal during migration to increase innodb_buffer_pool_size. Innodb_buffer_pool_size can only be increased by scaling up compute for the Azure Database for MySQL server.
+- innodb_io_capacity & innodb_io_capacity_max – Change to 9000 from the Server parameters in the Azure portal to improve I/O utilization and optimize for migration speed.
+- innodb_write_io_threads – Change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
+- Scale up the Storage tier – The IOPS for an Azure Database for MySQL server increase progressively with the storage tier. For faster loads, you may want to increase the storage tier to increase the IOPS provisioned. Remember that storage can only be scaled up, not down.
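These parameter changes can be scripted with the Azure CLI's `az mysql server configuration set` command. A sketch that prints the commands instead of running them (the resource group and server names are hypothetical placeholders):

```shell
#!/bin/sh
# Print the az CLI commands that apply the pre-load parameter values above.
# 'myresourcegroup' and 'mydemoserver' are hypothetical placeholders.
RG="myresourcegroup"
SERVER="mydemoserver"
set_param() {
  echo "az mysql server configuration set --resource-group $RG --server-name $SERVER --name $1 --value $2"
}
set_param max_allowed_packet 1073741824
set_param slow_query_log OFF
set_param query_store_capture_mode NONE
set_param innodb_io_capacity 9000
```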
+
+Once the migration is complete, you can revert the server parameters and compute tier configuration to their previous values.
+
+## Dump and restore using mysqldump utility
+
+### Create a backup file from the command-line using mysqldump
+To back up an existing MySQL database on the local on-premises server or in a virtual machine, run the following command:
+```bash
+$ mysqldump --opt -u [uname] -p[pass] [dbname] > [backupfile.sql]
+```
+
+The parameters to provide are:
+- [uname] Your database username
+- [pass] The password for your database (note there is no space between -p and the password)
+- [dbname] The name of your database
+- [backupfile.sql] The filename for your database backup
+- [--opt] The mysqldump option
+
+For example, to back up a database named 'testdb' on your MySQL server with the username 'testuser' to a file named testdb_backup.sql, use the following commands. The backup file contains all the SQL statements needed to re-create the database. Make sure that the user 'testuser' has at least the SELECT privilege for dumped tables, SHOW VIEW for dumped views, TRIGGER for dumped triggers, and LOCK TABLES if the `--single-transaction` option is not used:
+
+```bash
+GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'testuser'@'hostname' IDENTIFIED BY 'password';
+```
+Now run mysqldump to create a backup of the `testdb` database:
+
+```bash
+$ mysqldump -u root -p testdb > testdb_backup.sql
+```
+To select specific tables in your database to back up, list the table names separated by spaces. For example, to back up only the table1 and table2 tables from 'testdb', use the following example:
+
+```bash
+$ mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
+```
+To back up more than one database at once, use the --databases switch and list the database names separated by spaces.
+```bash
+$ mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
+```
+
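When dumping several databases, writing each one to its own file lets you restore them in parallel later, in line with the parallel-load advice above. A sketch that only prints the per-database commands it would run (the database names are hypothetical placeholders):

```shell
#!/bin/sh
# Print one mysqldump command per database so the resulting files can be
# restored in parallel. Database names are hypothetical placeholders.
PLAN=$(for db in testdb1 testdb3 testdb5; do
  echo "mysqldump -u root -p --single-transaction $db > ${db}_backup.sql"
done)
echo "$PLAN"
```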
+### Restore your MySQL database using command-line or MySQL Workbench
+Once you have created the target database, you can use the mysql command or MySQL Workbench to restore the data from the dump file into the newly created database.
+```bash
+mysql -h [hostname] -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]
+```
+The following examples restore the data into a newly created database on the target Azure Database for MySQL server.
+
+Here is an example of how to use **mysql** with **Single Server**:
+
+```bash
+$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p testdb < testdb_backup.sql
+```
+Here is an example of how to use **mysql** with **Flexible Server**:
+
+```bash
+$ mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb < testdb_backup.sql
+```
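If you don't need to keep the dump file, the dump can also be streamed straight into the target server with a pipe, avoiding the intermediate file. A sketch that only prints the pipeline (the host, user, and database names are hypothetical Flexible Server placeholders):

```shell
#!/bin/sh
# Print a pipeline that streams a dump directly into the target server.
# Host, user, and database names are hypothetical placeholders.
SRC="mysqldump --single-transaction -u root -p testdb"
DST="mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p testdb"
PIPE="$SRC | $DST"
echo "$PIPE"
```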
++
+## Dump and restore using PHPMyAdmin
+Follow these steps to dump and restore a database using phpMyAdmin.
+
+> [!NOTE]
+> For Single Server, the username must be in the format 'username@servername'. For Flexible Server, use 'username' only. If you use 'username@servername' with Flexible Server, the connection will fail.
+
+### Export with phpMyAdmin
+To export, you can use the common tool phpMyAdmin, which you may already have installed locally in your environment. To export your MySQL database using phpMyAdmin:
+1. Open phpMyAdmin.
+2. Select your database. Click the database name in the list on the left.
+3. Click the **Export** link. A new page appears where you can view the dump of the database.
+4. In the Export area, click the **Select All** link to choose the tables in your database.
+5. In the SQL options area, click the appropriate options.
+6. Click the **Save as file** option and the corresponding compression option and then click the **Go** button. A dialog box should appear prompting you to save the file locally.
+
+### Import using PHPMyAdmin
+Importing your database is similar to exporting. Do the following actions:
+1. Open phpMyAdmin.
+2. In the phpMyAdmin setup page, click **Add** to add your Azure Database for MySQL server. Provide the connection details and login information.
+3. Create an appropriately named database and select it on the left of the screen. To rewrite the existing database, click the database name, select all the check boxes beside the table names, and select **Drop** to delete the existing tables.
+4. Click the **SQL** link to show the page where you can type in SQL commands, or upload your SQL file.
+5. Use the **browse** button to find the database file.
+6. Click the **Go** button to export the backup, execute the SQL commands, and re-create your database.
+
+## Known Issues
+For known issues, tips, and tricks, see the [Tech Community blog](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/tips-and-tricks-in-using-mysqldump-and-mysql-restore-to-azure/ba-p/916912).
+
+## Next steps
+- [Connect applications to Azure Database for MySQL](./howto-connection-string.md).
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+- If you are looking to migrate databases larger than 1 TB, consider using community tools like **mydumper/myloader**, which support parallel export and import. Learn [How to migrate large MySQL databases](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/best-practices-for-migrating-large-databases-to-azure-database/ba-p/1362699).
mysql Concepts Migrate Import Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-migrate-import-export.md
- Title: Import and export - Azure Database for MySQL
-description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.
----- Previously updated : 10/30/2020--
-# Migrate your MySQL database by using import and export
--
-This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
-
-For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
-
-For other migration scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
-
-## Prerequisites
-
-Before you begin migrating your MySQL database, you need to:
-- Create an [Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md).
-- Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for importing and exporting.
-
-## Create a database on the Azure Database for MySQL server
-Create an empty database on the Azure Database for MySQL server by using MySQL Workbench, Toad, or Navicat. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name.
-
-To get connected, do the following:
-
-1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
-
- :::image type="content" source="./media/concepts-migrate-import-export/1_server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal.":::
-
-1. Add the connection information to MySQL Workbench.
-
- :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
-
-## Determine when to use import and export techniques
-
-> [!TIP]
-> For scenarios where you want to dump and restore the entire database, use the [dump and restore](concepts-migrate-dump-restore.md) approach instead.
-
-In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1.pdf).
-
-- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
-- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
-- When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html).
-
-> [!Important]
-> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
->
-> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
-
- ```sql
- INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
- ```
-
-## Performance recommendations for import and export
-
-For optimal data import and export performance, we recommend that you do the following:
-- Create clustered indexes and primary keys before you load data. Load the data in primary key order.
-- Delay the creation of secondary indexes until after the data is loaded.
-- Disable foreign key constraints before you load the data. Disabling foreign key checks provides significant performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
-- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
-- Use partitioned tables when appropriate.
-
-## Import and export data by using MySQL Workbench
-There are two ways to export and import data in MySQL Workbench: from the object browser context menu or from the Navigator pane. Each method serves a different purpose.
-
-> [!NOTE]
-> If you're adding a connection to MySQL Single Server or Flexible Server (Preview) on MySQL Workbench, do the following:
-> - For MySQL Single Server, make sure that the user name is in the format *\<username@servername>*.
-> - For MySQL Flexible Server, use *\<username>* only. If you use *\<username@servername>* to connect, the connection will fail.
-
-### Run the table data export and import wizards from the object browser context menu
--
-The table data wizards support import and export operations by using CSV and JSON files. The wizards include several configuration options, such as separators, column selection, and encoding selection. You can run each wizard against local or remotely connected MySQL servers. The import action includes table, column, and type mapping.
-
-To access these wizards from the object browser context menu, right-click a table, and then select **Table Data Export Wizard** or **Table Data Import Wizard**.
-
-#### The table data export wizard
-
-To export a table to a CSV file:
-
-1. Right-click the table of the database to be exported.
-1. Select **Table Data Export Wizard**. Select the columns to be exported, row offset (if any), and count (if any).
-1. On the **Select data for export** pane, select **Next**. Select the file path, CSV, or JSON file type. Also select the line separator, method of enclosing strings, and field separator.
-1. On the **Select output file location** pane, select **Next**.
-1. On the **Export data** pane, select **Next**.
-
-#### The table data import wizard
-
-To import a table from a CSV file:
-
-1. Right-click the table of the database to be imported.
-1. Look for and select the CSV file to be imported, and then select **Next**.
-1. Select the destination table (new or existing), select or clear the **Truncate table before import** check box, and then select **Next**.
-1. Select the encoding and the columns to be imported, and then select **Next**.
-1. On the **Import data** pane, select **Next**. The wizard imports the data.
-
-### Run the SQL data export and import wizards from the Navigator pane
-Use a wizard to export or import SQL data that's generated from MySQL Workbench or from the mysqldump command. You can access the wizards from the **Navigator** pane or you can select **Server** from the main menu.
-
-#### Export data
--
-You can use the **Data Export** pane to export your MySQL data.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Export**.
-
-1. On the **Data Export** pane, select each schema that you want to export.
-
- For each schema, you can select specific schema objects or tables to export. Configuration options include export to a project folder or a self-contained SQL file, dump stored routines and events, or skip table data.
-
- Alternatively, use **Export a Result Set** to export a specific result set in the SQL editor to another format, such as CSV, JSON, HTML, and XML.
-
-1. Select the database objects to export, and configure the related options.
-1. Select **Refresh** to load the current objects.
-1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
-1. Select **Start Export** to begin the export process.
--
-#### Import data
--
-You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
-
-1. In MySQL Workbench, on the **Navigator** pane, select **Data Export/Restore**.
-1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema.
-1. Select **Start Import** to begin the import process.
-
-## Next steps
-- For another migration approach, see [Migrate your MySQL database to an Azure database for MySQL by using dump and restore](concepts-migrate-dump-restore.md).
-- For more information about migrating databases to an Azure database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+
+ Title: Import and export - Azure Database for MySQL
+description: This article explains common ways to import and export databases in Azure Database for MySQL, by using tools such as MySQL Workbench.
+++++ Last updated : 10/30/2020++
+# Migrate your MySQL database by using import and export
++
+This article explains two common approaches to importing and exporting data to an Azure Database for MySQL server by using MySQL Workbench.
+
+For detailed and comprehensive migration guidance, see the [migration guide resources](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+
+For other migration scenarios, see the [Database Migration Guide](https://datamigration.microsoft.com/).
+
+## Prerequisites
+
+Before you begin migrating your MySQL database, you need to:
+- Create an [Azure Database for MySQL server by using the Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md).
+- Download and install [MySQL Workbench](https://dev.mysql.com/downloads/workbench/) or another third-party MySQL tool for importing and exporting.
+
+## Create a database on the Azure Database for MySQL server
+Create an empty database on the Azure Database for MySQL server by using MySQL Workbench, Toad, or Navicat. The database can have the same name as the database that contains the dumped data, or you can create a database with a different name.
+
+To get connected, do the following:
+
+1. In the Azure portal, look for the connection information on the **Overview** pane of your Azure Database for MySQL.
+
+ :::image type="content" source="./media/concepts-migrate-import-export/1_server-overview-name-login.png" alt-text="Screenshot of the Azure Database for MySQL server connection information in the Azure portal.":::
+
+1. Add the connection information to MySQL Workbench.
+
+ :::image type="content" source="./media/concepts-migrate-import-export/2_setup-new-connection.png" alt-text="Screenshot of the MySQL Workbench connection string.":::
+
+## Determine when to use import and export techniques
+
+> [!TIP]
+> For scenarios where you want to dump and restore the entire database, use the [dump and restore](concepts-migrate-dump-restore.md) approach instead.
+
+In the following scenarios, use MySQL tools to import and export databases into your MySQL database. For other tools, go to the "Migration Methods" section (page 22) of the [MySQL to Azure Database migration guide](https://github.com/Azure/azure-mysql/blob/master/MigrationGuide/MySQL%20Migration%20Guide_v1.1.pdf).
+
+- When you need to selectively choose a few tables to import from an existing MySQL database into your Azure MySQL database, it's best to use the import and export technique. By doing so, you can omit any unneeded tables from the migration to save time and resources. For example, use the `--include-tables` or `--exclude-tables` switch with [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html#option_mysqlpump_include-tables), and the `--tables` switch with [mysqldump](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_tables).
+- When you're moving database objects other than tables, explicitly create those objects. Include constraints (primary key, foreign key, and indexes), views, functions, procedures, triggers, and any other database objects that you want to migrate.
+- When you're migrating data from external data sources other than a MySQL database, create flat files and import them by using [mysqlimport](https://dev.mysql.com/doc/refman/5.7/en/mysqlimport.html).
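
The selective-table switches mentioned above can be sketched as follows. All server, database, and table names are hypothetical, and each command is echoed as a dry run so it can be inspected; remove `echo` to run it against a real server.

```shell
# Hypothetical names throughout: myserver, mydb, table1, table2, mytable.
# Each command is echoed as a dry run; remove `echo` to execute for real.

# mysqldump: dump only selected tables from one database
echo mysqldump -h myserver.mysql.database.azure.com -u 'admin@myserver' -p \
    mydb --tables table1 table2

# mysqlpump: include or exclude tables by name
echo mysqlpump -h myserver.mysql.database.azure.com -u 'admin@myserver' -p \
    --include-databases=mydb --include-tables=table1,table2

# mysqlimport: load a flat file into the table named after the file (mytable)
echo mysqlimport -h myserver.mysql.database.azure.com -u 'admin@myserver' -p \
    --fields-terminated-by=',' mydb /tmp/mytable.txt
```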
+
+> [!Important]
+> Both Single Server and Flexible Server support only the InnoDB storage engine. Make sure that all tables in the database use the InnoDB storage engine when you're loading data into your Azure database for MySQL.
+>
+> If your source database uses another storage engine, convert to the InnoDB engine before you migrate the database. For example, if you have a WordPress or web app that uses the MyISAM engine, first convert the tables by migrating the data into InnoDB tables. Use the clause `ENGINE=INNODB` to set the engine for creating a table, and then transfer the data into the compatible table before the migration.
+
+ ```sql
+ INSERT INTO innodb_table SELECT * FROM myisam_table ORDER BY primary_key_columns
+ ```
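
If the MyISAM tables already exist, an in-place conversion with `ALTER TABLE` is a common alternative to the `INSERT INTO ... SELECT` approach above. The sketch below uses hypothetical schema and table names; `cat` keeps it a dry run, and you would pipe the statements to the `mysql` client to execute them.

```shell
# Hypothetical schema/table names; `cat` keeps this a dry run.
cat <<'SQL'
ALTER TABLE myisam_table ENGINE=InnoDB;
-- Verify the engine afterward:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'myisam_table';
SQL
```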
+
+## Performance recommendations for import and export
+
+For optimal data import and export performance, we recommend that you do the following:
+- Create clustered indexes and primary keys before you load data. Load the data in primary key order.
+- Delay the creation of secondary indexes until after the data is loaded.
+- Disable foreign key constraints before you load the data. Disabling foreign key checks provides significant performance gains. Enable the constraints and verify the data after the load to ensure referential integrity.
+- Load data in parallel. Avoid too much parallelism that would cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
+- Use partitioned tables when appropriate.
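
The recommendation to disable constraint checks during the load can be sketched as the following statement sequence. The dump file name is hypothetical; `cat` keeps this a dry run, and you would pipe the statements to the `mysql` client to run them.

```shell
# Sketch: disable constraint checks around a bulk load (hypothetical dump file).
# `cat` keeps this a dry run; pipe the statements to the `mysql` client to run them.
cat <<'SQL'
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
SOURCE /tmp/dump.sql;
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;
SQL
```

After the load completes and the checks are re-enabled, verify the data to ensure referential integrity, as noted above.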
+
+## Import and export data by using MySQL Workbench
+There are two ways to export and import data in MySQL Workbench: from the object browser context menu or from the Navigator pane. Each method serves a different purpose.
+
+> [!NOTE]
+> If you're adding a connection to MySQL Single Server or Flexible Server (Preview) in MySQL Workbench, do the following:
+> - For MySQL Single Server, make sure that the user name is in the format *\<username@servername>*.
+> - For MySQL Flexible Server, use *\<username>* only. If you use *\<username@servername>* to connect, the connection will fail.
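
The user-name formats in the note above also apply to command-line clients. A sketch with hypothetical server and user names, echoed as dry runs:

```shell
# Hypothetical server and user names; commands echoed as dry runs.

# Single Server: the user name must include @servername
echo mysql -h mysingle.mysql.database.azure.com -u 'myuser@mysingle' -p

# Flexible Server: use the bare user name only
echo mysql -h myflex.mysql.database.azure.com -u 'myuser' -p
```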
+
+### Run the table data export and import wizards from the object browser context menu
++
+The table data wizards support import and export operations by using CSV and JSON files. The wizards include several configuration options, such as separators, column selection, and encoding selection. You can run each wizard against local or remotely connected MySQL servers. The import action includes table, column, and type mapping.
+
+To access these wizards from the object browser context menu, right-click a table, and then select **Table Data Export Wizard** or **Table Data Import Wizard**.
+
+#### The table data export wizard
+
+To export a table to a CSV file:
+
+1. Right-click the table of the database to be exported.
+1. Select **Table Data Export Wizard**. Select the columns to be exported, row offset (if any), and count (if any).
+1. On the **Select data for export** pane, select **Next**. Select the file path, CSV, or JSON file type. Also select the line separator, method of enclosing strings, and field separator.
+1. On the **Select output file location** pane, select **Next**.
+1. On the **Export data** pane, select **Next**.
+
+#### The table data import wizard
+
+To import a table from a CSV file:
+
+1. Right-click the table of the database to be imported.
+1. Look for and select the CSV file to be imported, and then select **Next**.
+1. Select the destination table (new or existing), select or clear the **Truncate table before import** check box, and then select **Next**.
+1. Select the encoding and the columns to be imported, and then select **Next**.
+1. On the **Import data** pane, select **Next**. The wizard imports the data.
+
+### Run the SQL data export and import wizards from the Navigator pane
+Use a wizard to export or import SQL data that's generated from MySQL Workbench or from the mysqldump command. You can access the wizards from the **Navigator** pane or you can select **Server** from the main menu.
+
+#### Export data
++
+You can use the **Data Export** pane to export your MySQL data.
+
+1. In MySQL Workbench, on the **Navigator** pane, select **Data Export**.
+
+1. On the **Data Export** pane, select each schema that you want to export.
+
+ For each schema, you can select specific schema objects or tables to export. Configuration options include export to a project folder or a self-contained SQL file, dump stored routines and events, or skip table data.
+
+ Alternatively, use **Export a Result Set** to export a specific result set in the SQL editor to another format, such as CSV, JSON, HTML, and XML.
+
+1. Select the database objects to export, and configure the related options.
+1. Select **Refresh** to load the current objects.
+1. Optionally, select **Advanced Options** at the upper right to refine the export operation. For example, add table locks, use `replace` instead of `insert` statements, and quote identifiers with backtick characters.
+1. Select **Start Export** to begin the export process.
++
+#### Import data
++
+You can use the **Data Import** pane to import or restore exported data from the data export operation or from the mysqldump command.
+
+1. In MySQL Workbench, on the **Navigator** pane, select **Data Import/Restore**.
+1. Select the project folder or self-contained SQL file, select the schema to import into, or select the **New** button to define a new schema.
+1. Select **Start Import** to begin the import process.
+
+## Next steps
+- For another migration approach, see [Migrate your MySQL database to an Azure database for MySQL by using dump and restore](concepts-migrate-dump-restore.md).
+- For more information about migrating databases to an Azure database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/single-server-whats-new.md
Previously updated : 05/05/2021 Last updated : 06/17/2021

# What's new in Azure Database for MySQL - Single Server?
Azure Database for MySQL is a relational database service in the Microsoft cloud
This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## June 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- **Enabled the ability to change the server parameter `activate_all_roles_on_login` from Portal/CLI for MySQL 8.0**
+
+  Users can now change the value of the `activate_all_roles_on_login` parameter by using the Azure portal and the Azure CLI. This parameter configures whether all granted roles are automatically activated when users sign in to the server. For more information, see [Server System Variables](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html).
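
As a sketch, the same change can be made from the Azure CLI with `az mysql server configuration set`. The resource group and server names below are hypothetical, and the command is echoed as a dry run; remove `echo` to execute it against a real Single Server instance.

```shell
# Hypothetical resource group and server names; echoed as a dry run.
echo az mysql server configuration set \
    --resource-group myresourcegroup \
    --server-name mydemoserver \
    --name activate_all_roles_on_login \
    --value ON
```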
+
+- **Enabled the parameter `redirect_enabled` by default**
+
+ With this release, the parameter `redirect_enabled` will be enabled by default. Redirection aims to reduce network latency between client applications and MySQL servers by allowing applications to connect directly to backend server nodes. Support for redirection in PHP applications is available through the [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension, developed by Microsoft. For more information, see the article [Connect to Azure Database for MySQL with redirection](howto-redirection.md).
+
+- **Addressed MySQL Community Bugs #29596969 and #94668**
+
+  This release addresses an issue with the default expression being ignored in a CREATE TABLE query if the field was marked as PRIMARY KEY for MySQL 8.0 (MySQL Community Bug #29596969, Bug #94668). For more information, see [MySQL Bugs: #94668: Expression Default is made NULL during CREATE TABLE query, if field is made PK](https://bugs.mysql.com/bug.php?id=94668).
+
+- **Addressed an issue with duplicate table names in "SHOW TABLE" query**
+
+  We've introduced a new function to give fine-grained control of the table cache during table operations. Because of a code defect in this new feature, an entry in the directory cache might be misconfigured or added incorrectly, causing unexpected behavior such as returning two tables with the same name. The directory cache works only for "SHOW TABLE"-related queries; it doesn't impact any DML or DDL queries. This issue is completely resolved in this release.
+
+- **Increased the default value for the server parameter `max_heap_table_size` to help reduce temp table spills to disk**
+
+  With this release, the maximum allowed value for the parameter `max_heap_table_size` has been changed to 8589934592 (8 GB) for General Purpose 64 vCore and Memory Optimized 32 vCore.
+
+- **Addressed an issue with setting the value of the parameter `sql_require_primary_key` from the portal**
+
+ Users can now modify the value of the parameter `sql_require_primary_key` directly from the Azure portal.
+
+- **General Availability of planned maintenance notification**
+
+ This release provides General Availability of planned maintenance notifications in Azure Database for MySQL - Single Server. For more information, see the article [Planned maintenance notification](concepts-planned-maintenance-notification.md).
+
## February 2021

This release of Azure Database for MySQL - Single Server includes the following updates.
network-function-manager Create Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/create-device.md
+
+ Title: 'Tutorial: Create a device resource for Azure Network Function Manager'
+description: In this tutorial, learn about how to create a device resource for Azure Network Function Manager.
+ Last updated : 06/16/2021
+# Tutorial: Create a Network Function Manager Device resource (Preview)
+
+In this tutorial, you create a **Device** resource for Azure Network Function Manager (NFM). The Network Function Manager device resource is linked to the Azure Stack Edge resource. The device resource aggregates all the network functions deployed on the Azure Stack Edge device and provides common services for deployment, monitoring, troubleshooting, and consistent management operations for all of them. You must create the Network Function Manager device resource before you can deploy network functions to the Azure Stack Edge device.
+
+In this tutorial, you:
+
+> [!div class="checklist"]
+> * Verify the prerequisites
+> * Create a device resource
+> * Obtain a registration key
+> * Register the device
+
+## <a name="pre"></a>Prerequisites
+
+Verify the following prerequisites:
+
+* You have completed all the prerequisites listed in the [Overview](overview.md#prereq) article.
+* You have the proper permissions assigned. For more information, see [Resource Provider registration and permissions](overview.md#permissions).
+* Review the [Region Availability](overview.md#regions) section before creating a Device resource.
+* Verify that you can remotely connect from a Windows client to the Azure Stack Edge Pro GPU device via PowerShell. For more information, see [Connect to the PowerShell interface](../databox-online/azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface).
+
+## <a name="create"></a>Create a device resource
+
+If you have an existing Azure Network Function Manager device resource, you don't need to create one. Skip this section and go to the [registration key](#key) section.
+
+To create a **device** resource, use the following steps.
+
+1. Sign in to the Azure [Preview portal](https://aka.ms/AzureNetworkFunctionManager) using your Microsoft Azure credentials.
+
+1. On the **Basics** tab, configure **Project details** and **Instance details** with the device settings.
+ :::image type="content" source="./media/create-device/device-settings.png" alt-text="Screenshot of device settings." lightbox="./media/create-device/device-settings.png":::
+
+    When you fill in the fields, a green check mark appears as the characters you enter are validated. Some details are auto-filled, while others are customizable fields:
+
+ * **Subscription:** Verify that the subscription listed is the correct one. You can change subscriptions by using the drop-down.
+ * **Resource group:** Select an existing resource group or click **Create new** to create a new one.
+ * **Region:** Select the location for your device. This must be a supported region. For more information, see [Region Availability](overview.md#regions).
+ * **Name:** Enter the name for your device.
+    * **Azure Stack Edge:** Select the Azure Stack Edge resource that you want to register as a device for Azure Network Function Manager. The ASE resource must be in **Online** status for a device resource to be successfully created. You can check the status of your ASE resource by going to the **Properties** section of the Azure Stack Edge resource page.
+1. Select **Review + create** to validate the device settings.
+1. After all the settings have been validated, select **Create**.
+
+ >[!NOTE]
+ >The Network Function Manager device resource can be linked to only one Azure Stack Edge resource. You will be required to create a separate NFM device resource for each Azure Stack Edge resource.
+ >
+
+## <a name="key"></a>Get the registration key
+
+1. Once your device is successfully provisioned, navigate to the resource group in which the device resource is deployed.
+1. Click the **device** resource. To obtain the registration key, click **Get registration key**. Ensure you have the proper permissions to generate a registration key. For more information, see [permissions](overview.md#permissions).
+
+ :::image type="content" source="./media/create-device/register-device.png" alt-text="Screenshot of registration key." lightbox="./media/create-device/register-device.png":::
+1. Make a note of the device registration key, which will be used in the next steps.
+
+ > [!IMPORTANT]
+ > The registration key expires 24 hours after it is generated. If your registration key is no longer active, you can generate a new registration key by repeating the steps in this section.
+ >
+
+## <a name="registration"></a>Register the device
+
+Follow these steps to register the device resource with Azure Stack Edge.
+
+1. To register the device with the registration key obtained in the previous step, connect to the Azure Stack Edge device via Windows PowerShell.
+1. Once you have a PowerShell (minishell) connection to the appliance, enter the following command with the device registration key you obtained from the previous step as the parameter:
+
+ ```powershell
+ Invoke-MecRegister <device-registration-key>
+ ```
+
+1. Verify that the device resource has **Device Status = registered**.
+
+ :::image type="content" source="./media/create-device/device-registered.png" alt-text="Screenshot of device registered." lightbox="./media/create-device/device-registered.png":::
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy a network function](deploy-functions.md)
network-function-manager Deploy Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/deploy-functions.md
+
+ Title: 'Tutorial: Deploy network functions on Azure Stack Edge'
+
+description: In this tutorial, learn how to deploy a network function as a managed application.
+ Last updated : 06/16/2021
+# Tutorial: Deploy network functions on Azure Stack Edge (Preview)
+
+In this tutorial, you learn how to deploy a network function on Azure Stack Edge using the Azure Marketplace. Network Function Manager enables an Azure Managed Applications experience for a simplified deployment on Azure Stack Edge.
+
+> [!div class="checklist"]
+> * Verify [prerequisites](overview.md#prereq)
+> * Create a network function
+> * Verify network function details
+
+## Prerequisites
+
+* You have created a device resource for Network Function Manager. If you have not completed those steps, see [How to create a device resource](create-device.md).
+* On the **Overview** tab for the device, verify the following values are present:
+ * Provisioning State = Succeeded
+ * Device Status = Registered
+
+## <a name="create"></a>Create a network function
+
+1. Sign in to the [Azure Preview portal](https://aka.ms/AzureNetworkFunctionManager).
+1. Navigate to the **Device** resource in which you want to deploy a network function and select **+Create Network Function**.
+
+ :::image type="content" source="./media/deploy-functions/create-network-function.png" alt-text="Screenshot of +Create Network Function." lightbox="./media/deploy-functions/create-network-function.png":::
+1. From the dropdown, choose the **Vendor SKU** you want to use, then click **Create**.
+
+ :::image type="content" source="./media/deploy-functions/select.png" alt-text="Screenshot of vendor SKU." lightbox="./media/deploy-functions/select.png":::
+1. Depending on the selected SKU, you will be redirected to the Marketplace portal for the network function managed application.
+
+    Every network function partner has different requirements for deploying their network function on Azure Stack Edge. Additionally, some network functions, such as mobile packet core and SD-WAN edge, may require you to configure management, LAN, and WAN ports and allocate IP addresses on those ports before you deploy the network functions. Check with your partner on the required properties and Azure Stack Edge device network configuration.
+
+ > [!IMPORTANT]
+    > For all network functions that support static IP addresses for management, LAN, or WAN virtual network interfaces, ensure that you don't use the first four IP addresses from the IP address range assigned to the specific port. These IP addresses are reserved for the Azure Stack Edge service.
+ >
+
+1. Follow the steps in the marketplace portal to deploy the partner-managed application. Once the managed application is successfully provisioned, you can go to the resource group to view the managed app. To confirm that the network function deployed successfully, go to the managed resource group and check that the vendor provisioning status of the network function resource is **Provisioned**. Any additional required configuration can then be provisioned through the network function partner's management portal. Check with the network function partner for the management experience following initial deployment using Network Function Manager.
+
+### Example
+
+1. Find the **Fusion Core - 5G packet core** offer in Marketplace and click **Create** to begin creating your network function.
+
+ :::image type="content" source="./media/deploy-functions/metaswitch.png" alt-text="Screenshot of Metaswitch page." lightbox="./media/deploy-functions/metaswitch.png":::
+1. Configure Basic settings.
+
+ :::image type="content" source="./media/deploy-functions/basics-blade.png" alt-text="Screenshot Basic settings." lightbox="./media/deploy-functions/basics-blade.png":::
+1. Apply managed identity. For more information, see [Managed Identity](overview.md#managed-identity).
+
+ :::image type="content" source="./media/deploy-functions/managed-identity.png" alt-text="Screenshot of Managed Identity." lightbox="./media/deploy-functions/managed-identity.png":::
+1. Enter IP Address information for Management, LAN, and WAN interfaces of the Fusion Core VM.
+
+ :::image type="content" source="./media/deploy-functions/ip-settings.png" alt-text="Screenshot of Management, LAN, and WAN interfaces of the Fusion Core VM." lightbox="./media/deploy-functions/ip-settings.png":::
+1. Enter optional settings for 5G and Test UEs configuration.
+
+ :::image type="content" source="./media/deploy-functions/5g-settings-blade.png" alt-text="Screenshot of 5G." lightbox="./media/deploy-functions/5g-settings-blade.png":::
+
+ :::image type="content" source="./media/deploy-functions/test-blade.png" alt-text="Screenshot of Test UEs configuration." lightbox="./media/deploy-functions/test-blade.png":::
+1. Once validation has passed, agree to the terms and conditions. Then click **Create** to begin creating the Fusion Core Managed Application in the Customer resource group and the Network Function resource in the managed resource group. You must check the **Show Hidden Types** box in the managed resource group view to see the Network Function resource.
+
+ :::image type="content" source="./media/deploy-functions/managed-app-customer.png" alt-text="Screenshot of Managed App in the Customer resource group." lightbox="./media/deploy-functions/managed-app-customer.png":::
+
+ :::image type="content" source="./media/deploy-functions/managed-resource-group.png" alt-text="Screenshot of the network function in the managed resource group." lightbox="./media/deploy-functions/managed-resource-group.png":::
+
+## Next steps
+
+Navigate to the vendor portal to finish configuring the network function.
network-function-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/faq.md
+
+ Title: Network Function Manager FAQ
+
+description: Learn FAQs about Network Function Manager.
+ Last updated : 06/16/2021
+# Azure Network Function Manager FAQ (Preview)
+
+## FAQs
+
+### I am a network function partner and want to onboard to Network Function Manager. How do I offer my network function with NFM?
+
+Our goal is to provide customers a rich ecosystem of their choice of network functions available on Azure Stack Edge. Email us at **aznfmpartner@microsoft.com** to discuss onboarding requirements.
+
+### Does Network Function Manager preview support other Azure edge devices in addition to Azure Stack Edge Pro with GPU?
+
+The NFM preview is currently available on Azure Stack Edge Pro with GPU, which is generally available. ASE Pro is a hardware-as-a-service offering engineered to run specialized network functions, such as mobile packet core and SD-WAN edge. The device is equipped with six physical ports, with network acceleration support on ports 5 and 6. Check the [Network interface specifications](../databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md#network-interface-specifications) for the Azure Stack Edge Pro with GPU device. Network function partners can take advantage of SR-IOV and DPDK capabilities to deliver superior network performance for their network functions.
+
+### What additional capabilities are available on Azure Stack Edge Pro with GPU in addition to running network functions?
+
+Azure Stack Edge (ASE) Pro with GPU and Azure Network Function Manager are a part of the [Azure private MEC](https://go.microsoft.com/fwlink/?linkid=2165316) solution. You can now run a private mobile network and VM or container-based edge application on your ASE device. This lets you build innovative solutions that provide predictable SLAs to your critical business applications. Azure Stack Edge Pro is also equipped with one or two [GPUs](../databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md#compute-acceleration-specifications) that let you take advantage of scenarios such as video inferencing and machine learning at the edge.
+
+### What is the pricing for Network Function Manager preview?
+
+Azure Network Function Manager preview is offered at no additional cost on your Azure Stack Edge Pro device. Network function partners may have a separate charge for offering their network functions with NFM service. Check with your network function partner for pricing details.
+
+### If my Azure Stack Edge Pro device is in a disconnected mode or partially connected mode, will the network functions already deployed stop working?
+
+The Network Function Manager service requires network connectivity to the ASE device for management operations: creating or deleting network functions, and monitoring and troubleshooting the network functions running on your device. If a network function is already deployed on the ASE device and the device becomes disconnected or partially connected, the underlying network function virtual machines should continue to operate without any interruption. Network functions deployed on these virtual machines might have different requirements based on their configuration management, as well as additional network connectivity requirements from network function partners. Check with your partner for the network connectivity requirements and modes of operation.
+
+### Which regions are supported for Preview? Will you add support for additional Azure regions?
+
+During Preview, Network Function Manager is available in the following regions:
++
+Azure Stack Edge Pro with GPU is available in several countries to deploy and run your choice of network functions. For a list of all the countries/regions where the Azure Stack Edge Pro GPU device is available, go to the [Azure Stack Edge Pro GPU pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro) page. On the **Azure Stack Edge Pro** tab, see the **Availability** section.
+
+During preview, you can register the Azure Stack Edge device and Network Function Manager resources based on your regulatory and data sovereignty requirements. The Azure region associated with Network Function Manager resources is used to guide the management operations from the cloud service to the physical device.
+
+### When I delete the managed application for my network function running on Azure Stack Edge, will the billing for network functions automatically stop?
+
+Check with your network function partner on the billing cycle for network functions deployed using Network Function Manager. Each partner will have a different billing policy for their network function offerings.
+
+### When should I use the cloud-managed Virtual Machines feature to manage VMs on my Azure Stack Edge?
+
+If you need general-purpose VMs for your apps to run along with network functions, you can use the cloud-managed VMs feature for Azure Stack Edge. For more information, see [Deploy VMs on your Azure Stack Edge Pro GPU device](../databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md).
+
+## Next steps
+
+For more information, see the [Overview](overview.md).
network-function-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/overview.md
+
+ Title: About Azure Network Function Manager
+description: Learn about Azure Network Function Manager, a fully managed cloud-native orchestration service that lets you deploy and provision network functions on Azure Stack Edge Pro with GPU for a consistent hybrid experience using the Azure portal.
+ Last updated : 06/16/2021
+# What is Azure Network Function Manager? (Preview)
++
+Azure Network Function Manager offers an [Azure Marketplace](https://azure.microsoft.com/marketplace/) experience for deploying network functions such as mobile packet core, SD-WAN edge, and VPN services to your [Azure Stack Edge device](https://azure.microsoft.com/products/azure-stack/edge/) running in your on-premises environment. You can now rapidly deploy a private mobile network service or SD-WAN solution on your edge device directly from the Azure management portal. Network Function Manager brings network functions from a growing ecosystem of [partners](#partners). This preview is supported on [Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-overview.md).
+
+## <a name="preview"></a>Preview features
+
+* **Consistent management experience -** Network Function Manager provides a consistent Azure management experience for network functions from different partners deployed at your enterprise edge. This lets you simplify governance and management. You can use your familiar Azure tools and SDK to automate the deployment of network functions through declarative templates. You can also apply Azure role-based access control [Azure RBAC](../role-based-access-control/overview.md) for a global deployment of network functions on your Azure Stack Edge devices.
+
+* **Azure Marketplace experience for 5G network functions -** Accelerate the deployment of a private mobile network solution on your Azure Stack Edge device by selecting your choice of LTE and 5G mobile packet core network functions directly from Azure Marketplace.
+
+* **Seamless cloud-to-edge experience for SD-WAN and VPN solutions -** Network Function Manager extends the Azure management experience for Marketplace network functions that you are familiar with in the public cloud to your edge device. This lets you take advantage of a consistent deployment experience for your choice of SD-WAN and VPN partner network functions deployed in the Azure public cloud or Azure Stack Edge device.
+
+* **Azure Managed Applications experience for network functions on enterprise edge -** Network Function Manager enables a simplified deployment experience for specialized network functions, such as mobile packet core, on your Azure Stack Edge device. We have prevalidated the lifecycle management for approved network function images from partners. You can have confidence that your network function resources are deployed in a consistent state across your entire fleet. For more information, see [Azure Managed Applications](../azure-resource-manager/managed-applications/overview.md).
+
+* **Network acceleration and choice of dynamic or static IP allocation for network functions –** Network Function Manager and [Azure Stack Edge Pro](../databox-online/azure-stack-edge-gpu-overview.md) support improved network performance for network function workloads. Specialized network functions, such as mobile packet core, can now be deployed on the Azure Stack Edge device with faster data path performance on the access point network and user plane network. You can also choose from static or dynamic IP allocation for different virtual interfaces for your network function deployment. Check with your network function partner on support for these networking capabilities.
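+
+The declarative-template automation mentioned in the first bullet uses standard Azure Resource Manager tooling. A minimal sketch only; the resource group, template file, and parameter names below are hypothetical placeholders supplied by your partner's template, not names defined by this service:
+
+```azurecli-interactive
+# Deploy a partner-provided network function template into a resource group.
+az deployment group create \
+  --resource-group my-nfm-rg \
+  --template-file nf-template.json \
+  --parameters deviceName=my-ase-device
+```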
+
+## <a name="managed"></a>Azure Managed Applications for network functions
+
+The network functions that are available to be deployed using Network Function Manager are engineered to specifically run on your Azure Stack Edge device. The network function offer is published to Azure Marketplace as an [Azure Managed Application](../azure-resource-manager/managed-applications/overview.md). Customers can deploy the offer directly from [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/), or from the Network Function Manager device resource via the Azure portal.
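+
+If you prefer the CLI to the portal, a Marketplace managed application can also be created with `az managedapp create`. This is a hedged sketch: the plan values come from the partner's Marketplace offer, and every name below is a placeholder:
+
+```azurecli-interactive
+# Create the managed application from the partner's Marketplace plan.
+# The managed resource group (--managed-rg-id) is controlled by the publisher.
+az managedapp create \
+  --name my-network-function \
+  --resource-group my-customer-rg \
+  --location eastus \
+  --kind MarketPlace \
+  --managed-rg-id /subscriptions/<subscription-id>/resourceGroups/my-managed-rg \
+  --plan-name <plan-name> \
+  --plan-product <offer-id> \
+  --plan-publisher <publisher-id> \
+  --plan-version <plan-version>
+```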
++
+All network function offerings that are available to be deployed using Network Function Manager have a managed application that is available in Azure Marketplace. Managed applications allow partners to:
+
+* Build a custom deployment experience for their network function with the Azure portal experience.
+
+* Provide a specialized Resource Manager template that allows them to create the network function directly in the Azure Stack Edge device.
+
+* Bill software licensing costs directly, or through Azure Marketplace.
+
+* Expose custom properties and resource meters.
+
+Network function partners may create different resources, depending on their appliance deployment, configuration, licensing, and management needs. As is the case with all Azure Managed Applications, when a customer creates a network function on the Azure Stack Edge device, two resource groups are created in their subscription:
+
+* **Customer resource group –** This resource group contains an application placeholder for the managed application. Partners can use it to expose whatever customer properties they choose.
+
+* **Managed resource group –** You can't configure or change resources in this resource group directly, as this is controlled by the publisher of the managed application. This resource group will contain the Network Function Manager **network functions** resource.
++
+## <a name="configuration"></a>Network function configuration process
+
+Network function partners that offer their Azure managed applications with Network Function Manager provide an experience that configures the network function automatically as part of the deployment process. After the managed application deployment succeeds and the network function instance is provisioned on the Azure Stack Edge device, any additional configuration the network function requires must be done via the network function partner's management portal. Check with your network function partner for the end-to-end management experience for network functions deployed on the Azure Stack Edge device.
+
+## <a name="prereq"></a>Prerequisites
+
+### <a name="edge-pro"></a>Azure Stack Edge Pro with GPU installed and activated
+
+The Azure Network Function Manager service is enabled on the Azure Stack Edge Pro device. Before you deploy network functions, confirm that the Azure Stack Edge Pro device is installed and activated. The Azure Stack Edge resource must be deployed in a region that is supported by Network Function Manager resources. For more information, see [Region availability](#regions). Be sure to follow all the steps in the Azure Stack Edge Pro [Quickstarts](../databox-online/azure-stack-edge-gpu-quickstart.md) and [Tutorials](../databox-online/azure-stack-edge-gpu-deploy-checklist.md).
+
+You should also verify that the device **Status**, located in the properties section for the Azure Stack Edge resource in the Azure management portal, is **Online**.
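+
+You can check the same status from the command line with a generic resource query. The resource names below are placeholders, and the status property is assumed to be `dataBoxEdgeDeviceStatus` on the Data Box Edge resource:
+
+```azurecli-interactive
+# Show the device status; it should report "Online" before you deploy network functions.
+az resource show \
+  --resource-group my-ase-rg \
+  --name my-ase-device \
+  --resource-type Microsoft.DataBoxEdge/dataBoxEdgeDevices \
+  --query properties.dataBoxEdgeDeviceStatus -o tsv
+```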
++
+### <a name="partner-prereq"></a>Partner prerequisites
+
+Customers can choose from one or more Network Function Manager [partners](#partners) to deploy their network function on an Azure Stack Edge device. Each partner has networking requirements for deployment of their network function to an Azure Stack Edge device. Refer to the product documentation from the network function partners to complete the following configuration tasks:
+
+* [Configure network on different ports](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md).
+* [Enable compute network on your Azure Stack Edge device](../databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#enable-compute-network).
++
+### <a name="account"></a>Azure account
+
+The Azure Network Function Manager service consists of Network Function Manager **Device** and Network Function Manager **Network Function** resources, both of which live in an Azure subscription. The Azure subscription ID used to activate the Azure Stack Edge Pro device must be the same one used for the Network Function Manager resources.
+
+Onboard your Azure subscription ID for the Network Function Manager preview by completing the [Network Function Manager Preview Form](https://go.microsoft.com/fwlink/?linkid=2163583). For preview, Azure Network Function Manager partners must enable the same Azure subscription ID to deploy their network function from Azure Marketplace. Ensure that your Azure subscription ID is onboarded for preview in both places.
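+
+To confirm which subscription ID your CLI session is using, and therefore which ID to onboard, you can run:
+
+```azurecli-interactive
+# Print the active subscription ID; it must match the one used to activate the device.
+az account show --query id -o tsv
+```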
+
+## <a name="permissions"></a>Resource provider registration and permissions
+
+Azure Network Function Manager resources belong to the Microsoft.HybridNetwork resource provider. After your Azure subscription ID is onboarded for preview with the Network Function Manager service, register the subscription ID with the Microsoft.HybridNetwork resource provider. For more information on how to register, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md).
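+
+For example, with the Azure CLI:
+
+```azurecli-interactive
+# Register the resource provider, then confirm that it reports "Registered".
+az provider register --namespace Microsoft.HybridNetwork
+az provider show --namespace Microsoft.HybridNetwork --query registrationState -o tsv
+```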
+
+The accounts you use to create the Network Function Manager device resource must be assigned a custom role that includes the actions in the following table. For more information, see [Custom roles](../role-based-access-control/custom-roles.md).
+
+| Name | Action|
+|||
+| Microsoft.DataBoxEdge/dataBoxEdgeDevices/read|Required to read the Azure Stack Edge resource on which network functions will be deployed. |
+|Microsoft.DataBoxEdge/dataBoxEdgeDevices/getExtendedInformation/action |Required to read the properties section of the Azure Stack Edge resource. |
+|Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/write |Required to create the Network Function Manager device resource on Azure Stack Edge resource.|
+| Microsoft.HybridNetwork/devices/* | Required to create, update, and delete the Network Function Manager device resource. |
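+
+The actions in the table can be combined into a single custom role definition. A sketch only; the role name, subscription ID, and file name are placeholders:
+
+```json
+{
+  "Name": "NFM Device Contributor (custom)",
+  "Description": "Create and manage Network Function Manager device resources.",
+  "Actions": [
+    "Microsoft.DataBoxEdge/dataBoxEdgeDevices/read",
+    "Microsoft.DataBoxEdge/dataBoxEdgeDevices/getExtendedInformation/action",
+    "Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/write",
+    "Microsoft.HybridNetwork/devices/*"
+  ],
+  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
+}
+```
+
+Save the definition to a file (for example, `nfm-device-role.json`) and create it with `az role definition create --role-definition @nfm-device-role.json`.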
+
+The accounts you use to create the Azure managed application resource must be assigned the role in the following table:
+
+|Name |Action |
+|||
+|[Managed Application Contributor Role](../role-based-access-control/built-in-roles.md#managed-application-contributor-role)|Required to create Managed app resources.|
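+
+Because this is a built-in role, no custom definition is required. The assignment can be sketched as follows, where the assignee and scope are placeholders:
+
+```azurecli-interactive
+# Grant the built-in role at the resource group that will hold the managed application.
+az role assignment create \
+  --assignee user@contoso.com \
+  --role "Managed Application Contributor Role" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/my-customer-rg
+```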
+
+## <a name="managed-identity"></a>Managed Identity
+
+Network function partners that offer their Azure managed applications with Network Function Manager provide an experience that allows you to deploy a managed application that is attached to an existing Network Function Manager device resource. When you deploy the partner managed application in the Azure portal, you are required to provide an Azure user-assigned managed identity resource that has access to the Network Function Manager device resource. The user-assigned managed identity gives the managed application resource provider and the publisher of the network function the appropriate permissions to the Network Function Manager device resource, which is deployed outside the managed resource group. For more information, see [Manage a user-assigned managed identity in the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
+
+To create a user-assigned managed identity for deploying network functions:
+
+1. Create a user-assigned managed identity and assign it to a custom role with permissions for Microsoft.HybridNetwork/devices/join/action. For more information, see [Manage a user-assigned managed identity in the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
+
+1. Provide this managed identity when creating a partner's managed application in the Azure portal. For more information, see [Assign a managed identity access to a resource using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
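+
+The two steps above can be sketched with the CLI as follows. The identity, role, and resource names are placeholders, and the custom role carrying `Microsoft.HybridNetwork/devices/join/action` is assumed to exist already:
+
+```azurecli-interactive
+# 1. Create the user-assigned managed identity.
+az identity create --resource-group my-nfm-rg --name nfm-deploy-identity
+
+# 2. Assign the custom role to the identity, scoped to the NFM device resource.
+PRINCIPAL_ID=$(az identity show --resource-group my-nfm-rg \
+  --name nfm-deploy-identity --query principalId -o tsv)
+az role assignment create \
+  --assignee-object-id $PRINCIPAL_ID \
+  --assignee-principal-type ServicePrincipal \
+  --role "NFM Device Join (custom)" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/my-nfm-rg/providers/Microsoft.HybridNetwork/devices/my-nfm-device"
+```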
++
+## <a name="port-firewall"></a>Port requirements and firewall rules
+
+Network Function Manager (NFM) services running on the Azure Stack Edge require outbound connectivity to the NFM cloud service for management traffic to deploy network functions. NFM is fully integrated with the Azure Stack Edge service. Review the networking port requirements and firewall rules for the [Azure Stack Edge](../databox-online/azure-stack-edge-gpu-system-requirements.md#networking-port-requirements) device.
+
+Network Function partners will have different requirements for firewall and port configuration rules to manage traffic to the partner management portal. Check with your network function partner for specific requirements.
+
+## <a name="regions"></a>Region availability
+
+The Azure Stack Edge resource, Azure Network Function Manager device, and Azure managed applications for network functions should be in the same Azure region. The Azure Stack Edge Pro GPU physical device does not have to be in the same region.
+
+* **Resource availability -** For preview, the Network Function Manager resources are available in the following regions:
+
+ [!INCLUDE [Preview- available regions](../../includes/network-function-manager-regions-include.md)]
+
+* **Device availability -** For a list of all the countries/regions where the Azure Stack Edge Pro GPU device is available, go to the [Azure Stack Edge Pro GPU pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro) page. On the **Azure Stack Edge Pro** tab, see the **Availability** section.
+
+With the current release, Network Function Manager is a regional service. In a region-wide outage, management operations on Network Function Manager resources are impacted, but network functions already running on the Azure Stack Edge device are not.
+
+## <a name="partners"></a>Partner solutions
+
+See the Network Function Manager [partners page](partners.md) for a growing ecosystem of partners offering their Marketplace managed applications for private mobile network, SD-WAN, and VPN solutions.
+
+## <a name="faq"></a>FAQ
+
+For the FAQ, see the [Network Function Manager FAQ](faq.md).
+
+## Next steps
+
+[Create a device resource](create-device.md).
network-function-manager Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-function-manager/partners.md
+
+ Title: Azure Network Function Manager partners
+description: Learn about partners offering their network functions for use with this service.
++++ Last updated : 06/16/2021+++
+# Network Function Manager partners (Preview)
+
+We have a growing ecosystem of partners offering their network function as managed applications for use with this service.
+
+## <a name="devices"></a>Devices and configuration links
+
+During preview, the following network function offers are currently available.
+
+|Function |Category|Link|
+| | | |
+| Affirmed Private Network Service | Mobile packet core |[Configuration guide](https://go.microsoft.com/fwlink/?linkid=2165526)|
+| Celona Edge |Mobile packet core |[Azure Marketplace](https://ms.portal.azure.com/)|
+| Metaswitch Fusion Core | Mobile packet core | [Configuration guide](https://go.microsoft.com/fwlink/?linkid=2165525)|
+| NetFoundry ZTNA | Other| [Azure Marketplace](https://ms.portal.azure.com/)|
+| Nuage Networks SD-WAN From Nokia | SD-WAN| [Azure Marketplace](https://ms.portal.azure.com/)|
+| Versa SD-WAN| SD-WAN |[Azure Marketplace](https://ms.portal.azure.com/)|
+| VMware SD-WAN by Velocloud | SD-WAN | [Azure Marketplace](https://ms.portal.azure.com/)|
+
+## Next steps
+
+* [Create a device resource](create-device.md).
+* [Deploy a network function](deploy-functions.md).
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/data-residency.md
# Data residency for Azure Network Watcher
-With the exception of the Connection Monitor (Preview) service, Azure Network Watcher doesn't store customer data.
+With the exception of the Connection Monitor service, Azure Network Watcher doesn't store customer data.
-## Connection Monitor (Preview) data residency
-The Connection Monitor (Preview) service stores customer data. This data is automatically stored by Network Watcher in a single region. So Connection Monitor (Preview) automatically satisfies in-region data residency requirements, including requirements specified on the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
+## Connection Monitor data residency
+The Connection Monitor service stores customer data. This data is automatically stored by Network Watcher in a single region. So Connection Monitor automatically satisfies in-region data residency requirements, including requirements specified on the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
## Data residency
In Azure, the feature that enables storing customer data in a single region is currently available only in the Southeast Asia Region (Singapore) of the Asia Pacific geo and Brazil South (Sao Paulo State) Region of the Brazil geo. For all other regions, customer data is stored in Geo. For more information, see the [Trust Center](https://azuredatacentermap.azurewebsites.net/).
## Next steps
-* Read an overview of [Network Watcher](./network-watcher-monitoring-overview.md).
+* Read an overview of [Network Watcher](./network-watcher-monitoring-overview.md).
networking Azure Network Latency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/azure-network-latency.md
Previously updated : 12/07/2020 Last updated : 06/08/2021
Azure continuously monitors the latency (speed) of core areas of its network usi
The latency measurements are collected from ThousandEyes agents, hosted in Azure cloud regions worldwide, that continuously send network probes between themselves in 1-minute intervals. The monthly latency statistics are derived from averaging the collected samples for the month.
-## December 2020 round-trip latency figures
+## May 2021 round-trip latency figures
-The monthly average round-trip times between Azure regions for past 30 days (ending on December 31, 2020) are shown below. The following measurements are powered by [ThousandEyes](https://thousandeyes.com).
+The monthly average round-trip times between Azure regions for past 31 days (ending on May 31, 2021) are shown below. The following measurements are powered by [ThousandEyes](https://thousandeyes.com).
[![Azure inter-region latency statistics](media/azure-network-latency/azure-network-latency.png)](media/azure-network-latency/azure-network-latency.png#lightbox)
networking Edge Zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/edge-zones-overview.md
- Title: About Azure Edge Zone Preview
-description: 'Learn about edge computing offerings from Microsoft: Azure Edge Zone.'
----- Previously updated : 01/13/2021----
-# About Azure Edge Zone Preview
-
-Azure Edge Zone is a family of offerings from Microsoft Azure that enables data processing close to the user. You can deploy VMs, containers, and other selected Azure services into Edge Zones to address the low latency and high throughput requirements of applications.
-
-Typical use case scenarios for Edge Zones include:
-
-- Real-time command and control in robotics.
-- Real-time analytics and inferencing via artificial intelligence and machine learning.
-- Machine vision.
-- Remote rendering for mixed reality and VDI scenarios.
-- Immersive multiplayer gaming.
-- Media streaming and content delivery.
-- Surveillance and security.
-
-There are three types of Azure Edge Zones:
-
-- Azure Edge Zones
-- Azure Edge Zones with Carrier
-- Azure Private Edge Zones
-
-## <a name="edge-zones"></a>Azure Edge Zones
-
-![Azure Edge Zones](./media/edge-zones-overview/edge-zones.png "Azure Edge Zones")
-
-Azure Edge Zones are small-footprint extensions of Azure placed in population centers that are far away from Azure regions. Azure Edge Zones support VMs, containers, and a selected set of Azure services that let you run latency-sensitive and throughput-intensive applications close to end users. Azure Edge Zones are part of the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run at the edge zone close to the user. Azure Edge Zones are owned and operated by Microsoft. You can use the same set of Azure tools and the same portal to manage and deploy services into Edge Zones.
-
-Typical use cases include:
-
-- Gaming and game streaming.
-- Media streaming and content delivery.
-- Real-time analytics and inferencing via artificial intelligence and machine learning.
-- Rendering for mixed reality.
-
-Azure Edge Zones will be available in the following metro areas:
-
-- New York, NY
-- Los Angeles, CA
-- Miami, FL
-
-[Contact the Edge Zone team](https://aka.ms/EdgeZones) for more information.
-
-## <a name="carrier"></a>Azure Edge Zones with Carrier
-
-![Edge Zones with Carrier](./media/edge-zones-overview/edge-carrier.png "Edge Zones with Carrier")
-
-Azure Edge Zones with Carrier are small-footprint extensions of Azure that are placed in mobile operators' datacenters in population centers. Azure Edge Zone with Carrier infrastructure is placed one hop away from the mobile operator's 5G network. This placement offers latency of less than 10 milliseconds to applications from mobile devices.
-
-Azure Edge Zones with Carrier are deployed in mobile operators' datacenters and connected to the Microsoft global network. They provide secure, reliable, high-bandwidth connectivity between applications that run close to the user. Developers can use the same set of familiar tools to build and deploy services into the Edge Zones.
-
-Typical use cases include:
-
-- Gaming and game streaming.
-- Media streaming and content delivery.
-- Real-time analytics and inferencing via artificial intelligence and machine learning.
-- Rendering for mixed reality.
-- Connected automobiles.
-- Tele-medicine.
-
-Edge Zones will be offered in partnership with the following operators:
-
-- AT&T (Atlanta, Dallas, and Los Angeles)
-
-ISVs working on optimized and scalable applications connected to 5G networks can now use the new Los Angeles preview location of Azure Edge Zones with AT&T when building and experimenting with ultra-low latency platforms, mobile and connected scenarios. Register for the early adopter program to take advantage of secure, high-bandwidth connectivity.
-
-[Contact the Edge Zone team](https://aka.ms/EdgeZones) for more information.
-
-## <a name="private-edge-zones"></a>Azure Private Edge Zones
-
-![Private Edge Zones](./media/edge-zones-overview/private-edge.png "Private Edge Zones")
-
-Azure Private Edge Zones are small-footprint extensions of Azure that are placed on-premises. Azure Private Edge Zone is based on the [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) platform. It enables low latency access to computing and storage services deployed on-premises. Private Edge Zone also lets you deploy applications from ISVs and virtualized network functions (VNFs) as [Azure managed applications](https://azure.microsoft.com/services/managed-applications/) along with virtual machines and containers on-premises. These VNFs can include mobile packet cores, routers, firewalls, and SD-WAN appliances. Azure Private Edge Zone comes with a cloud-native orchestration solution that lets you manage the lifecycles of VNFs and applications from the Azure portal.
-
-Azure Private Edge Zone lets you develop and deploy applications on-premises by using the same familiar tools that you use to build and deploy applications in Azure.
-
-It also lets you:
-
-- Run private mobile networks (private LTE, private 5G).
-- Implement security functions like firewalls.
-- Extend your on-premises networks across multiple branches and Azure by using SD-WAN appliances on the same Private Edge Zone appliances and manage them from Azure.
-
-Typical use cases include:
-
-- Real-time command and control in robotics.
-- Real-time analytics and inferencing with artificial intelligence and machine learning.
-- Machine vision.
-- Remote rendering for mixed reality and VDI scenarios.
-- Surveillance and security.
-
-We have a rich ecosystem of VNF vendors, ISVs, and MSP partners to enable end-to-end solutions that use Private Edge Zones. [Contact the Private Edge Zone team](https://aka.ms/EdgeZonesPartner) for more information.
-
-### <a name="private-edge-partners"></a>Private Edge Zone partners
-
-![Private Edge Zone partners](./media/edge-zones-overview/partners.png "Private Edge Zones partners")
-
-#### <a name="vnf"></a>Virtualized network functions (VNFs)
-
-##### <a name="vEPC"></a>Virtualized Evolved Packet Core (vEPC) for mobile networks
-
-- [Affirmed Networks](https://www.affirmednetworks.com/)
-- [Celona](https://www.celona.io/azure-edge)
-- [Druid Software](https://www.druidsoftware.com/)
-- [Expeto](https://www.expeto.io/)
-- [Mavenir](https://mavenir.com/)
-- [Metaswitch](https://www.metaswitch.com/)
-- [Nokia Digital Automation Cloud](https://www.dac.nokia.com/)
-
-##### <a name="mobile-radio"></a>Mobile radio partners
-
-- [Celona](https://www.celona.io/azure-edge)
-- [Commscope Ruckus](https://support.ruckuswireless.com/)
-
-##### <a name="sdwan-vendors"></a>SD-WAN vendors
-
-- [128 Technology](https://www.128technology.com/)
-- [NetFoundry](https://netfoundry.io/)
-- [Nuage Networks from Nokia](https://www.nuagenetworks.net/)
-- [Versa Networks](https://www.versa-networks.com/)
-- [VMware SD-WAN by Velocloud](https://www.velocloud.com/)
-
-##### <a name="router-vendors"></a>Router vendors
-
-- [Arista](https://www.arista.com/)
-
-##### <a name="firewall-vendors"></a>Firewall vendors
-
-- [Palo Alto Networks](https://www.paloaltonetworks.com/)
-
-##### <a name="msp-mobile"></a>Managed Solutions Providers: Mobile operators and Global System Integrators (GSIs)
-
-| GSIs and operators | Mobile operators |
-| | |
-| Amdocs | Etisalat |
-| American Tower | NTT Communications |
-| CenturyLink | Proximus |
-| Expeto | Rogers |
-| Federated Wireless | SK Telecom |
-| Infosys | Telefonica |
-| Tech Mahindra | Telstra |
-| | Vodafone |
-
-[Contact the Private Edge Zone team](https://aka.ms/EdgeZonesPartner) for information on how to become a partner.
-
-### <a name="solutions-private-edge"></a>Private Edge Zone solutions
-
-#### <a name="private-mobile-private-edge"></a>Private mobile network on Private Edge Zones
-
-![Private mobile network on Private Edge Zones](./media/edge-zones-overview/mobile-networks.png "Private mobile network on Private Edge Zones")
-
-You can now deploy a private mobile network on Private Edge Zones. Private mobile networks enable ultra-low latency, high capacity, and the reliable and secure wireless network that's required for business-critical applications.
-
-Private mobile networks can enable scenarios like:
-- Command and control of automated guided vehicles (AGVs) in warehouses.
-- Real-time communication between robots in smart factories.
-- Augmented reality and virtual reality edge applications.
-
-The virtualized evolved packet core (vEPC) network function is the brains of a private mobile network. You can now deploy a vEPC on Private Edge Zones. For a list of vEPC partners that are available on Private Edge Zones, see [vEPC ISVs](#vEPC).
-
-Deploying a private mobile network solution on Private Edge Zones requires other components, like mobile access points, SIM cards, and other VNFs like routers. Access to licensed or unlicensed spectrum is critical to setting up a private mobile network. And you might need help with RF planning, physical layout, installation, and support. For a list of partners, see [Mobile radio partners](#mobile-radio).
-
-Microsoft provides a partner ecosystem that can help with all aspects of this process. Partners can help with planning the network, purchasing the required devices, setting up hardware, and managing the configuration from Azure. A set of validated partners that are tightly integrated with Microsoft ensure your solution will be reliable and easy to use. You can focus on your core scenarios and rely on Microsoft and its partners to help with the rest.
-
-#### <a name="sdwan-private-edge"></a>SD-WAN on Private Edge Zones
-
-![SD-WAN on Private Edge Zones](./media/edge-zones-overview/sd-wan.png "SD-WAN on Private Edge Zones")
-
-SD-WAN lets you create enterprise-grade wide area networks (WANs) that have these benefits:
-
-- Increased bandwidth
-- High-performance access to the cloud
-- Service insertion
-- Reliability
-- Policy management
-- Extensive network visibility
-
-SD-WAN provides seamless branch office connectivity that's orchestrated from redundant central controllers at lower cost of ownership.
-SD-WAN on Private Edge Zones lets you move from a capex-centric model to a software-as-a-service (SaaS) model to reduce IT budgets. You can use your choice of SD-WAN partners, orchestrator or controller, to enable new services and propagate them throughout your entire network immediately.
-
-## Next steps
-
-For more information, contact the following teams:
-
-* [Edge Zone team](https://aka.ms/EdgeZones)
-* [Private Edge Zone team, to become a partner](https://aka.ms/EdgeZonesPartner)
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-restrict-egress.md
The following FQDN / application rules are required:
| Destination FQDN | Port | Use |
| -- | -- | - |
| **`quay.io`** | **HTTPS:443** | Mandatory for the installation, used by the cluster. This is used by the cluster to download the platform container images. |
+| **`*.quay.io`** | **HTTPS:443** | Provides core container images. |
| **`registry.redhat.io`** | **HTTPS:443** | Mandatory for core add-ons. This is used by the cluster to download core components such as dev tools, operator-based add-ons, and Red Hat provided container images. |
| **`mirror.openshift.com`** | **HTTPS:443** | This is required in the VDI environment or your laptop to access mirrored installation content and images. It's required in the cluster to download platform release signatures to know what images to pull from quay.io. |
| **`api.openshift.com`** | **HTTPS:443** | Required by the cluster to check if there are available updates before downloading the image signatures. |
openshift Tutorial Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/tutorial-create-cluster.md
If you choose to install and use the CLI locally, this tutorial requires that yo
Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. The default Azure resource quota for a new Azure subscription does not meet this requirement. To request an increase in your resource limit, see [Standard quota: Increase limits by VM series](../azure-portal/supportability/per-vm-quota-requests.md).
+* For example, to check the current subscription quota of the smallest supported virtual machine family SKU ("Standard DSv3"):
+
+ ```azurecli-interactive
+ LOCATION=eastus
+ # Show current vCPU usage and limits for the standardDSv3Family in this region
+ az vm list-usage -l $LOCATION \
+ --query "[?contains(name.value, 'standardDSv3Family')]" \
+ -o table
+ ```
+
ARO pull secret does not change the cost of the RH OpenShift license for ARO.

### Verify your permissions
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
A private link resource is the destination target of a given private endpoint. T
|**Private Link Service** (Your own service) | Microsoft.Network/privateLinkServices | empty | |**Azure Automation** | Microsoft.Automation/automationAccounts | Webhook, DSCAndHybridWorker | |**Azure SQL Database** | Microsoft.Sql/servers | Sql Server (sqlServer) |
-|**Azure Synapse Analytics** | Microsoft.Sql/servers | Sql Server (sqlServer) |
+|**Azure Synapse Analytics** | Microsoft.Synapse/workspaces | Sql, SqlOnDemand, Dev |
|**Azure Storage** | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Table (table, table_secondary)<BR> Queue (queue, queue_secondary)<BR> File (file, file_secondary)<BR> Web (web, web_secondary) |
|**Azure Data Lake Storage Gen2** | Microsoft.Storage/storageAccounts | Blob (blob, blob_secondary)<BR> Data Lake File System Gen2 (dfs, dfs_secondary) |
|**Azure Cosmos DB** | Microsoft.AzureCosmosDB/databaseAccounts | Sql, MongoDB, Cassandra, Gremlin, Table|
private-multi-access-edge-compute-mec Affirmed Private Network Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/affirmed-private-network-service-overview.md
+
+ Title: 'What is Affirmed Private Network Service on Azure?'
+description: Learn about Affirmed Private Network Service solutions on Azure for private LTE/5G networks.
++++ Last updated : 06/16/2021+++
+# What is Affirmed Private Network Service on Azure?
+
+The Affirmed Private Network Service (APNS) is a managed network service offering created for managed service providers and mobile network operators to provide private LTE and private 5G solutions to enterprises.
+
+Affirmed has combined its mobile core technology with Azure's capabilities to create a complete turnkey solution for private LTE/5G networks that helps carriers and enterprises take advantage of managed networks and the mobile edge. The combination of cloud management and automation allows managed service providers to deliver a fully managed infrastructure. It also brings a complete end-to-end solution for operators to pick best-of-breed Radio Access Network, SIM, and Azure services from a rich ecosystem of partners offered in Azure Marketplace. The solution is composed of five components:
+
+- **Cloud-native Mobile Core**: This component is 3GPP standards compliant and supports network functions for both 4G and 5G and has virtual network probes located natively within the mobile core. The mobile core can be deployed on VMs, physical servers, or on an operator's cloud, eliminating the need for dedicated hardware.
+
+- **Private Network Service Manager - Affirmed Networks**: Private Network Service Manager is the application that operators use to deploy, monitor, and manage private mobile core networks on the Azure platform. It features a complete set of management capabilities including simple self-activation and management of private network resources through a programmatic GUI-driven portal.
+
+- **Azure Network Functions Manager**: Azure Network Functions Manager (NFM) is a fully managed cloud-native orchestration service that enables customers to deploy and provision network functions on Azure Stack Edge Pro with GPU for a consistent hybrid experience using the Azure portal.
+
+- **Azure Cloud**: A public cloud computing platform with solutions including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) that can be used for services such as analytics, virtual computing, storage, networking, and much more.
+
+- **Azure Stack Edge**: A cloud-managed, hardware-as-a-service solution shipped by Microsoft. It brings the Azure cloud's power to a local and robust server that can be deployed virtually anywhere local AI and advanced computing tasks need to be performed.
+## Why use the Affirmed Private Network Solution?
+APNS provides the following key benefits to operators and their customers:
+
+- **Deployment Flexibility** - APNS employs Control and User Plane Separation technology and supports three types of deployment modes to address a variety of operator desired scenarios for offering to enterprises. By using the Private Network Service Manager, operators can configure the following deployment models:
+
+    - Standalone enables operators to provide a complete standalone private network on premises by delivering the RAN and 5G core on the Azure Stack Edge, with the management layer on the centralized cloud.
+
+ - Distributed enables faster processing of data by distributing the user plane closer to the edge of the enterprise on the Azure Stack Edge while the control plane is on the cloud; an example of such a model would be manufacturing facilities.
+
+ - All in Cloud allows for the entire 5G core to be deployed on the cloud while the RAN is on the edge, enabling dynamic allocation of cloud resources to suit the changing demands of the workloads.
+
+- **MNO Integration** - APNS is mobile network operator integrated, which means it provides complete mobility across private and public operator networks with its distributed subscriber core. Operators can scale the private mobile network to thousands of enterprise edge sites.
+
+    - Supports all spectrum options: MNO licensed, private licensed, CBRS, shared, and unlicensed.
+
+ - Supports isolated/standalone private networks, multi-site roaming, and macro roaming as it is MNO Integrated.
+
+    - Can provide 99.999% service availability and interwork with any 3GPP-compliant LTE or 5G NR radio. Offers carrier-grade resiliency for enterprises.
+
+- **Automation and Ease of Management** - The APNS solution can be managed completely remotely through Service Manager on the Azure cloud. Through the Service Manager, end users have access to their personalized dashboard and can manage, view, and turn devices on or off on the private mobile network. Operators can monitor the networks for issues and track key parameters to ensure optimal performance.
+
+ - Provides secure, reliable, high bandwidth, low latency private mobile networking service that runs on Azure private multi-access edge compute.
+
+ - Supports complete remote management, without needing truck rolls.
+
+ - Provides cloud automation to enable operators to offer managed services to enterprises or to partner with MSPs who in turn can offer managed services.
+
+- **Smarter Network & Business Insights** - The Affirmed mobile core has an embedded virtual probe/packet brokering function that provides network insight. Operators can use these insights to drive network decisions, while their customers can use them to make smarter monetization decisions.
+
+- **Data Privacy & Security** - APNS uses Azure to deliver security and compliance across private networks and enterprise applications. Operators can confidently deploy the solution for industry use cases that are subject to stringent data privacy laws, such as healthcare, government, public safety, and defense.
+
+## Next steps
+- Learn how to [deploy the Affirmed Private Network Service solution](deploy-affirmed-private-network-service-solution.md)
private-multi-access-edge-compute-mec Deploy Affirmed Private Network Service Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/deploy-affirmed-private-network-service-solution.md
+
+ Title: 'Deploy Affirmed Private Network Service on Azure'
+description: Learn how to deploy the Affirmed Private Network Service solution on Azure
+ Last updated : 06/16/2021
+# Deploy Affirmed Private Network Service on Azure
+
+This article provides a high-level overview of the process of deploying Affirmed Private Network Service (APNS) solution on an Azure Stack Edge device via the Microsoft Azure Marketplace.
+
+The following diagram shows the system architecture of the Affirmed Private Network Service, including the resources required to deploy.
+
+![Affirmed Private Network Service deployment](media/deploy-affirmed-private-network-service/deploy-affirmed-private-network-service.png)
+
+## Collect required information
+
+To deploy APNS, you must have the following resources:
+
+- A configured Azure Network Function Manager **Device** object, which serves as the digital twin of the Azure Stack Edge device.
+
+- A fully deployed Azure Stack Edge with NetFoundry VM.
+
+- Subscription approval for the Affirmed Management Systems VM Offer and APNS Managed Application.
+
+- An Azure account with an active subscription and access to the following:
+
+ - The built-in **Owner** Role for your resource group.
+
+ - The built-in **Managed Application Contributor** role for your subscription.
+
+ - A virtual network and subnet to join (open ports tcp/443 and tcp/8443).
+
+ - 5 IP addresses on the virtual subnet.
+
+ - A valid SAS Token provided by Affirmed Release Engineering.
+
+ - An administrative username/password to program during the deployment.
+
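The network prerequisites above can be scripted ahead of the deployment. The following Azure CLI sketch opens the required ports (tcp/443 and tcp/8443) on a network security group attached to the subnet you plan to join; the resource group, NSG, and rule names are placeholders for illustration, not values defined by the APNS offer:

```shell
# Allow the inbound ports APNS requires on the target subnet's NSG.
# Placeholder names: myResourceGroup, myNsg, AllowApnsInbound.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowApnsInbound \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 8443
```

Verify your chosen priority doesn't conflict with existing rules (`az network nsg rule list --resource-group myResourceGroup --nsg-name myNsg -o table`) before applying.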
+## Deploy APNS
+
+To deploy the APNS Managed Application with all required resources, select it from the Microsoft Azure Marketplace. When you deploy APNS, all the required resources are created automatically for you and are contained in a Managed Resource Group.
+
+Complete the following procedure to deploy APNS:
+1. Open the Azure portal and select **Create a resource**.
+2. Enter *APNS* in the search bar and press Enter.
+3. Select **View Private Offers**.
+ > [!NOTE]
+   > The APNS Managed Application doesn't appear until you select **View Private Offers**.
+4. Select **Create** from the dropdown menu of the **Private Offer**, then select the option to deploy.
+5. Complete the application setup, network settings, and review and create.
+6. Select **Deploy**.
+
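The portal steps above can also be performed with the Azure CLI's managed application commands. This is a sketch only; the plan name, product, publisher, version, and parameter values shown here are placeholders that you must replace with the values from your approved private offer:

```shell
# Deploy a marketplace managed application (placeholder plan/parameter values).
# The managed resource group is created automatically to hold the APNS resources.
az managedapp create \
  --resource-group myResourceGroup \
  --name myApnsDeployment \
  --location eastus \
  --kind MarketPlace \
  --managed-rg-id "/subscriptions/<subscription-id>/resourceGroups/apns-managed-rg" \
  --plan-name "<offer-plan-name>" \
  --plan-product "<offer-product>" \
  --plan-publisher "<offer-publisher>" \
  --plan-version "<offer-version>" \
  --parameters '{ "adminUsername": { "value": "<admin-user>" } }'
```

Because this is a private offer, the subscription must already be approved for the plan, and the exact parameter schema comes from the offer's application setup page.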
+## Next steps
+
+- For step-by-step instructions for deploying APNS and configuring NetFoundry settings on an Azure Stack Edge device, view the [Affirmed Private Network Service Deployment guide](https://go.microsoft.com/fwlink/?linkid=2165732).
+- For information regarding the programmatic GUI-driven portal that operators use to deploy, monitor, and manage private mobile core networks, see the [Affirmed Private Network Service Manager User guide](https://go.microsoft.com/fwlink/?linkid=2165932).
+- Learn more about [Affirmed Private Network Service on Azure](affirmed-private-network-service-overview.md).
private-multi-access-edge-compute-mec Deploy Metaswitch Fusion Core Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-multi-access-edge-compute-mec/deploy-metaswitch-fusion-core-solution.md
+
+ Title: 'Deploy Fusion Core on an Azure Stack Edge device'
+description: Learn how to deploy cloud solutions from Microsoft Azure and Metaswitch Networks that can help future-proof your network, drive down costs, and create new business models and revenue streams.
+ Last updated : 06/16/2021
+# Deploy Fusion Core on an Azure Stack Edge device
+
+This article provides a high-level overview of the process of deploying Fusion Core on an Azure Stack Edge device.
+
+## Collect required Azure resources
+
+You must have the following:
+
+- A fully deployed Azure Stack Edge Pro with GPU. The device's ports must be connected as follows.
+
+ - Port 5 - LAN
+ - Port 6 - WAN
+ - One port connected to your management network. You can choose any port from 1 to 4. You must have enabled your chosen port for compute, as described in the Enable compute network step of [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy).
+- An Azure account with an active subscription and access to the following.
+
+ - The Azure Network Function Manager service. This has the resource provider namespace Microsoft.HybridNetwork. For more information, see [What is Azure Network Function Manager?](../network-function-manager/overview.md).
+ - The Fusion Core - 5G packet core managed application. You must request access by visiting