Updates from: 06/26/2021 03:06:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
Previously updated : 06/17/2021 Last updated : 06/25/2021
Record the **Application (client) ID** for use in a later step when you configur
## Step 3: Get the SPA sample code
-This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign-in, and call a protected web API. Download the sample below:
+This sample demonstrates how a single-page application can use Azure AD B2C for user sign-up and sign-in. Then the app acquires an access token and calls a protected web API. Download the sample below:
[Download a zip file](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa/archive/main.zip) or clone the sample from GitHub:
Now that you've obtained the SPA app sample, update the code with your Azure AD
||||
|authConfig.js|clientId| The SPA application ID from [step 2.1](#21-register-the-web-api-application).|
|policies.js| names| The user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow).|
-|policies.js|authorities|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. Then, replace with the user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow). For example `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`|
+|policies.js|authorities|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`. Then, replace with the user flows, or custom policy you created in [step 1](#step-1-configure-your-user-flow). For example, `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<your-sign-in-sign-up-policy>`|
|policies.js|authorityDomain|Your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
|apiConfig.js|b2cScopes|The scopes you [created for the web API](#22-configure-scopes). For example, `b2cScopes: ["https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read"]`.|
|apiConfig.js|webApi|The URL of the web API, `http://localhost:5000/tasks`.|
You can add and modify redirect URIs in your registered applications at any time
* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
* [Enable authentication in your own SPA application](enable-authentication-spa-app.md)
* Configure [authentication options in your SPA application](enable-authentication-spa-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
Previously updated : 06/11/2021 Last updated : 06/25/2021
For production environment, we recommend you use a distributed memory cache. For
* Learn more [about the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C#about-the-code)
* Learn how to [Enable authentication in your own web application using Azure AD B2C](enable-authentication-web-application.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-spa-app.md
Previously updated : 06/16/2021 Last updated : 06/25/2021
After you successfully authenticate, you can see the parsed ID token appear on t
## Next steps

* Learn more [about the code sample](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa)
-* Configure [Authentication options in your own SPA application using Azure AD B2C](enable-authentication-spa-app-options.md)
+* Configure [Authentication options in your own SPA application using Azure AD B2C](enable-authentication-spa-app-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-api.md
+
+ Title: Enable authentication in a web API using Azure Active Directory B2C
+description: Using Azure Active Directory B2C to protect a web API.
+ Last updated : 06/25/2021
+# Enable authentication in your own web API using Azure Active Directory B2C
+
+To authorize access to a web API, serve only requests that include a valid access token issued by Azure AD B2C. This article shows you how to enable Azure AD B2C authorization for your web API. After you complete the steps in this article, only users who obtain a valid access token are authorized to call your web API endpoints.
+
+## Prerequisites
+
+Before you read this article, read one of the following articles. They describe how to configure authentication for apps that call web APIs. Then, follow the steps in this article to replace the sample web API with your own web API.
+
+- [Configure authentication in a sample ASP.NET Core web application that calls a web API](configure-authentication-sample-web-app-with-api.md)
+- [Configure authentication in a sample single-page application (SPA)](configure-authentication-sample-spa-app.md)
+
+## Overview
+
+Token-based authentication ensures that requests to a web API are accompanied by a valid access token. The app takes the following steps:
+
+1. Authenticates a user with Azure AD B2C.
+1. Acquires an access token with required permission (scopes) for the web API endpoint.
+1. Passes the access token as a bearer token in the authorization header of the HTTP request, using this format:
+ ```http
+ Authorization: Bearer <token>
+ ```
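+
+The following is a minimal sketch, not part of the sample code, of how a .NET client might attach an acquired access token as a bearer token. The token value is a placeholder, and the URL matches the endpoint and port that you configure later in this article.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Threading.Tasks;
+
+class CallProtectedApi
+{
+    static async Task Main()
+    {
+        // Placeholder: your app acquires this token from Azure AD B2C (for example, with MSAL).
+        string accessToken = "<access-token-acquired-by-your-app>";
+
+        using var client = new HttpClient();
+
+        // Pass the access token as a bearer token in the authorization header.
+        client.DefaultRequestHeaders.Authorization =
+            new AuthenticationHeaderValue("Bearer", accessToken);
+
+        HttpResponseMessage response = await client.GetAsync("http://localhost:6000/hello");
+        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
+    }
+}
+```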
+
+The web API takes the following steps:
+
+1. Reads the bearer token from the authorization header in the HTTP request.
+1. Validates the token.
+1. Validates the permissions (scopes) in the token.
+1. Reads the claims that are encoded in the token (optional).
+1. Responds to the HTTP request.
+
+### App registration overview
+
+To enable your app to sign in with Azure AD B2C and call a web API, you must register two applications in the Azure AD B2C directory.
+
+- The **web, mobile, or SPA application** registration enables your app to sign in with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your application. For example, **App ID: 1**.
+
+- The **web API** registration enables your app to call a protected web API. The registration exposes the web API permissions (scopes). The app registration process generates an *Application ID* that uniquely identifies your web API. For example, **App ID: 2**. Grant your app (App ID: 1) permissions to the web API scopes (App ID: 2).
+
+The following diagram describes the app registrations and the application architecture.
+
+![App registrations and the application architecture for an app with web API.](./media/enable-authentication-web-api/app-with-api-architecture.png)
+
+## Prepare your development environment
+
+In the next steps, you create a new web API project. Select your programming language, ASP.NET Core or Node.js. Make sure you have a computer with the following installed:
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+* [Visual Studio Code](https://code.visualstudio.com/download)
+* [C# for Visual Studio Code (latest version)](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+* [.NET 5.0 SDK](https://dotnet.microsoft.com/download/dotnet)
+
+#### [Node.js](#tab/nodejsgeneric)
+
+* [Visual Studio Code](https://code.visualstudio.com/), or another code editor.
+* [Node.js runtime](https://nodejs.org/en/download/)
++++
+## Create a protected web API
+
+In this step, you create a new web API project. Select your preferred programming language, **ASP.NET Core** or **Node.js**.
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Use the [dotnet new](/dotnet/core/tools/dotnet-new) command. The `dotnet new` command creates a new folder named **TodoList** with the web API project assets. Change into the directory, and open [VS Code](https://code.visualstudio.com/).
+
+```dotnetcli
+dotnet new webapi -o TodoList
+cd TodoList
+code .
+```
+
+When prompted to **add required assets to the project**, select **Yes**.
++
+#### [Node.js](#tab/nodejsgeneric)
+
+Use [Express](https://expressjs.com/) for [Node.js](https://nodejs.org/) to build a web API. To create a web API, follow these steps:
+
+1. Create a new folder named **TodoList**, and then change into the folder.
+1. Create a file **app.js**.
+1. Open the command shell, and enter `npm init -y`. This command creates a default **package.json** file for your Node.js project.
+1. In the command shell, enter `npm install express`. This command installs the Express framework.
+
+
+
+## Install the dependencies
+
+In this section, you add the authentication library to your web API project. The authentication library parses the HTTP authentication header, validates the token, and extracts claims. For more details, review the documentation for the library.
++
+#### [ASP.NET Core](#tab/csharpclient)
+
+To add the authentication library, install the package by running the following command:
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web
+```
+
+#### [Node.js](#tab/nodejsgeneric)
+
+To add the authentication library, install the packages by running the following command:
+
+```
+npm install passport
+npm install passport-azure-ad
+npm install morgan
+```
+
+The [morgan package](https://www.npmjs.com/package/morgan) is an HTTP request logger middleware for Node.js.
+++
+## Initiate the authentication library
+
+In this section, you add the necessary code to initiate the authentication library.
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Open *Startup.cs*. At the beginning of the file, add the following `using` declarations:
+
+```csharp
+using Microsoft.AspNetCore.Authentication.JwtBearer;
+using Microsoft.Identity.Web;
+```
++
+Find the `ConfigureServices(IServiceCollection services)` function. Then add the following code snippet before the `services.AddControllers();` line.
++
+```csharp
+public void ConfigureServices(IServiceCollection services)
+{
+ // Adds Microsoft Identity platform (Azure AD B2C) support to protect this Api
+ services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApi(options =>
+ {
+ Configuration.Bind("AzureAdB2C", options);
+
+ options.TokenValidationParameters.NameClaimType = "name";
+ },
+ options => { Configuration.Bind("AzureAdB2C", options); });
+ // End of the Microsoft Identity platform block
+
+ services.AddControllers();
+}
+```
+
+Find the `Configure` function. Then add the following code snippet immediately after the `app.UseRouting();` line.
++
+```csharp
+app.UseAuthentication();
+```
+
+After the change, your code should look like the following snippet:
+
+```csharp
+public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
+{
+ if (env.IsDevelopment())
+ {
+ app.UseDeveloperExceptionPage();
+ }
+
+ app.UseHttpsRedirection();
+
+ app.UseRouting();
+
+ // Add the following line
+ app.UseAuthentication();
+ // End of the block you add
+
+ app.UseAuthorization();
+
+ app.UseEndpoints(endpoints =>
+ {
+ endpoints.MapControllers();
+ });
+}
+```
+
+#### [Node.js](#tab/nodejsgeneric)
+
+Add the following JavaScript code to your *app.js* file.
+
+```javascript
+// Import the required libraries
+const express = require('express');
+const morgan = require('morgan');
+const passport = require('passport');
+const config = require('./config.json');
+
+// Import the passport Azure AD library
+const BearerStrategy = require('passport-azure-ad').BearerStrategy;
+
+// Set the Azure AD B2C options
+const options = {
+ identityMetadata: `https://${config.credentials.tenantName}.b2clogin.com/${config.credentials.tenantName}.onmicrosoft.com/${config.policies.policyName}/${config.metadata.version}/${config.metadata.discovery}`,
+ clientID: config.credentials.clientID,
+ audience: config.credentials.clientID,
+ issuer: config.credentials.issuer,
+ policyName: config.policies.policyName,
+ isB2C: config.settings.isB2C,
+ scope: config.resource.scope,
+ validateIssuer: config.settings.validateIssuer,
+ loggingLevel: config.settings.loggingLevel,
+ passReqToCallback: config.settings.passReqToCallback
+}
+
+// Instantiate the passport Azure AD library with the Azure AD B2C options
+const bearerStrategy = new BearerStrategy(options, (token, done) => {
+ // Send user info using the second argument
+ done(null, { }, token);
+ }
+);
+
+// Use the required libraries
+const app = express();
+
+app.use(morgan('dev'));
+
+app.use(passport.initialize());
+
+passport.use(bearerStrategy);
+
+//enable CORS (for testing only -remove in production/deployment)
+app.use((req, res, next) => {
+ res.header('Access-Control-Allow-Origin', '*');
+ res.header('Access-Control-Allow-Headers', 'Authorization, Origin, X-Requested-With, Content-Type, Accept');
+ next();
+});
+```
+
+
+## Add the endpoints
+
+In this section, you add two endpoints to your web API:
+
+- Anonymous `/public` endpoint. This endpoint returns the current date and time. Use this endpoint to debug your web API with anonymous calls.
+- Protected `/hello` endpoint. This endpoint returns the value of the `name` claim within the access token.
+
+To add the anonymous endpoint:
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Under the */Controllers* folder, add a `PublicController.cs` file. Then add the following code snippet to the *PublicController.cs* file.
+
+```csharp
+using System;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+
+namespace TodoList.Controllers
+{
+ [ApiController]
+ [Route("[controller]")]
+ public class PublicController : ControllerBase
+ {
+ private readonly ILogger<PublicController> _logger;
+
+ public PublicController(ILogger<PublicController> logger)
+ {
+ _logger = logger;
+ }
+
+ [HttpGet]
+ public ActionResult Get()
+ {
+ return Ok( new {date = DateTime.UtcNow.ToString()});
+ }
+ }
+}
+```
+
+#### [Node.js](#tab/nodejsgeneric)
+
+Add the following JavaScript code to the *app.js* file:
++
+```javascript
+// API anonymous endpoint
+app.get('/public', (req, res) => res.send( {'date': new Date() } ));
+```
+
+
+
+To add the protected endpoint:
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Under the */Controllers* folder, add a `HelloController.cs` file. Then add the following code to the *HelloController.cs* file.
+
+```csharp
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Extensions.Logging;
+using Microsoft.Identity.Web.Resource;
+
+namespace TodoList.Controllers
+{
+ [Authorize]
+ [RequiredScope("tasks.read")]
+ [ApiController]
+ [Route("[controller]")]
+ public class HelloController : ControllerBase
+ {
+
+ private readonly ILogger<HelloController> _logger;
+ private readonly IHttpContextAccessor _contextAccessor;
+
+ public HelloController(ILogger<HelloController> logger, IHttpContextAccessor contextAccessor)
+ {
+ _logger = logger;
+ _contextAccessor = contextAccessor;
+ }
+
+ [HttpGet]
+ public ActionResult Get()
+ {
+ return Ok( new { name = User.Identity.Name});
+ }
+ }
+}
+```
+
+The `HelloController` controller is decorated with the [AuthorizeAttribute](/aspnet/core/security/authorization/simple). The `Authorize` attribute limits access to the controller to authenticated users only.
+
+The controller is also decorated with the `[RequiredScope("tasks.read")]` attribute. The [RequiredScopeAttribute](/dotnet/api/microsoft.identity.web.resource.requiredscopeattribute.-ctor) verifies that the web API is called with the right scope, `tasks.read`.
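+
+As an alternative, or in addition, to the `[RequiredScope]` attribute, Microsoft.Identity.Web lets you verify the scope inside the action by calling the `VerifyUserHasAnyAcceptedScope` extension method. The following is a minimal sketch that assumes the same `tasks.read` scope; the controller name is illustrative only:
+
+```csharp
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Identity.Web.Resource;
+
+namespace TodoList.Controllers
+{
+    [Authorize]
+    [ApiController]
+    [Route("[controller]")]
+    public class HelloScopeCheckController : ControllerBase
+    {
+        [HttpGet]
+        public ActionResult Get()
+        {
+            // Rejects the request if the access token doesn't contain the tasks.read scope.
+            HttpContext.VerifyUserHasAnyAcceptedScope("tasks.read");
+
+            return Ok(new { name = User.Identity.Name });
+        }
+    }
+}
+```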
+
+#### [Node.js](#tab/nodejsgeneric)
+
+Add the following JavaScript code to the *app.js* file.
+
+```javascript
+// API protected endpoint
+app.get('/hello',
+ passport.authenticate('oauth-bearer', {session: false}),
+ (req, res) => {
+ console.log('Validated claims: ', req.authInfo);
+
+ // Service relies on the name claim.
+ res.status(200).json({'name': req.authInfo['name']});
+ }
+);
+```
+
+The `/hello` endpoint first calls the `passport.authenticate()` function. The authentication function limits access to the endpoint to authenticated users only.
+
+The authentication function also verifies that the web API is called with the right scopes. The allowed scopes are located in the [configuration file](#configure-the-web-api).
+
+
+
+## Configure the web server
+
+In a development environment, set the web API to listen for incoming HTTP requests on a specific port number. In this example, use HTTP port 6000. The base URI of the web API will be <http://localhost:6000>.
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Add the following JSON snippet to the *appsettings.json* file.
+
+```json
+"Kestrel": {
+ "EndPoints": {
+ "Http": {
+ "Url": "http://localhost:6000"
+ }
+ }
+ }
+```
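+
+As an alternative, you can set the listening URL in code by using `UseUrls`. The following is a minimal sketch that assumes the default *Program.cs* layout generated by the `dotnet new webapi` template:
+
+```csharp
+using Microsoft.AspNetCore.Hosting;
+using Microsoft.Extensions.Hosting;
+
+namespace TodoList
+{
+    public class Program
+    {
+        public static void Main(string[] args) =>
+            CreateHostBuilder(args).Build().Run();
+
+        public static IHostBuilder CreateHostBuilder(string[] args) =>
+            Host.CreateDefaultBuilder(args)
+                .ConfigureWebHostDefaults(webBuilder =>
+                {
+                    webBuilder.UseStartup<Startup>();
+                    // Listen on HTTP port 6000 instead of the template defaults.
+                    webBuilder.UseUrls("http://localhost:6000");
+                });
+    }
+}
+```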
+
+#### [Node.js](#tab/nodejsgeneric)
+
+Add the following JavaScript code to the *app.js* file.
+
+```javascript
+// Starts listening on port 6000
+const port = process.env.PORT || 6000;
+
+app.listen(port, () => {
+ console.log('Listening on port ' + port);
+});
+```
+
+
+## Configure the web API
+
+In this section, you add configurations to a configuration file. The file contains information about your Azure AD B2C identity provider. The web API app uses this information to validate the access token the web app passes as a bearer token.
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+Under the project root folder, open the `appsettings.json` file. Add the following settings:
++
+```json
+{
+ "AzureAdB2C": {
+ "Instance": "https://contoso.b2clogin.com",
+ "Domain": "contoso.onmicrosoft.com",
+ "ClientId": "<web-api-app-application-id>",
+ "SignedOutCallbackPath": "/signout/<your-sign-up-in-policy>",
+ "SignUpSignInPolicyId": "<your-sign-up-in-policy>"
+ },
+  // More settings here
+}
+```
+
+Update the following properties of the app settings:
+
+|Section |Key |Value |
+||||
+|AzureAdB2C|Instance| The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `https://contoso.b2clogin.com`.|
+|AzureAdB2C|Domain| Your Azure AD B2C tenant full [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso.onmicrosoft.com`.|
+|AzureAdB2C|ClientId| The web API application ID. In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*. For guidance on how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
+|AzureAdB2C|SignUpSignInPolicyId|The user flow or custom policy. For guidance on how to get your user flow or policy, see [Prerequisites](#prerequisites). |
++
+#### [Node.js](#tab/nodejsgeneric)
+
+Under the project root folder, create a `config.json` file, and add the following JSON snippet.
+
+```json
+{
+ "credentials": {
+ "tenantName": "<your-tenant-name>",
+ "clientID": "<your-webapi-application-ID>",
+ "issuer": "https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/"
+ },
+ "policies": {
+ "policyName": "b2c_1_susi"
+ },
+ "resource": {
+ "scope": ["tasks.read"]
+ },
+ "metadata": {
+ "discovery": ".well-known/openid-configuration",
+ "version": "v2.0"
+ },
+ "settings": {
+ "isB2C": true,
+ "validateIssuer": true,
+ "passReqToCallback": false,
+ "loggingLevel": "info"
+ }
+}
+```
+
+Update the following properties of the app settings:
+
+|Section |Key |Value |
+||||
+| credentials | tenantName | The first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). For example, `contoso`.|
+| credentials |clientID | The web API application ID. In the [diagram above](#app-registration-overview), it's the application with *App ID: 2*. For guidance on how to get your web API application registration ID, see [Prerequisites](#prerequisites). |
+| credentials | issuer| The token issuer `iss` claim value. Azure AD B2C by default returns the token in the following format: `https://<your-tenant-name>.b2clogin.com/<your-tenant-ID>/v2.0/`. Replace the `<your-tenant-name>` with the first part of your Azure AD B2C [tenant name](tenant-management.md#get-your-tenant-name). Replace the `<your-tenant-ID>` with your [Azure AD B2C tenant ID](tenant-management.md#get-your-tenant-id). |
+| policies | policyName | The user flow or custom policy. For guidance on how to get your user flow or policy, see [Prerequisites](#prerequisites).|
+| resource | scope | The scopes of your web API application registration. For guidance on how to get your web API scope, see [Prerequisites](#prerequisites).|
+
+
+
+## Run and test the web API
+
+Finally, run the web API with your Azure AD B2C environment settings.
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+In the command shell, run the following command to start the web application:
+
+```bash
+ dotnet run
+```
+
+You should see the following output. This output means your app is up and running and ready to receive requests.
+
+```
+Now listening on: http://localhost:6000
+```
+
+To stop the program, press `Ctrl+C` in the command shell. You can rerun the app using the `dotnet run` command.
+
+> [!TIP]
+> As an alternative to running the `dotnet run` command, you can use the [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile, and debug loop.
+
+Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the current date and time displayed.
++++
+#### [Node.js](#tab/nodejsgeneric)
+
+In the command shell, run the following command to start the web application:
+
+```bash
+node app.js
+```
+
+You should see the following output. This output means your app is up and running and ready to receive requests.
+
+```
+Example app listening on port 6000!
+```
+
+To stop the program, press `Ctrl+C` in the command shell. You can rerun the app using the `node app.js` command.
+
+> [!TIP]
+> As an alternative to running the `node app.js` command, you can use the [VS Code debugger](https://code.visualstudio.com/docs/editor/debugging). VS Code's built-in debugger helps accelerate your edit, compile, and debug loop.
+
+Open a browser and go to `http://localhost:6000/public`. In the browser window, you should see the current date and time displayed.
+++
+## Calling the web API from your app
+
+First, try to call the protected web API endpoint without an access token. Open a browser and go to `http://localhost:6000/hello`. The API returns an unauthorized HTTP error message, confirming that the web API is protected with a bearer token.
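+
+If you prefer to script this check rather than use a browser, the following minimal sketch (not part of the sample) calls both endpoints without a token and prints the status codes. Expect `200` for `/public` and `401` for `/hello`; the port matches the configuration earlier in this article.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Threading.Tasks;
+
+class CheckEndpoints
+{
+    static async Task Main()
+    {
+        using var client = new HttpClient();
+
+        // Anonymous endpoint: expected to succeed (200 OK).
+        HttpResponseMessage publicResponse = await client.GetAsync("http://localhost:6000/public");
+        Console.WriteLine($"/public -> {(int)publicResponse.StatusCode}");
+
+        // Protected endpoint without an Authorization header: expected to fail (401 Unauthorized).
+        HttpResponseMessage helloResponse = await client.GetAsync("http://localhost:6000/hello");
+        Console.WriteLine($"/hello -> {(int)helloResponse.StatusCode}");
+    }
+}
+```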
+
+Continue to configure your app to call the web API. For guidance, see the [Prerequisites](#prerequisites) section.
+
+## Next steps
+
+Get the complete example on GitHub:
+
+#### [ASP.NET Core](#tab/csharpclient)
+
+* [.NET Core web API using the Microsoft identity library](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C/TodoListService)
+
+#### [Node.js](#tab/nodejsgeneric)
+
+* [Node.js Web API using the Passport.js library](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi)
+++++
active-directory-b2c Enable Authentication Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/enable-authentication-web-app-with-api.md
Previously updated : 06/11/2021 Last updated : 06/25/2021
# Enable authentication in your own web application that calls a web API using Azure Active Directory B2C
-This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own ASP.NET web application that calls a web API. Learn how create an ASP.NET Core web application with ASP.NET Core middleware that uses the [OpenID Connect](openid-connect.md) protocol. Use this article with [Configure authentication in a sample web application that calls a web API](configure-authentication-sample-web-app-with-api.md), substituting the sample web app with your own web app.
+This article shows you how to add Azure Active Directory B2C (Azure AD B2C) authentication to your own ASP.NET web application that calls a web API. Learn how to create an ASP.NET Core web application with ASP.NET Core middleware that uses the [OpenID Connect](openid-connect.md) protocol. Use this article with [Configure authentication in a sample web application that calls a web API](configure-authentication-sample-web-app-with-api.md), replacing the sample web app with your own web app.
This article focuses on the web application project. For instructions on how to create the web API, see the [to do list web API sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C).
using Microsoft.AspNetCore.Authorization;
## Add the to do list view
-To call the to do web api, you need to have an access token with the right scopes. In this step you acc and action to the `Home` controller. Under the `Views/Home` folder, add the `TodoList.cshtml` view.
+To call the to do web api, you need to have an access token with the right scopes. In this step, you add an action to the `Home` controller. Under the `Views/Home` folder, add the `TodoList.cshtml` view.
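+
+The following is a minimal sketch, not part of the sample, of how the `Home` controller action might acquire that access token with the `ITokenAcquisition` service from Microsoft.Identity.Web before calling the web API. The scope value is a placeholder; use the scope your web API exposes.
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Authorization;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Identity.Web;
+
+public class HomeController : Controller
+{
+    private readonly ITokenAcquisition _tokenAcquisition;
+
+    public HomeController(ITokenAcquisition tokenAcquisition) =>
+        _tokenAcquisition = tokenAcquisition;
+
+    [Authorize]
+    public async Task<IActionResult> TodoList()
+    {
+        // Acquire an access token with the scope the web API expects (placeholder value).
+        string accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(
+            new[] { "https://<your-tenant-name>.onmicrosoft.com/tasks-api/tasks.read" });
+
+        // Pass the token to the view (or use it to call the web API), then render TodoList.cshtml.
+        ViewData["AccessToken"] = accessToken;
+        return View();
+    }
+}
+```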
```razor @{
Azure AD B2C identity provider settings are stored in the `appsettings.json` fil
1. Select **SignIn/Up**.
1. Complete the sign-up or sign-in process.
-After you successfully authenticate, you will see your display name in the navigation bar.
+After you successfully authenticate, check your display name in the navigation bar.
* To view the claims that the Azure AD B2C token returns to your app, select **Claims**.
* To view the access token, select **To do list**.
After you successfully authenticate, you will see your display name in the navi
## Next steps

* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-web-application-options.md)
+* [Enable authentication in your own web API](enable-authentication-web-api.md)
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 05/11/2021 Last updated : 06/11/2021
The provisioning service continues running back-to-back incremental cycles indef
### Errors and retries
-If an error in the target system prevents an individual user from being added, updated, or deleted in the target system, the operation is retried in the next sync cycle. If the user continues to fail, then the retries will begin to occur at a reduced frequency, gradually scaling back to just one attempt per day. To resolve the failure, administrators must check the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context) to determine the root cause and take the appropriate action. Common failures can include:
+If an error in the target system prevents an individual user from being added, updated, or deleted in the target system, the operation is retried in the next sync cycle. The errors are continually retried, gradually scaling back the frequency of retries. To resolve the failure, administrators must check the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md?context=azure/active-directory/manage-apps/context/manage-apps-context) to determine the root cause and take the appropriate action. Common failures can include:
- Users not having an attribute populated in the source system that is required in the target system
- Users having an attribute value in the source system for which there's a unique constraint in the target system, and the same value is present in another user record
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
To publish your app through Application Proxy with a custom domain:
![Add CNAME DNS entry](./media/application-proxy-configure-custom-domain/dns-info.png)

10. Follow the instructions at [Manage DNS records and record sets by using the Azure portal](../../dns/dns-operations-recordsets-portal.md) to add a DNS record that redirects the new external URL to the *msappproxy.net* domain.
+ > [!IMPORTANT]
+ > Ensure that you are properly using a CNAME record that points to the *msappproxy.net* domain. Do not point records to IP addresses or server DNS names since these are not static and may impact the resiliency of the service.
11. To check that the DNS record is configured correctly, use the [nslookup](https://social.technet.microsoft.com/wiki/contents/articles/29184.nslookup-for-beginners.aspx) command to confirm that your external URL is reachable and the *msappproxy.net* domain appears as an alias.
active-directory Active Directory Certificate Based Authentication Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/active-directory-certificate-based-authentication-get-started.md
For the configuration, you can use the [Azure Active Directory PowerShell Versio
1. Start Windows PowerShell with administrator privileges. 2. Install the Azure AD module version [2.0.0.33](https://www.powershellgallery.com/packages/AzureAD/2.0.0.33) or higher.
-```powershell
- Install-Module -Name AzureAD -RequiredVersion 2.0.0.33
-```
+ ```powershell
+ Install-Module -Name AzureAD -RequiredVersion 2.0.0.33
+ ```
As a first configuration step, you need to establish a connection with your tenant. As soon as a connection to your tenant exists, you can review, add, delete, and modify the trusted certificate authorities that are defined in your directory.
To ensure that the revocation persists, you must set the **Effective Date** of t
The following steps outline the process for updating and invalidating the authorization token by setting the **StsRefreshTokenValidFrom** field.
-**To configure revocation:**
- 1. Connect with admin credentials to the MSOL service:
-```powershell
- $msolcred = get-credential
- connect-msolservice -credential $msolcred
-```
+ ```powershell
+ $msolcred = get-credential
+ connect-msolservice -credential $msolcred
+ ```
2. Retrieve the current StsRefreshTokensValidFrom value for a user:
-```powershell
- $user = Get-MsolUser -UserPrincipalName test@yourdomain.com`
- $user.StsRefreshTokensValidFrom
-```
+ ```powershell
+ $user = Get-MsolUser -UserPrincipalName test@yourdomain.com`
+ $user.StsRefreshTokensValidFrom
+ ```
3. Configure a new StsRefreshTokensValidFrom value for the user equal to the current timestamp:
-```powershell
- Set-MsolUser -UserPrincipalName test@yourdomain.com -StsRefreshTokensValidFrom ("03/05/2016")
-```
+ ```powershell
+ Set-MsolUser -UserPrincipalName test@yourdomain.com -StsRefreshTokensValidFrom ("03/05/2016")
+ ```
The date you set must be in the future. If the date is not in the future, the **StsRefreshTokensValidFrom** property is not set. If the date is in the future, **StsRefreshTokensValidFrom** is set to the current time (not the date indicated by the Set-MsolUser command).
If your sign-in is successful, then you know that:
### Testing Office mobile applications
-**To test certificate-based authentication on your mobile Office application:**
- 1. On your test device, install an Office mobile application (for example, OneDrive).
-3. Launch the application.
-4. Enter your username, and then select the user certificate you want to use.
+1. Launch the application.
+1. Enter your username, and then select the user certificate you want to use.
You should be successfully signed in.
An EAS profile can be configured and placed on the device through the utilizatio
### Testing EAS client applications on Android
-**To test certificate authentication:**
1. Configure an EAS profile in the application that satisfies the requirements in the prior section.
2. Open the application, and verify that mail is synchronizing.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-policy.md
Previously updated : 01/27/2021 Last updated : 06/25/2021
With a two-gate policy, administrators don't have the ability to use security qu
The two-gate policy requires two pieces of authentication data, such as an email address, authenticator app, or a phone number. A two-gate policy applies in the following circumstances:

* All the following Azure administrator roles are affected:
- * Helpdesk administrator
- * Service support administrator
+ * Application administrator
+ * Application proxy service administrator
+ * Authentication administrator
+ * Azure AD Joined Device Local Administrator
* Billing administrator
- * Partner Tier1 Support
- * Partner Tier2 Support
- * Exchange administrator
- * Mailbox Administrator
- * Skype for Business administrator
- * User administrator
+ * Compliance administrator
+ * Device administrators
+ * Directory synchronization accounts
* Directory writers
+ * Dynamics 365 administrator
+ * Exchange administrator
* Global administrator or company administrator
- * SharePoint administrator
- * Compliance administrator
- * Application administrator
- * Security administrator
- * Privileged role administrator
+ * Helpdesk administrator
* Intune administrator
- * Azure AD Joined Device Local Administrator
- * Application proxy service administrator
- * Dynamics 365 administrator
- * Power BI service administrator
- * Authentication administrator
+ * Mailbox Administrator
+ * Partner Tier1 Support
+ * Partner Tier2 Support
* Password administrator
+ * Power BI service administrator
* Privileged Authentication administrator
+ * Privileged role administrator
+ * SharePoint administrator
+ * Security administrator
+ * Service support administrator
+ * Skype for Business administrator
+ * User administrator
* If 30 days have elapsed in a trial subscription; or
* A custom domain has been configured for your Azure AD tenant, such as *contoso.com*; or
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Previously updated : 06/11/2021 Last updated : 06/24/2021
Keep these limitations in mind:
- Temporary Access Pass is in public preview and currently not available in Azure for US Government. - Users in scope for Self Service Password Reset (SSPR) registration policy *or* [Identity Protection Multi-factor authentication registration policy](../identity-protection/howto-identity-protection-configure-mfa-policy.md) will be required to register authentication methods after they have signed in with a Temporary Access Pass. Users in scope for these policies will get redirected to the [Interrupt mode of the combined registration](concept-registration-mfa-sspr-combined.md#combined-registration-modes). This experience does not currently support FIDO2 and Phone Sign-in registration. -- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE) and Autopilot.
+- A Temporary Access Pass cannot be used with the Network Policy Server (NPS) extension and Active Directory Federation Services (AD FS) adapter, or during Windows Setup/Out-of-Box-Experience (OOBE), Autopilot, or to deploy Windows Hello for Business.
- When Seamless SSO is enabled on the tenant, the users are prompted to enter a password. The **Use your Temporary Access Pass instead** link will be available for the user to sign-in with a Temporary Access Pass. ![Screenshot of Use a Temporary Access Pass instead](./media/how-to-authentication-temporary-access-pass/alternative.png)
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr.md
Previously updated : 08/26/2020 Last updated : 06/25/2021
If you or your users have problems using SSPR, review the following troubleshoot
| I've set a password reset policy, but when an admin account uses password reset, that policy isn't applied. | Microsoft manages and controls the administrator password reset policy to ensure the highest level of security. |
| The user is prevented from attempting a password reset too many times in a day. | An automatic throttling mechanism is used to block users from attempting to reset their passwords too many times in a short period of time. Throttling occurs in the following scenarios: <br><ul><li>The user attempts to validate a phone number five times in one hour.</li><li>The user attempts to use the security questions gate five times in one hour.</li><li>The user attempts to reset a password for the same user account five times in one hour.</li></ul>If a user encounters this problem, they must wait 24 hours after the last attempt. The user can then reset their password. |
| The user sees an error when validating their phone number. | This error occurs when the phone number entered doesn't match the phone number on file. Make sure the user is entering the complete phone number, including the area and country code, when they attempt to use a phone-based method for password reset. |
+| The user sees an error when using their email address. | If the UPN differs from the primary ProxyAddress/SMTPAddress of the user, the [Sign-in to Azure AD with email as an alternate login ID](howto-authentication-use-email-signin.md) setting must be enabled for the tenant. |
| There's an error processing the request. | Generic SSPR registration errors can be caused by many issues, but generally this error is caused by either a service outage or a configuration issue. If you continue to see this generic error when you re-try the SSPR registration process, [contact Microsoft support](#contact-microsoft-support) for additional assistance. |
| On-premises policy violation | The password doesn't meet the on-premises Active Directory password policy. The user must define a password that meets the complexity or strength requirements. |
| Password doesn't comply with fuzzy policy | The password that was used appears in the [banned password list](./concept-password-ban-bad.md#how-are-passwords-evaluated) and can't be used. The user must define a password that meets or exceeds the banned password list policy. |
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/tutorial-single-forest.md
You can use the environment you create in this tutorial for testing or for getti
### In your on-premises environment
-1. Identify a domain-joined host server running Windows Server 2012 R2 or greater with minimum of 4 GB RAM and .NET 4.7.1+ runtime
+1. Identify a domain-joined host server running Windows Server 2016 or greater with a minimum of 4 GB of RAM and the .NET 4.7.1+ runtime
2. If there is a firewall between your servers and Azure AD, configure the following items: - Ensure that agents can make *outbound* requests to Azure AD over the following ports:
active-directory Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/terms-of-use.md
Azure AD terms of use policies have the following capabilities:
To use and configure Azure AD terms of use policies, you must have: - Azure AD Premium P1, P2, EMS E3, or EMS E5 subscription.
- - If you don't have one of theses subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/).
+ - If you don't have one of these subscriptions, you can [get Azure AD Premium](../fundamentals/active-directory-get-started-premium.md) or [enable Azure AD Premium trial](https://azure.microsoft.com/trial/get-started-active-directory/).
- One of the following administrator accounts for the directory you want to configure: - Global Administrator - Security Administrator
You can edit some details of terms of use policies, but you can't modify an exis
1. In the Edit terms of use pane, you can change the following:
   - **Name** – this is the internal name of the ToU that is not shared with end users
   - **Display name** – this is the name that end users can see when viewing the ToU
-   - **Require users to expand the terms of use** – Setting this to **On** will force the end use to expand the terms of use policy document before accepting it.
+   - **Require users to expand the terms of use** – Setting this to **On** will force the end user to expand the terms of use policy document before accepting it.
   - (Preview) You can **update an existing terms of use** document
   - You can add a language to an existing ToU
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-saml-claims-customization.md
Previously updated : 12/09/2020 Last updated : 05/10/2021
To edit the NameID (name identifier value):
If the SAML request contains the element NameIDPolicy with a specific format, then the Microsoft identity platform will honor the format in the request.
-If the SAML request doesn't contain an element for NameIDPolicy, then the Microsoft identity platform will issue the NameID with the format you specify. If no format is specified, the Microsoft identity platform will use the default source format associated with the claim source selected.
+If the SAML request doesn't contain an element for NameIDPolicy, then the Microsoft identity platform will issue the NameID with the format you specify. If no format is specified, the Microsoft identity platform will use the default source format associated with the claim source selected. If a transformation results in a null or illegal value, Azure AD will send a persistent pairwise identifier in the nameIdentifier.
From the **Choose name identifier format** dropdown, you can select one of the following options.
You can also use the claims transformation functions.
| Function | Description |
|-|-|
| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Joins an attribute with a verified domain. If the selected user identifier value has a domain, it will extract the username to append the selected verified domain. For example, if you select the email (joe_smith@contoso.com) as the user identifier value and select contoso.onmicrosoft.com as the verified domain, this will result in joe_smith@contoso.onmicrosoft.com. |
| **ToLower()** | Converts the characters of the selected attribute into lowercase characters. |
| **ToUpper()** | Converts the characters of the selected attribute into uppercase characters. |
You can use the following functions to transform claims.
| Function | Description |
|-|-|
| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the join is restricted to a verified domain. If the selected user identifier value has a domain, it will extract the username to append the selected verified domain. For example, if you select the email (joe_smith@contoso.com) as the user identifier value and select contoso.onmicrosoft.com as the verified domain, this will result in joe_smith@contoso.onmicrosoft.com. |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. |
| **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
active-directory Msal Net Differences Adal Net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-differences-adal-net.md
+
+ Title: Differences between ADAL.NET and MSAL.NET apps | Azure
+
+description: Learn about the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET).
+ Last updated : 06/09/2021
+#Customer intent: As an application developer, I want to learn about the differences between the ADAL.NET and MSAL.NET libraries so I can migrate my applications to MSAL.NET.
++
+# Differences between ADAL.NET and MSAL.NET apps
+
+Migrating your applications from ADAL to MSAL comes with security and resiliency benefits. This article outlines differences between MSAL.NET and ADAL.NET. In most cases, you want to use MSAL.NET and the Microsoft identity platform, which is the latest generation of the Microsoft Authentication Libraries. Using MSAL.NET, you acquire tokens for users signing in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C.
+
+If you're already familiar with ADAL.NET and the Azure AD for developers (v1.0) endpoint, read [What's different about the Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md). You still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services). For more information, see [AD FS support](https://aka.ms/msal-net-adfs-support).
+
+| | **ADAL NET** | **MSAL NET** |
+|--|--||
+| **NuGet packages and Namespaces** |ADAL was consumed from the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package. The namespace was `Microsoft.IdentityModel.Clients.ActiveDirectory`. | Add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package, and use the `Microsoft.Identity.Client` namespace. If you're building a confidential client application, check out [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web). |
+| **Scopes and resources** | ADAL.NET acquires tokens for *resources*. | MSAL.NET acquires tokens for *scopes*. Several MSAL.NET `AcquireTokenXXX` overrides require a parameter called scopes(`IEnumerable<string> scopes`). This parameter is a simple list of strings that declare the permissions and resources that are requested. Well-known scopes are the [Microsoft Graph's scopes](/graph/permissions-reference). You can also [access v1.0 resources using MSAL.NET](#scopes). |
+| **Core classes** | ADAL.NET used [AuthenticationContext](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD) as the representation of your connection to the Security Token Service (STS) or authorization server, through an Authority. | MSAL.NET is designed around [client applications](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications). It defines `IPublicClientApplication` interfaces for public client applications and `IConfidentialClientApplication` for confidential client applications, as well as a base interface `IClientApplicationBase` for the contract common to both types of applications.|
+| **Token acquisition** | In public clients, ADAL uses `AcquireTokenAsync` and `AcquireTokenSilentAsync` for authentication calls. | In public clients, MSAL uses `AcquireTokenInteractive` and `AcquireTokenSilent` for the same authentication calls. The parameters are different from the ADAL ones. <br><br>In Confidential client applications, there are [token acquisition methods](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Acquiring-Tokens) with an explicit name depending on the scenario. Another difference is that, in MSAL.NET, you no longer have to pass in the `ClientID` of your application in every AcquireTokenXX call. The `ClientID` is set only once when building `IPublicClientApplication` or `IConfidentialClientApplication`.|
+| **IAccount and IUser** | ADAL defines the notion of user through the IUser interface. However, a user is a human or a software agent. As such, a user can own one or more accounts in the Microsoft identity platform (several Azure AD accounts, Azure AD B2C, Microsoft personal accounts). The user can also be responsible for one or more Microsoft identity platform accounts. | MSAL.NET defines the concept of account (through the IAccount interface). The IAccount interface represents information about a single account. The user can have several accounts in different tenants. MSAL.NET provides better information in guest scenarios, as home account information is provided. You can read more about the [differences between IUser and IAccount](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/msal-net-2-released#iuser-is-replaced-by-iaccount).|
+| **Cache persistence** | ADAL.NET allows you to extend the `TokenCache` class to implement the desired persistence functionality on platforms without a secure storage (.NET Framework and .NET Core) by using the `BeforeAccess` and `BeforeWrite` methods. For details, see [token cache serialization in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization). | MSAL.NET makes the token cache a sealed class, removing the ability to extend it. As such, your implementation of token cache persistence must be in the form of a helper class that interacts with the sealed token cache. This interaction is described in the [token cache serialization in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization) article. The serialization for a public client application (see [token cache for a public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application)) is different from that for a confidential client application (see [token cache for a web app or web API](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application)). |
+| **Common authority** | ADAL uses Azure AD v1.0. `https://login.microsoftonline.com/common` authority in Azure AD v1.0 (which ADAL uses) allows users to sign in using any AAD organization (work or school) account. Azure AD v1.0 doesn't allow sign in with Microsoft personal accounts. For more information, see [authority validation in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD#authority-validation). | MSAL uses Azure AD v2.0. `https://login.microsoftonline.com/common` authority in Azure AD v2.0 (which MSAL uses) allows users to sign in with any AAD organization (work or school) account or with a Microsoft personal account. To restrict sign in using only organization accounts (work or school account) in MSAL, you'll need to use the `https://login.microsoftonline.com/organizations` endpoint. For details, see the `authority` parameter in [public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications#publicclientapplication). |
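+
+The following is a minimal sketch, not from the article, of what these differences look like in MSAL.NET for a public client application: the client ID is set once on the application object, and tokens are requested for scopes rather than resources. The client ID, authority, redirect URI, and scope values are placeholders.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Identity.Client;
+
+class MsalPublicClientSketch
+{
+    static async Task Main()
+    {
+        // The client ID is set once when building the application (no per-call ClientID).
+        IPublicClientApplication app = PublicClientApplicationBuilder
+            .Create("<your-client-id>")
+            .WithAuthority("https://login.microsoftonline.com/organizations")
+            .WithRedirectUri("http://localhost")
+            .Build();
+
+        // Tokens are acquired for scopes (here, a Microsoft Graph scope), not for resources.
+        string[] scopes = { "User.Read" };
+
+        AuthenticationResult result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
+        Console.WriteLine($"Signed in: {result.Account.Username}");
+    }
+}
+```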
+
+## Supported grants
+
+Below is a summary comparing MSAL.NET and ADAL.NET supported grants for both public and confidential client applications.
+
+### Public client applications
+
+The following image summarizes some of the differences between ADAL.NET and MSAL.NET for a public client application.
+
+[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a public client application.](media/msal-compare-msaldotnet-and-adaldotnet/differences.png)](media/msal-compare-msaldotnet-and-adaldotnet/differences.png#lightbox)
+
+Here are the grants supported in ADAL.NET and MSAL.NET for Desktop and Mobile applications.
+
+Grant | MSAL.NET | ADAL.NET |
+ | - | - |
+Interactive | [Acquiring tokens interactively in MSAL.NET](scenario-desktop-acquire-token.md#acquire-a-token-interactively) | [Interactive Auth](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-interactivelyPublic-client-application-flows) |
+Integrated Windows Authentication | [Integrated Windows Authentication](scenario-desktop-acquire-token.md#integrated-windows-authentication) | [Integrated authentication on Windows (Kerberos)](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-Integrated-authentication-on-Windows-(Kerberos)) |
+Username / Password | [Username Password Authentication](scenario-desktop-acquire-token.md#username-and-password) | [Acquiring tokens with username and password](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-username-and-password) |
+Device code flow | [Device Code flow](scenario-desktop-acquire-token.md#command-line-tool-without-a-web-browser) | [Device profile for devices without web browsers](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Device-profile-for-devices-without-web-browsers) |
+
+### Confidential client applications
+
+The following image summarizes some of the differences between ADAL.NET and MSAL.NET for a confidential client application.
+
+[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a confidential client application.](media/msal-net-migration/confidential-client-application.png)](media/msal-net-migration/confidential-client-application.png#lightbox)
++
+Here are the grants supported in ADAL.NET, MSAL.NET, and Microsoft.Identity.Web for web applications, web APIs, and daemon applications.
+
+Type of App | Grant | MSAL.NET | ADAL.NET |
+ | | | -- |
+Web app, web API, daemon | Client Credentials | [Client credential flows in MSAL.NET](scenario-daemon-acquire-token.md#acquiretokenforclient-api) | [Client credential flows in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Client-credential-flows) |
+Web API | On behalf of | [On behalf of in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/on-behalf-of) | [Service to service calls on behalf of the user with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Service-to-service-calls-on-behalf-of-the-user) |
+Web app | Auth Code | [Acquiring tokens with authorization codes on web apps with A MSAL.NET](scenario-web-app-call-api-acquire-token.md) | [Acquiring tokens with authorization codes on web apps with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-authorization-codes-on-web-apps) |
+
+## Migrating from ADAL 2.x with refresh tokens
+
+In ADAL.NET v2.X, the refresh tokens were exposed, allowing you to develop solutions around the use of these tokens by caching them and using the `AcquireTokenByRefreshToken` methods provided by ADAL 2.x.
+
+Some of those solutions were used in scenarios such as:
+
+- Long-running services that do actions, including refreshing dashboards for users, when the users are no longer connected or signed in to the app.
+- Web farm scenarios for enabling the client to bring the refresh token to the web service (caching is done client side, in an encrypted cookie, and not server side).
+
+MSAL.NET doesn't expose refresh tokens for security reasons. MSAL handles refreshing tokens for you.
+
+Fortunately, MSAL.NET has an API that allows you to migrate your previous refresh tokens (acquired with ADAL) into the `IConfidentialClientApplication`:
+
+```csharp
+/// <summary>
+/// Acquires an access token from an existing refresh token and stores it and the refresh token into
+/// the application user token cache, where it will be available for further AcquireTokenSilent calls.
+/// This method can be used in migration to MSAL from ADAL v2 and in various integration
+/// scenarios where you have a RefreshToken available.
+/// (see https://aka.ms/msal-net-migration-adal2-msal2)
+/// </summary>
+/// <param name="scopes">Scope to request from the token endpoint.
+/// Setting this to null or empty will request an access token, refresh token and ID token with default scopes</param>
+/// <param name="refreshToken">The refresh token from ADAL 2.x</param>
+IByRefreshToken.AcquireTokenByRefreshToken(IEnumerable<string> scopes, string refreshToken);
+```
+
+With this method, you can provide the previously used refresh token along with any scopes (resources) you want. The refresh token will be exchanged for a new one and cached into your application.
+
+As this method is intended for scenarios that aren't typical, it isn't readily accessible with the `IConfidentialClientApplication` without first casting it to `IByRefreshToken`.
+
+The code snippet below shows some migration code in a confidential client application.
+
+```csharp
+TokenCache userCache = GetTokenCacheForSignedInUser();
+string rt = GetCachedRefreshTokenForSignedInUser();
+
+IConfidentialClientApplication app;
+app = ConfidentialClientApplicationBuilder.Create(clientId)
+ .WithAuthority(Authority)
+ .WithRedirectUri(RedirectUri)
+ .WithClientSecret(ClientSecret)
+ .Build();
+IByRefreshToken appRt = app as IByRefreshToken;
+
+AuthenticationResult result = await appRt.AcquireTokenByRefreshToken(null, rt)
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+```
+
+`GetCachedRefreshTokenForSignedInUser` retrieves the refresh token that was stored in some storage by a previous version of the application that used ADAL 2.x. `GetTokenCacheForSignedInUser` deserializes a cache for the signed-in user (as confidential client applications should have one cache per user).
+
+An access token and an ID token are returned in the `AuthenticationResult` value while the new refresh token is stored in the cache. You can also use this method for various integration scenarios where you have a refresh token available.
+
+## v1.0 and v2.0 tokens
+
+There are two versions of tokens: v1.0 tokens and v2.0 tokens. The v1.0 endpoint (used by ADAL) emits v1.0 ID tokens, while the v2.0 endpoint (used by MSAL) emits v2.0 ID tokens. However, both endpoints emit access tokens in whichever version the target web API accepts. A property of the web API's application manifest enables developers to choose which token version is accepted. See `accessTokenAcceptedVersion` in the [application manifest](reference-app-manifest.md) reference documentation.
+
+For more information about v1.0 and v2.0 access tokens, see [Azure Active Directory access tokens](access-tokens.md).
+
+## Exceptions
+
+### Interaction required exceptions
+
+Using MSAL.NET, you catch `MsalUiRequiredException` as described in [AcquireTokenSilent](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-a-cached-token).
+
+```csharp
+catch(MsalUiRequiredException exception)
+{
+    try { /* try to authenticate interactively */ }
+}
+```
+
+For details, see [Handle errors and exceptions in MSAL.NET](msal-error-handling-dotnet.md)
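+
+A fuller sketch of that pattern, assuming `app` is an `IPublicClientApplication` and `scopes` is a `string[]` defined elsewhere in your code:
+
+```csharp
+using System.Linq;
+using Microsoft.Identity.Client;
+
+// Recommended pattern: try silent acquisition first and fall back to interactive
+// only when MSAL signals that user interaction is required.
+var accounts = await app.GetAccountsAsync();
+AuthenticationResult result;
+try
+{
+    result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
+        .ExecuteAsync();
+}
+catch (MsalUiRequiredException)
+{
+    // No cached token, expired token, or consent/MFA needed: go interactive.
+    result = await app.AcquireTokenInteractive(scopes)
+        .ExecuteAsync();
+}
+```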
+
+ADAL.NET had less explicit exceptions. For example, when silent authentication failed in ADAL the procedure was to catch the exception and look for the `user_interaction_required` error code:
+
+```csharp
+catch(AdalException exception)
+{
+ if (exception.ErrorCode == "user_interaction_required")
+ {
+        try { /* try to authenticate interactively */ }
+    }
+}
+```
+
+For details, see [the recommended pattern to acquire a token in public client applications](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-a-cached-token#recommended-pattern-to-acquire-a-token) with ADAL.NET.
+
+### Handling claim challenge exceptions
+
+At times, when acquiring a token, Azure AD throws an exception if a resource requires more claims from the user (for instance, two-factor authentication).
+
+In MSAL.NET, claim challenge exceptions are handled in the following way:
+
+- The `Claims` are surfaced in the `MsalServiceException`.
+- There's a `.WithClaims(claims)` method that can be applied to the `AcquireTokenXXX` builders.
+
+For details, see [Handling MsalUiRequiredException](msal-error-handling-dotnet.md#msaluirequiredexception).
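+
+For example, a sketch of honoring a claims challenge in a public client application (again assuming `app`, `scopes`, and `account` are defined elsewhere):
+
+```csharp
+AuthenticationResult result;
+try
+{
+    result = await app.AcquireTokenSilent(scopes, account).ExecuteAsync();
+}
+catch (MsalUiRequiredException ex) when (!string.IsNullOrEmpty(ex.Claims))
+{
+    // Replay the claims returned by Azure AD on the interactive request.
+    result = await app.AcquireTokenInteractive(scopes)
+        .WithClaims(ex.Claims)
+        .ExecuteAsync();
+}
+```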
+
+In ADAL.NET, claim challenge exceptions were handled in the following way:
+
+- `AdalClaimChallengeException` is an exception (deriving from `AdalServiceException`). The `Claims` member contains a JSON fragment with the expected claims.
+- The public client application receiving this exception needed to call the `AcquireTokenInteractive` override that has a claims parameter. This override doesn't even try to hit the cache, because it isn't necessary: the token in the cache doesn't have the right claims (otherwise an `AdalClaimChallengeException` wouldn't have been thrown). The `AdalClaimChallengeException` can be received in a web API doing OBO, but `AcquireTokenInteractive` needs to be called in a public client application calling this web API.
+
+For details, including samples, see [handling AdalClaimChallengeException](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Exceptions-in-ADAL.NET#handling-adalclaimchallengeexception).
++
+## Scopes
+
+ADAL uses the concept of resources with a `resourceId` string; MSAL.NET, however, uses scopes. The logic used by Azure AD is as follows:
+
+- For ADAL (v1.0) endpoint with a v1.0 access token (the only possible), `aud=resource`.
+- For MSAL (v2.0 endpoint) asking an access token for a resource accepting v2.0 tokens, `aud=resource.AppId`.
+- For MSAL (v2.0 endpoint) asking an access token for a resource accepting a v1.0 access token, Azure AD parses the desired audience from the requested scope. This is done by taking everything before the last slash and using it as the resource identifier. As such, if `https://database.windows.net` expects an audience of `https://database.windows.net/`, you'll need to request a scope of `https://database.windows.net//.default` (notice the double slash before `.default`). This is illustrated by examples 1 and 2 below.
+
+### Example 1
+
+If you want to acquire tokens for an application accepting v1.0 tokens (for instance the Microsoft Graph API, which is `https://graph.microsoft.com`), you'd need to create `scopes` by concatenating a desired resource identifier with a desired OAuth2 permission for that resource.
+
+For instance, to access the name of the user via a v1.0 web API whose App ID URI is `ResourceId`, you'd want to use:
+
+```csharp
+var scopes = new [] { ResourceId+"/user_impersonation" };
+```
+
+If you want to read and write Azure Active Directory data with MSAL.NET by using the Microsoft Graph API (`https://graph.microsoft.com/`), you'd create a list of scopes as in the code snippet below:
+
+```csharp
+string ResourceId = "https://graph.microsoft.com/";
+string[] scopes = { ResourceId + "Directory.Read", ResourceId + "Directory.Write" };
+```
+
+### Example 2
+
+If resourceId ends with a '/', you'll need to have a double '/' when writing the scope value. For example, if you want to write the scope corresponding to the Azure Resource Manager API (`https://management.core.windows.net/`), request the following scope (note the two slashes).
+
+```csharp
+var resource = "https://management.core.windows.net/";
+var scopes = new[] {"https://management.core.windows.net//user_impersonation"};
+var result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
+
+// then call the API: https://management.azure.com/subscriptions?api-version=2016-09-01
+```
+
+This is because the Resource Manager API expects a slash in its audience claim (`aud`), and then there's a slash to separate the API name from the scope.
+
+If you want to acquire a token for all the static scopes of a v1.0 application, you'd create your scopes list as shown in the code snippet below:
+
+```csharp
+ResourceId = "someAppIDURI";
+var scopes = new [] { ResourceId+"/.default" };
+```
+
+For a client credential flow, the scope to pass would also be `/.default`. This scope tells Azure AD: "all the app-level permissions that the admin has consented to in the application registration."
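+
+For instance, a minimal sketch of a client credential request, where `confidentialApp` is an `IConfidentialClientApplication` and `ResourceId` is the placeholder App ID URI from the snippet above:
+
+```csharp
+// Client credential flow: request the resource's static app permissions
+// by using the /.default scope.
+var result = await confidentialApp
+    .AcquireTokenForClient(new[] { ResourceId + "/.default" })
+    .ExecuteAsync();
+```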
+
+## Next steps
+
+[Migrate your apps from ADAL to MSAL](msal-net-migration.md)
+[Migrate your ADAL.NET confidential client apps to use MSAL.NET](msal-net-migration-confidential-client.md)
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration-confidential-client.md
+
+ Title: Migrating confidential client applications to MSAL.NET
+
+description: Learn how to migrate a confidential client application from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET).
++++++++ Last updated : 06/08/2021+++
+#Customer intent: As an application developer, I want to migrate my confidential client app from ADAL.NET to MSAL.NET.
++
+# How to migrate confidential client applications from ADAL.NET to MSAL.NET
+
+Confidential client applications are web apps, web APIs, and daemon applications (calling another service on their own behalf). For details, see [Authentication flows and application scenarios](authentication-flows-app-scenarios.md). If your app is based on ASP.NET Core, use [Microsoft.Identity.Web](microsoft-identity-web.md).
+
+The migration process consists of three steps:
+
+1. Inventory - identify the code in your apps that uses ADAL.NET.
+2. Install the MSAL.NET NuGet package.
+3. Update the code depending on your scenario.
+
+For app registrations, if your application isn't dual stacked (AAD and MSA being two apps):
+
+- You don't need to create a new app registration (you keep the same ClientID).
+- You don't need to change the pre-authorizations.
+
+## Step 1 - Find the code using ADAL.NET in your app
+
+The code using ADAL.NET in a confidential client application instantiates an `AuthenticationContext` and calls either `AcquireTokenByAuthorizationCode` or one override of `AcquireTokenAsync` with the following parameters:
+
+- A `resourceId` string. This variable is the **App ID URI** of the web API that you want to call.
+- An instance of `IClientAssertionCertificate` or `ClientAssertion`. This instance provides the client credentials for your app (proving the identity of your app).
+
+## Step 2 - Install the MSAL.NET NuGet package
+
+Once you've identified that you have apps that are using ADAL.NET, install the MSAL.NET NuGet package: [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) and update your project library references.
+For more information on how to install a NuGet package, see [install a NuGet package](https://www.bing.com/search?q=install+nuget+package).
+
+## Step 3 - Update the code
+
+Updating code depends on the confidential client scenario. Some steps are common and apply across all the confidential client scenarios. There are also steps that are unique to each scenario.
+
+The confidential client scenarios are listed below:
+
+- [Daemon scenarios](/active-directory/develop/msal-net-migration-confidential-client?tabs=daemon#migrate-daemon-scenarios) supported by web apps, web APIs, and daemon console applications.
+- [Web API calling downstream web APIs](/active-directory/develop/msal-net-migration-confidential-client?tabs=obo#migrate-on-behalf-of-calls-obo-in-web-apis) supported by web APIs calling downstream web APIs on behalf of the user.
+- [Web app calling web APIs](/active-directory/develop/msal-net-migration-confidential-client?tabs=authcode#migrate-acquiretokenbyauthorizationcodeasync-in-web-apps) supported by web apps that sign in users and call a downstream web API.
+
+You may have provided a wrapper around ADAL.NET to handle certificates and caching. This article uses the same approach to illustrate the ADAL.NET to MSAL.NET migration process. However, this code is for demonstration purposes only. Don't copy/paste these wrappers or integrate them into your code as they are.
+
+## [Daemon](#tab/daemon)
+
+### Migrate daemon apps
+
+Daemon scenarios use the OAuth2.0 [client credential flow](v2-oauth2-client-creds-grant-flow.md). They're also called service to service calls. Your app acquires a token on its own behalf, not on behalf of a user.
+
+#### Find if your code uses daemon scenarios
+
+The ADAL code for your app uses daemon scenarios if it contains a call to `AuthenticationContext.AcquireTokenAsync` with the following parameters:
+
+- A resource (App ID URI) as the first parameter.
+- An `IClientAssertionCertificate` or `ClientAssertion` as the second parameter.
+
+It doesn't have a parameter of type `UserAssertion`. If it does, then your app is a web API, and it's using the [on-behalf-of flow](/active-directory/develop/msal-net-migration-confidential-client?#migrate-on-behalf-of-calls-obo-in-web-apis) scenario.
+
+#### Update the code of daemon scenarios
++
+In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` with a call to `IConfidentialClientApplication.AcquireTokenForClient`.
+
+##### Sample daemon code
+
+The following table compares the ADAL.NET and MSAL.NET code for daemon scenarios.
+
+ :::column span="":::
+ ADAL
+ :::column-end:::
+ :::column span="":::
+ MSAL
+ :::column-end:::
+
+```csharp
+using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading.Tasks;
+
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (AppID)";
+ const string authority
+ = "https://login.microsoftonline.com/{tenant}";
+ // App ID Uri of web API to call
+ const string resourceId = "https://target-api.domain.com";
+ X509Certificate2 certificate = LoadCertificate();
+++
+ public async Task<AuthenticationResult> GetAuthenticationResult()
+ {
++
+ var authContext = new AuthenticationContext(authority);
+ var clientAssertionCert = new ClientAssertionCertificate(
+ ClientId,
+ certificate);
++
+ var authResult = await authContext.AcquireTokenAsync(
+ resourceId,
+        clientAssertionCert
+ );
+
+ return authResult;
+ }
+}
+```
+```csharp
+using Microsoft.Identity.Client;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading.Tasks;
+
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (Application ID)";
+ const string authority
+ = "https://login.microsoftonline.com/{tenant}";
+ // App ID Uri of web API to call
+ const string resourceId = "https://target-api.domain.com";
+ X509Certificate2 certificate = LoadCertificate();
+
+ IConfidentialClientApplication app;
+
+ public async Task<AuthenticationResult> GetAuthenticationResult()
+ {
+ if (app == null)
+ {
+ app = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithCertificate(certificate)
+ .WithAuthority(authority)
+ .Build();
+ }
+
+ var authResult = await app.AcquireTokenForClient(
+ new [] { $"{resourceId}/.default" })
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ return authResult;
+ }
+}
+```
+
+#### Token caching
+
+To benefit from the in-memory cache, the instance of `IConfidentialClientApplication` needs to be kept in a member variable. If you re-create the confidential client application each time you request a token, you won't benefit from the token cache.
+
+You'll need to serialize the AppTokenCache if you choose not to use the default in-memory app token cache. Similarly, If you want to implement a distributed token cache, you'll need to serialize the AppTokenCache. For details see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and this sample [active-directory-dotnet-v1-to-v2/ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
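+
+As an illustration only, a minimal file-based serialization of the app token cache could look like the following sketch. `CacheFilePath` is a placeholder; for production, prefer the serializers referenced above.
+
+```csharp
+// Hook cache serialization on the confidential client application built earlier.
+// Protect the cache file appropriately.
+app.AppTokenCache.SetBeforeAccess(args =>
+{
+    if (System.IO.File.Exists(CacheFilePath))
+    {
+        args.TokenCache.DeserializeMsalV3(System.IO.File.ReadAllBytes(CacheFilePath));
+    }
+});
+
+app.AppTokenCache.SetAfterAccess(args =>
+{
+    if (args.HasStateChanged)   // write back only when MSAL changed the cache
+    {
+        System.IO.File.WriteAllBytes(CacheFilePath, args.TokenCache.SerializeMsalV3());
+    }
+});
+```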
+
+[Learn more about the daemon scenario](scenario-daemon-overview.md) and how it's implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+
+## [Web API calling downstream web APIs](#tab/obo)
+
+### Migrate a web API calling downstream web APIs
+
+Web APIs calling downstream web APIs use the OAuth 2.0 [On-Behalf-Of](v2-oauth2-on-behalf-of-flow.md) (OBO) flow. The code of the web API uses the token retrieved from the HTTP Authorization header and validates it. This token is then exchanged for a token to call the downstream web API. In both ADAL.NET and MSAL.NET, this incoming token is used as a `UserAssertion`.
+
+#### Find if your code uses OBO
+
+The ADAL code for your app uses OBO if it contains a call to `AuthenticationContext.AcquireTokenAsync` with the following parameters:
+
+- A resource (App ID URI) as the first parameter.
+- An `IClientAssertionCertificate` or `ClientAssertion` as the second parameter.
+- A parameter of type `UserAssertion`.
+
+#### Update the code using OBO
++
+In this case, we replace the call to `AuthenticationContext.AcquireTokenAsync` by a call to `IConfidentialClientApplication.AcquireTokenOnBehalfOf`.
+
+##### Sample OBO code
+ :::column span="":::
+ ADAL
+ :::column-end:::
+ :::column span="":::
+ MSAL
+ :::column-end:::
+```csharp
+using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading.Tasks;
+
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (AppID)";
+ const string authority
+ = "https://login.microsoftonline.com/common";
+ X509Certificate2 certificate = LoadCertificate();
+++
+ public async Task<AuthenticationResult> GetAuthenticationResult(
+ string resourceId,
+ string tokenUsedToCallTheWebApi)
+ {
++
+ var authContext = new AuthenticationContext(authority);
+ var clientAssertionCert = new ClientAssertionCertificate(
+ ClientId,
+ certificate);
+++
+ var userAssertion = new UserAssertion(tokenUsedToCallTheWebApi);
+
+ var authResult = await authContext.AcquireTokenAsync(
+ resourceId,
+ clientAssertionCert,
+        userAssertion
+ );
+
+ return authResult;
+ }
+}
+```
+```csharp
+using Microsoft.Identity.Client;
+using System.Security.Cryptography.X509Certificates;
+using System.Threading.Tasks;
+
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (Application ID)";
+ const string authority
+ = "https://login.microsoftonline.com/common";
+ X509Certificate2 certificate = LoadCertificate();
+
+ IConfidentialClientApplication app;
+
+ public async Task<AuthenticationResult> GetAuthenticationResult(
+ string resourceId,
+ string tokenUsedToCallTheWebApi)
+ {
+ if (app == null)
+ {
+ app = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithCertificate(certificate)
+ .WithAuthority(authority)
+ .Build();
+ }
++
+ var userAssertion = new UserAssertion(tokenUsedToCallTheWebApi);
+
+ var authResult = await app.AcquireTokenOnBehalfOf(
+ new string[] { $"{resourceId}/.default" },
+ userAssertion)
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ return authResult;
+ }
+}
+```
+
+#### Token caching
+
+For token caching in OBO scenarios, you need to use a distributed token cache. For details, see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and [read through the sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
+
+```CSharp
+IMsalTokenCacheProvider msalTokenCacheProvider = CreateTokenCache(cacheImplementation);
+msalTokenCacheProvider.Initialize(app.UserTokenCache);
+```
+
+Refer to [code samples](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/ConfidentialClientTokenCache/Program.cs) for an example implementation of `CreateTokenCache`.
+
+[Learn more about web APIs calling downstream web API](scenario-web-api-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+
+## [Web app calling web APIs](#tab/authcode)
+
+### Migrate web apps calling web APIs
+
+If your app uses ASP.NET Core, Microsoft strongly recommends that you update to Microsoft.Identity.Web, which processes everything for you. See [Microsoft identity web GA](https://github.com/AzureAD/microsoft-identity-web/wiki/1.0.0) for a quick presentation, and [https://aka.ms/ms-id-web/webapp](https://aka.ms/ms-id-web/webapp) for details about how to use it in a web app.
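+
+For an ASP.NET Core web app, the Microsoft.Identity.Web wiring is typically a few lines in `Startup.ConfigureServices`. The following sketch assumes an "AzureAd" configuration section and a downstream API scope of `user.read`:
+
+```csharp
+using Microsoft.Identity.Web;
+
+public void ConfigureServices(IServiceCollection services)
+{
+    // Sign in users with the Microsoft identity platform and let
+    // Microsoft.Identity.Web handle token acquisition and caching.
+    services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAd")
+        .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
+        .AddInMemoryTokenCaches();
+
+    services.AddControllersWithViews();
+}
+```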
+
+Web apps that sign in users and call web APIs on behalf of the user use the OAuth2.0 [authorization code flow](v2-oauth2-auth-code-flow.md). Typically:
+
+1. The web app signs in a user by executing the first leg of the auth code flow, going to the Azure AD authorize endpoint. The user signs in and performs multi-factor authentication if needed. As an outcome of this operation, the app receives the **authorization code**. So far, ADAL/MSAL aren't involved.
+2. The app then executes the second leg of the authorization code flow. It uses the authorization code to get an access token, an ID token, and a refresh token. Your application needs to provide the `redirectUri`, which is the URI at which Azure AD will provide the security tokens. Once received, the web app typically calls ADAL/MSAL `AcquireTokenByAuthorizationCode` to redeem the code and get a token that's stored in the token cache.
+3. The app then uses ADAL or MSAL to call `AcquireTokenSilent` to acquire tokens for the web APIs it needs to call. This is done from the web app controllers (see the sketch after this list).
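+
+A sketch of that last step, assuming `app`, `scopes`, and the signed-in user's `accountIdentifier` are available in your controller:
+
+```csharp
+// Get the user's account back from the cache that AcquireTokenByAuthorizationCode
+// populated, then silently acquire a token for the downstream web API.
+var account = await app.GetAccountAsync(accountIdentifier);
+var result = await app.AcquireTokenSilent(scopes, account)
+    .ExecuteAsync();
+```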
+
+#### Find if your code uses the auth code flow
+
+The ADAL code for your app uses auth code flow if it contains a call to `AuthenticationContext.AcquireTokenByAuthorizationCodeAsync`.
+
+#### Update the code using auth code flow
++
+In this case, we replace the call to `AuthenticationContext.AcquireTokenByAuthorizationCodeAsync` with a call to `IConfidentialClientApplication.AcquireTokenByAuthorizationCode`.
+
+##### Sample auth code flow code
+
+ :::column span="":::
+ ADAL
+ :::column-end:::
+ :::column span="":::
+ MSAL
+ :::column-end:::
+ :::column span="":::
+```csharp
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (AppID)";
+ const string authority
+ = "https://login.microsoftonline.com/common";
+ private Uri redirectUri = new Uri("host/login_oidc");
+ X509Certificate2 certificate = LoadCertificate();
+++
+ public async Task<AuthenticationResult> GetAuthenticationResult(
+ string resourceId,
+ string authorizationCode)
+ {
++
+ var ac = new AuthenticationContext(authority);
+ var clientAssertionCert = new ClientAssertionCertificate(
+ ClientId,
+ certificate);
+++
+ var authResult = await ac.AcquireTokenByAuthorizationCodeAsync(
+ authorizationCode,
+ redirectUri,
+ clientAssertionCert,
+        resourceId
+ );
+ return authResult;
+ }
+}
+```
+ :::column-end:::
+ :::column span="":::
+```csharp
+public partial class AuthWrapper
+{
+ const string ClientId = "Guid (Application ID)";
+ const string authority
+ = "https://login.microsoftonline.com/{tenant}";
+ private Uri redirectUri = new Uri("host/login_oidc");
+ X509Certificate2 certificate = LoadCertificate();
+
+ IConfidentialClientApplication app;
+
+ public async Task<AuthenticationResult> GetAuthenticationResult(
+ string resourceId,
+ string authorizationCode)
+ {
+ if (app == null)
+ {
+ app = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithCertificate(certificate)
+ .WithAuthority(authority)
+ .WithRedirectUri(redirectUri.ToString())
+ .Build();
+ }
+
+ var authResult = await app.AcquireTokenByAuthorizationCode(
+ new [] { $"{resourceId}/.default" },
+ authorizationCode)
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ return authResult;
+ }
+}
+```
+ :::column-end:::
+
+Calling `AcquireTokenByAuthorizationCode` adds a token to the token cache. To acquire extra token(s) for other resources or tenants, use `AcquireTokenSilent` in your controllers.
+
+#### Token caching
+
+Since your web app uses `AcquireTokenByAuthorizationCode`, your app needs to use a distributed token cache for token caching. For details, see [token cache for a web app or web API (confidential client application)](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-web-app-or-web-api-confidential-client-application) and [read through the sample code](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache).
++
+```CSharp
+IMsalTokenCacheProvider msalTokenCacheProvider = CreateTokenCache(cacheImplementation);
+msalTokenCacheProvider.Initialize(app.UserTokenCache);
+```
+
+Refer to [code samples](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/ConfidentialClientTokenCache/Program.cs) for an example implementation of `CreateTokenCache`.
+
+[Learn more about web apps calling web APIs](scenario-web-app-call-api-overview.md) and how they're implemented with MSAL.NET or Microsoft.Identity.Web in new applications.
+++
+## MSAL benefits
+
+Some of the key features that come with MSAL.NET are resilience, security, performance, and scalability. These are described below.
+
+### Resilience
+
+Using MSAL.NET helps make your app resilient. This is achieved through the following:
+
+- Azure AD Cached Credential Service (CCS) benefits. CCS operates as an Azure AD backup.
+- Proactive renewal of tokens if the API you call enables long-lived tokens through [continuous access evaluation](app-resilience-continuous-access-evaluation.md) (see the sketch after this list).
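+
+For example, opting into continuous access evaluation is a one-line change when building the application. This is only a sketch; `cp1` is the client capability that signals CAE support:
+
+```csharp
+// Declare the CAE client capability ("cp1") when building the application.
+var app = ConfidentialClientApplicationBuilder.Create(ClientId)
+    .WithCertificate(certificate)
+    .WithAuthority(authority)
+    .WithClientCapabilities(new[] { "cp1" })
+    .Build();
+```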
+
+### Security
+
+You can also acquire Proof of Possession (PoP) tokens if the web API that you want to call requires them. For details, see [Proof Of Possession (PoP) tokens in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Proof-Of-Possession-(PoP)-tokens).
+
+### Performance and scalability
+
+If you don't need to share your cache with ADAL.NET, disable the legacy cache compatibility when creating the confidential client application (`.WithLegacyCacheCompatibility(false)`). This increases the performance significantly.
+
+```csharp
+app = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithCertificate(certificate)
+ .WithAuthority(authority)
+ .WithLegacyCacheCompatibility(false)
+ .Build();
+```
+
+## Troubleshooting
+
+This troubleshooting guide makes two assumptions:
+
+- It assumes that your ADAL.NET code was working.
+- It assumes that you migrated to MSAL keeping the same ClientID.
+
+### AADSTS700027 exception
+
+If you get an exception with the following message: `AADSTS700027: Client assertion contains an invalid signature. [Reason - The key was not found.]`:
+
+- Confirm that you're using the latest version of MSAL.NET.
+- Confirm that the authority host set when building the confidential client application is the same as the authority host you used with ADAL. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
+
+### AADSTS90002 exception
+
+If you get an exception with the following message: `AADSTS90002: Tenant 'cf61953b-e41a-46b3-b500-663d279ea744' not found. This may happen if there are no active subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription administrator.`:
+
+- Confirm that you're using the latest version of MSAL.NET.
+- Confirm that the authority host set when building the confidential client application is the same as the authority host you used with ADAL. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
+
+## Next steps
+
+Learn more about the [Differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md)
active-directory Msal Net Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration.md
Title: Migrating to MSAL.NET
+ Title: Migrating to MSAL.NET and Microsoft.Identity.Web
-description: Learn about the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) and how to migrate to MSAL.NET.
+description: Learn why and how to migrate from Azure AD Authentication Library for .NET (ADAL.NET) to Microsoft Authentication Library for .NET (MSAL.NET) or Microsoft.Identity.Web
Previously updated : 04/10/2019 Last updated : 06/08/2021
-#Customer intent: As an application developer, I want to learn about the differences between the ADAL.NET and MSAL.NET libraries so I can migrate my applications to MSAL.NET.
+#Customer intent: As an application developer, I want to learn why and how to migrate from ADAL.NET to MSAL.NET or Microsoft.Identity.Web libraries.
-# Migrating applications to MSAL.NET
+# Migrating applications to MSAL.NET or Microsoft.Identity.Web
-Both the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have worked with Azure AD for developers platform (v1.0) to authenticate Azure AD identities (work and school accounts) by requesting tokens using Azure AD Authentication Library (ADAL). Using MSAL:
+## Why migrate to MSAL.NET or Microsoft.Identity.Web
-- you can authenticate a broader set of Microsoft identities (Azure AD identities and Microsoft accounts, and social and local accounts through Azure AD B2C) as it uses the Microsoft identity platform,-- your users will get the best single-sign-on experience.-- your application can enable incremental consent, and supporting Conditional Access is easier-- you benefit from the innovation.
+Both the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) are used to authenticate Azure AD entities and request tokens from Azure AD. Up until now, most developers have requested tokens from Azure AD for developers platform (v1.0) using Azure AD Authentication Library (ADAL). These tokens are used to authenticate Azure AD identities (work and school accounts).
-**MSAL.NET or Microsoft.Identity.Web are now the recommended auth libraries to use with the Microsoft identity platform**. No new features will be implemented on ADAL.NET. The efforts are focused on improving MSAL.
+MSAL comes with benefits over ADAL. Some of these benefits are listed below:
-This article describes the differences between the Microsoft Authentication Library for .NET (MSAL.NET) and Azure AD Authentication Library for .NET (ADAL.NET) and helps you migrate to MSAL.
+- You can authenticate a broader set of Microsoft identities: work or school accounts, personal Microsoft accounts, and social or local accounts with Azure AD B2C,
+- Your users will get the best single-sign-on experience,
+- Your application can enable incremental consent and Conditional Access,
+- You benefit from continuous innovation in terms of security and resilience,
+- Your application implements the best practices in terms of resilience and security.
+
+**MSAL.NET or Microsoft.Identity.Web are now the recommended auth libraries to use with the Microsoft identity platform**. No new features will be implemented on ADAL.NET. The efforts are focused on improving MSAL.NET. For details see the announcement: [Update your applications to use Microsoft Authentication Library and Microsoft Graph API](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/update-your-applications-to-use-microsoft-authentication-library/ba-p/1257363)
## Should you migrate to MSAL.NET or to Microsoft.Identity.Web Before digging in the details of MSAL.NET vs ADAL.NET, you might want to check if you want to use MSAL.NET or a higher-level abstraction like [Microsoft.Identity.Web](microsoft-identity-web.md)
-For details about the decision tree below, read [Should I use MSAL.NET only? or a higher level abstraction?](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Is-MSAL.NET-right-for-me%3F)
--
-## Differences between ADAL and MSAL apps
-
-In most cases you want to use MSAL.NET and the Microsoft identity platform, which is the latest generation of Microsoft authentication libraries. Using MSAL.NET, you acquire tokens for users signing-in to your application with Azure AD (work and school accounts), Microsoft (personal) accounts (MSA), or Azure AD B2C.
-
-If you are already familiar with the Azure AD for developers (v1.0) endpoint (and ADAL.NET), you might want to read [What's different about the Microsoft identity platform?](../azuread-dev/azure-ad-endpoint-comparison.md).
-
-However, you still need to use ADAL.NET if your application needs to sign in users with earlier versions of [Active Directory Federation Services (ADFS)](/windows-server/identity/active-directory-federation-services). For more information, see [ADFS support](https://aka.ms/msal-net-adfs-support).
-
-The following picture summarizes some of the differences between ADAL.NET and MSAL.NET for a public client application
-[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a public client application.](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png)](./media/msal-compare-msaldotnet-and-adaldotnet/differences.png#lightbox)
-
-And the following picture summarizes some of the differences between ADAL.NET and MSAL.NET for a confidential client application
-[![Screenshot showing some of the differences between ADAL.NET and MSAL.NET for a confidential client application.](./media/msal-net-migration/confidential-client-application.png)](./media/msal-net-migration/confidential-client-application.png#lightbox)
-
-### NuGet packages and Namespaces
-
-ADAL.NET is consumed from the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package. the namespace to use is `Microsoft.IdentityModel.Clients.ActiveDirectory`.
-
-To use MSAL.NET you will need to add the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) NuGet package, and use the `Microsoft.Identity.Client` namespace. If you are building a confidential client application, you also want to check out [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web).
-
-### Scopes not resources
-
-ADAL.NET acquires tokens for *resources*, but MSAL.NET acquires tokens for *scopes*. A number of MSAL.NET AcquireToken overrides require a parameter called scopes(`IEnumerable<string> scopes`). This parameter is a simple list of strings that declare the desired permissions and resources that are requested. Well-known scopes are the [Microsoft Graph's scopes](/graph/permissions-reference).
-
-It's also possible in MSAL.NET to access v1.0 resources. See details in [Scopes for a v1.0 application](#scopes-for-a-web-api-accepting-v10-tokens).
-
-### Core classes
--- ADAL.NET uses [AuthenticationContext](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD) as the representation of your connection to the Security Token Service (STS) or authorization server, through an Authority. On the contrary, MSAL.NET is designed around [client applications](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications). It provides two separate classes: `PublicClientApplication` and `ConfidentialClientApplication`--- Acquiring Tokens: ADAL.NET and MSAL.NET have the same authentication calls (`AcquireTokenAsync` and `AcquireTokenSilentAsync` for ADAL.NET, and `AcquireTokenInteractive` and `AcquireTokenSilent` in MSAL.NET) but with different parameters required. One difference is the fact that, in MSAL.NET, you no longer have to pass in the `ClientID` of your application in every AcquireTokenXX call. Indeed, the `ClientID` is set only once when building the (`IPublicClientApplication` or `IConfidentialClientApplication`).-
-### IAccount not IUser
-
-ADAL.NET manipulated users. However, a user is a human or a software agent, but it can possess/own/be responsible for one or more accounts in the Microsoft identity system (several Azure AD accounts, Azure AD B2C, Microsoft personal accounts).
-
-MSAL.NET 2.x now defines the concept of Account (through the IAccount interface). This breaking change provides the right semantics: the fact that the same user can have several accounts, in different Azure AD directories. Also MSAL.NET provides better information in guest scenarios, as home account information is provided.
-
-For more information about the differences between IUser and IAccount, see [MSAL.NET 2.x](https://aka.ms/msal-net-2-released).
-
-### Exceptions
-
-#### Interaction required exceptions
-
-MSAL.NET has more explicit exceptions. For example, when silent authentication fails in ADAL the procedure is to catch the exception and look for the `user_interaction_required` error code:
-
-```csharp
-catch(AdalException exception)
-{
- if (exception.ErrorCode == "user_interaction_required")
- {
- try
-        {"try to authenticate interactively"}}
- }
-}
-```
-
-See details in [The recommended pattern to acquire a token](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-a-cached-token#recommended-pattern-to-acquire-a-token) with ADAL.NET
-
-Using MSAL.NET, you catch `MsalUiRequiredException` as described in [AcquireTokenSilent](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-a-cached-token).
-
-```csharp
-catch(MsalUiRequiredException exception)
-{
- try {"try to authenticate interactively"}
-}
-```
-
-#### Handling claim challenge exceptions
-
-In ADAL.NET, claim challenge exceptions are handled in the following way:
--- `AdalClaimChallengeException` is an exception (deriving from `AdalServiceException`) thrown by the service in case a resource requires more claims from the user (for instance two-factors authentication). The `Claims` member contains some JSON fragment with the claims, which are expected.-- Still in ADAL.NET, the public client application receiving this exception needs to call the `AcquireTokenInteractive` override having a claims parameter. This override of `AcquireTokenInteractive` does not even try to hit the cache as it is not necessary. The reason is that the token in the cache does not have the right claims (otherwise an `AdalClaimChallengeException` would not have been thrown). Therefore, there is no need to look at the cache. Note that the `ClaimChallengeException` can be received in a WebAPI doing OBO, whereas the `AcquireTokenInteractive` needs to be called in a public client application calling this web API.-- for details, including samples see Handling [AdalClaimChallengeException](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Exceptions-in-ADAL.NET#handling-adalclaimchallengeexception)-
-In MSAL.NET, claim challenge exceptions are handled in the following way:
--- The `Claims` are surfaced in the `MsalServiceException`.-- There is a `.WithClaim(claims)` method that can apply to the `AcquireTokenInteractive` builder.-
-### Supported grants
-
-Not all the grants are yet supported in MSAL.NET and the v2.0 endpoint. The following is a summary comparing ADAL.NET and MSAL.NET's supported grants.
-
-#### Public client applications
-
-Here are the grants supported in ADAL.NET and MSAL.NET for Desktop and Mobile applications
-
-Grant | ADAL.NET | MSAL.NET
|-- | --
-Interactive | [Interactive Auth](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-interactivelyPublic-client-application-flows) | [Acquiring tokens interactively in MSAL.NET](scenario-desktop-acquire-token.md?tabs=dotnet#acquire-a-token-interactively)
-Integrated Windows Authentication | [Integrated authentication on Windows (Kerberos)](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AcquireTokenSilentAsync-using-Integrated-authentication-on-Windows-(Kerberos)) | [Integrated Windows Authentication](scenario-desktop-acquire-token.md?tabs=dotnet#integrated-windows-authentication)
-Username / Password | [Acquiring tokens with username and password](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-username-and-password)| [Username Password Authentication](scenario-desktop-acquire-token.md?tabs=dotnet#username-and-password)
-Device code flow | [Device profile for devices without web browsers](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Device-profile-for-devices-without-web-browsers) | [Device Code flow](scenario-desktop-acquire-token.md?tabs=dotnet#command-line-tool-without-a-web-browser)
-
-#### Confidential client applications
-
-Here are the grants supported in ADAL.NET, MSAL.NET, and Microsoft.Identity.Web for web applications, web APIs, and daemon applications:
-
-Type of App | Grant | ADAL.NET | MSAL.NET
| -- | -- | --
-Web app, web API, daemon | Client Credentials | [Client credential flows in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Client-credential-flows) | [Client credential flows in MSAL.NET](scenario-daemon-acquire-token.md?tabs=dotnet#acquiretokenforclient-api)
-Web API | On behalf of | [Service to service calls on behalf of the user with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Service-to-service-calls-on-behalf-of-the-user) | [On behalf of in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/on-behalf-of)
-Web app | Auth Code | [Acquiring tokens with authorization codes on web apps with ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Acquiring-tokens-with-authorization-codes-on-web-apps) | [Acquiring tokens with authorization codes on web apps with A MSAL.NET](scenario-web-app-call-api-acquire-token.md?tabs=aspnetcore)
-
-### Cache persistence
-
-ADAL.NET allows you to extend the `TokenCache` class to implement the desired persistence functionality on platforms without a secure storage (.NET Framework and .NET core) by using the `BeforeAccess`, and `BeforeWrite` methods. For details, see [Token Cache Serialization in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization).
-
-MSAL.NET makes the token cache a sealed class, removing the ability to extend it. Therefore, your implementation of token cache persistence must be in the form of a helper class that interacts with the sealed token cache. This interaction is described in [Token Cache Serialization in MSAL.NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization). The serialization will be different for a public client application (See [Token cache for a public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application)), and for a confidential client application (See [Token cache for a web app or web API](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/token-cache-serialization#token-cache-for-a-public-client-application))
-
-## Signification of the common authority
-
-In v1.0, if you use the `https://login.microsoftonline.com/common` authority, you will allow users to sign in with any AAD account (for any organization). See [Authority Validation in ADAL.NET](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/AuthenticationContext:-the-connection-to-Azure-AD#authority-validation)
-
-If you use the `https://login.microsoftonline.com/common` authority in v2.0, you will allow users to sign in with any AAD organization or a Microsoft personal account (MSA). In MSAL.NET, if you want to restrict login to any AAD account (same behavior as with ADAL.NET), use `https://login.microsoftonline.com/organizations`. For details, see the `authority` parameter in [public client application](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Client-Applications#publicclientapplication).
-
-## v1.0 and v2.0 tokens
-
-There are two versions of tokens:
-- v1.0 tokens-- v2.0 tokens-
-The v1.0 endpoint (used by ADAL) only emits v1.0 tokens.
-
-However, the v2.0 endpoint (used by MSAL) emits the version of the token that the web API accepts. A property of the application manifest of the web API enables developers to choose which version of token is accepted. See `accessTokenAcceptedVersion` in the [Application manifest](reference-app-manifest.md) reference documentation.
-
-For more information about v1.0 and v2.0 tokens, see [Azure Active Directory access tokens](access-tokens.md)
-
-## Scopes for a web API accepting v1.0 tokens
-
-OAuth2 permissions are permission scopes that a v1.0 web API (resource) application exposes to client applications. These permission scopes may be granted to client applications during consent. See the section about oauth2Permissions in [Azure Active Directory application manifest](./reference-app-manifest.md).
-
-### Scopes to request access to specific OAuth2 permissions of a v1.0 application
-
-If you want to acquire tokens for an application accepting v1.0 tokens (for instance the Microsoft Graph API, which is https://graph.microsoft.com), you'd need to create `scopes` by concatenating a desired resource identifier with a desired OAuth2 permission for that resource.
-
-For instance, to access in the name of the user a v1.0 web API which App ID URI is `ResourceId`, you'd want to use:
-
-```csharp
-var scopes = new [] { ResourceId+"/user_impersonation" };
-```
-
-If you want to read and write with MSAL.NET Azure Active Directory using the Microsoft Graph API (https://graph.microsoft.com/), you would create a list of scopes like in the following snippet:
-
-```csharp
-string ResourceId = "https://graph.microsoft.com/";
-string[] scopes = { ResourceId + "Directory.Read", ResourceId + "Directory.Write" }
-```
-
-#### Warning: Should you have one or two slashes in the scope corresponding to a v1.0 web API
-
-If you want to write the scope corresponding to the Azure Resource Manager API (https://management.core.windows.net/), request the following scope (note the two slashes).
-
-```csharp
-var scopes = new[] {"https://management.core.windows.net//user_impersonation"};
-var result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
-
-// then call the API: https://management.azure.com/subscriptions?api-version=2016-09-01
-```
-
-This is because the Resource Manager API expects a slash in its audience claim (`aud`), and then there is a slash to separate the API name from the scope.
-
-The logic used by Azure AD is the following:
-- For ADAL (v1.0) endpoint with a v1.0 access token (the only possible), aud=resource-- For MSAL (v2.0 endpoint) asking an access token for a resource accepting v2.0 tokens, aud=resource.AppId-- For MSAL (v2.0 endpoint) asking an access token for a resource accepting a v1.0 access token (which is the case above), Azure AD parses the desired audience from the requested scope by taking everything before the last slash and using it as the resource identifier. Therefore if https:\//database.windows.net expects an audience of "https://database.windows.net/", you'll need to request a scope of https:\//database.windows.net//.default. See also issue #[747](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/747): Resource url's trailing slash is omitted, which caused sql auth failure #747--
-### Scopes to request access to all the permissions of a v1.0 application
-
-For instance, if you want to acquire a token for all the static scopes of a v1.0 application, one would need to use
-
-```csharp
-ResourceId = "someAppIDURI";
-var scopes = new [] { ResourceId+"/.default" };
-```
-
-### Scopes to request in the case of client credential flow / daemon app
-
-In the case of client credential flow, the scope to pass would also be `/.default`. This scope tells to Azure AD: "all the app-level permissions that the admin has consented to in the application registration.
-
-## ADAL to MSAL migration
-
-In ADAL.NET v2.X, the refresh tokens were exposed allowing you to develop solutions around the use of these tokens by caching them and using the `AcquireTokenByRefreshToken` methods provided by ADAL 2.x.
-Some of those solutions were used in scenarios such as:
-* Long running services that do actions including refreshing dashboards on behalf of the users whereas the users are no longer connected.
-* WebFarm scenarios for enabling the client to bring the RT to the web service (caching is done client side, encrypted cookie, and not server side)
-
-MSAL.NET does not expose refresh tokens, for security reasons: MSAL handles refreshing tokens for you.
-
-Fortunately, MSAL.NET now has an API that allows you to migrate your previous refresh tokens (acquired with ADAL) into the `IConfidentialClientApplication`:
-
-```csharp
-/// <summary>
-/// Acquires an access token from an existing refresh token and stores it and the refresh token into
-/// the application user token cache, where it will be available for further AcquireTokenSilent calls.
-/// This method can be used in migration to MSAL from ADAL v2 and in various integration
-/// scenarios where you have a RefreshToken available.
-/// (see https://aka.ms/msal-net-migration-adal2-msal2)
-/// </summary>
-/// <param name="scopes">Scope to request from the token endpoint.
-/// Setting this to null or empty will request an access token, refresh token and ID token with default scopes</param>
-/// <param name="refreshToken">The refresh token from ADAL 2.x</param>
-IByRefreshToken.AcquireTokenByRefreshToken(IEnumerable<string> scopes, string refreshToken);
-```
-
-With this method, you can provide the previously used refresh token along with any scopes (resources) you desire. The refresh token will be exchanged for a new one and cached into your application.
-
-As this method is intended for scenarios that are not typical, it is not readily accessible with the `IConfidentialClientApplication` without first casting it to `IByRefreshToken`.
-
-This code snippet shows some migration code in a confidential client application. `GetCachedRefreshTokenForSignedInUser` retrieve the refresh token that was stored in some storage by a previous version of the application that used to leverage ADAL 2.x. `GetTokenCacheForSignedInUser` deserializes a cache for the signed-in user (as confidential client applications should have one cache per user).
-
-```csharp
-TokenCache userCache = GetTokenCacheForSignedInUser();
-string rt = GetCachedRefreshTokenForSignedInUser();
-
-IConfidentialClientApplication app;
-app = ConfidentialClientApplicationBuilder.Create(clientId)
- .WithAuthority(Authority)
- .WithRedirectUri(RedirectUri)
- .WithClientSecret(ClientSecret)
- .Build();
-IByRefreshToken appRt = app as IByRefreshToken;
-
-AuthenticationResult result = await appRt.AcquireTokenByRefreshToken(null, rt)
- .ExecuteAsync()
- .ConfigureAwait(false);
-```
+For details about the decision tree below, read [MSAL.NET or Microsoft.Identity.Web?](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/MSAL.NET-or-Microsoft.Identity.Web)
-You will see an access token and ID token returned in your AuthenticationResult while your new refresh token is stored in the cache.
+!["Block diagram explaining how to choose if you need to use MSAL.NET and Microsoft.Identity.Web or both when migrating from ADAL.NET"](media/msal-net-migration/decision-diagram.png)
-You can also use this method for various integration scenarios where you have a refresh token available.
+<!-- 1P
+## Examples of 1P Migrations
+[See examples](https://identitydivision.visualstudio.com/DevEx/_wiki/wikis/DevEx.wiki/20413/1P-ADAL.NET-to-MSAL.NET-migration-examples) of other 1P teams who have already, or are currently, migrating from ADAL to one of the MSAL+ solutions above. See their code, and in some cases read about their migration story.
+ -->
+
## Next steps
-You can find more information about the scopes in [Scopes, permissions, and consent in the Microsoft identity platform](v2-permissions-and-consent.md)
+- Learn how to [migrate confidential client applications built on top of ASP.NET MVC or .NET classic from ADAL.NET to MSAL.NET](msal-net-migration-confidential-client.md).
+- Learn more about the [Differences between ADAL.NET and MSAL.NET apps](msal-net-differences-adal-net.md).
+- Learn how to migrate confidential client applications built on top of ASP.NET Core from ADAL.NET to Microsoft.Identity.Web:
+ - [Web apps](https://github.com/AzureAD/microsoft-identity-web/wiki/web-apps#migrating-from-previous-versions--adding-authentication)
+ - [Web APIs](https://github.com/AzureAD/microsoft-identity-web/wiki/web-apis)
active-directory Reply Url https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reply-url.md
A redirect URI, or reply URL, is the location where the authorization server sen
* The redirect URI is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the redirect URI. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
+* A redirect URI without a path segment gets a trailing slash appended to it in the response. For example, URIs such as https://contoso.com and http://localhost:7071 will be returned as https://contoso.com/ and http://localhost:7071/, respectively. This applies only when the response mode is query or fragment.
+
+* Redirect URIs that contain a path segment don't get a trailing slash appended. (For example, https://contoso.com/abc and https://contoso.com/abc/response-oidc will be used as-is in the response.)
+ ## Maximum number of redirect URIs This table shows the maximum number of redirect URIs you can add to an app registration in the Microsoft identity platform.
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
A service-to-service request for a SAML assertion contains the following paramet
| assertion |required | The value of the access token used in the request.| | client_id |required | The app ID assigned to the calling service during registration with Azure AD. To find the app ID in the Azure portal, select **Active Directory**, choose the directory, and then select the application name. | | client_secret |required | The key registered for the calling service in Azure AD. This value should have been noted at the time of registration. |
-| resource |required | The app ID URI of the receiving service (secured resource). This is the resource that will be the Audience of the SAML token. To find the app ID URI in the Azure portal, select **Active Directory** and choose the directory. Select the application name, choose **All settings**, and then select **Properties**. |
+| scope |required | A space-separated list of scopes for the token request. For more information, see [scopes](v2-permissions-and-consent.md). For example, `https://testapp.contoso.com/user_impersonation openid`. |
| requested_token_use |required | Specifies how the request should be processed. In the On-Behalf-Of flow, the value must be **on_behalf_of**. | | requested_token_type | required | Specifies the type of token requested. The value can be **urn:ietf:params:oauth:token-type:saml2** or **urn:ietf:params:oauth:token-type:saml1** depending on the requirements of the accessed resource. |
Learn more about the OAuth 2.0 protocol and another way to perform service to se
* [OAuth 2.0 client credentials grant in Microsoft identity platform](v2-oauth2-client-creds-grant-flow.md) * [OAuth 2.0 code flow in Microsoft identity platform](v2-oauth2-auth-code-flow.md)
-* [Using the `/.default` scope](v2-permissions-and-consent.md#the-default-scope)
+* [Using the `/.default` scope](v2-permissions-and-consent.md#the-default-scope)
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
If you accidentally deleted the `aad-extensions-app`, you have 30 days to recove
You should now see the restored app in the Azure portal.
-## A guest user was invited succesfully but the email attribute is not populating
+## A guest user was invited successfully but the email attribute is not populating
Let's say you inadvertently invite a guest user with an email address that matches a user object already in your directory. The guest user object is created, but the email address is added to the `otherMail` property instead of to the `mail` or `proxyAddresses` properties. To avoid this issue, you can search for conflicting user objects in your Azure AD directory by using these PowerShell steps:
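The article's full PowerShell steps are not reproduced in this digest. As a minimal sketch (assuming the AzureAD PowerShell module and a placeholder address), you could look for existing user objects that already reference the invited email in their `mail`, `proxyAddresses`, or `otherMails` values:

```powershell
# Assumption: the AzureAD module is installed and Connect-AzureAD has been run.
$email = "guest.user@fabrikam.com"   # placeholder - the address you invited

# Client-side filter over all users; fine for small tenants, slower for very large ones.
Get-AzureADUser -All $true | Where-Object {
    $_.Mail -eq $email -or
    $_.OtherMails -contains $email -or
    ($_.ProxyAddresses -match [regex]::Escape($email))
} | Select-Object DisplayName, UserPrincipalName, Mail, OtherMails
```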
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Tenant administrators can now use Staged Rollout to deploy Email Sign-In with Pr
**Service category:** Reporting **Product capability:** Monitoring & Reporting
-With the initial preview release of the Sign-in Diagnostic, admins can now review user sign-ins. Admins can receive contextual, specific, and relevant details and guidance on what happened during a sign-in and how to fix problems. The diagnostic is available in both the Azure AD level, and Conditional Access Diagnose and Solve blades. The diagnostic scenarios covered in this release are Conditional Access, Multi-Factor Authentication, and successful sign-in.
+With the initial preview release of the Sign-in Diagnostic, admins can now review user sign-ins. Admins can receive contextual, specific, and relevant details and guidance on what happened during a sign-in and how to fix problems. The diagnostic is available in both the Azure AD level and Conditional Access Diagnose and Solve blades. The diagnostic scenarios covered in this release are Conditional Access, Azure Active Directory Multi-factor Authentication, and successful sign-in.
For more information, see [What is sign-in diagnostic in Azure AD?](../reports-monitoring/overview-sign-in-diagnostics.md).
Manually created connected organizations will have a default setting of "configu
Risk-based Conditional Access and risk detection features of Identity Protection are now available in [Azure AD B2C](../..//active-directory-b2c/conditional-access-identity-protection-overview.md). With these advanced security features, customers can now: - Leverage intelligent insights to assess risk with B2C apps and end user accounts. Detections include atypical travel, anonymous IP addresses, malware-linked IP addresses, and Azure AD threat intelligence. Portal and API-based reports are also available.-- Automatically address risks by configuring adaptive authentication policies for B2C users. App developers and administrators can mitigate real-time risk by requiring multi-factor authentication (MFA) or blocking access depending on the user risk level detected, with additional controls available based on location, group, and app.
+- Automatically address risks by configuring adaptive authentication policies for B2C users. App developers and administrators can mitigate real-time risk by requiring Azure Active Directory Multi-factor Authentication (MFA) or blocking access depending on the user risk level detected, with additional controls available based on location, group, and app.
- Integrate with Azure AD B2C user flows and custom policies. Conditions can be triggered from built-in user flows in Azure AD B2C or can be incorporated into B2C custom policies. As with other aspects of the B2C user flow, end user experience messaging can be customized. Customization is according to the organization's voice, brand, and mitigation alternatives.
MSAL.js version 2.x now includes support for the authorization code flow for sin
-### Updates to Remember Multi-Factor Authentication (MFA) on a trusted device setting
+### Updates to Remember Azure Active Directory Multi-factor Authentication (MFA) on a trusted device setting
**Type:** Changed feature **Service category:** MFA **Product capability:** Identity Security & Protection
-We've recently updated the [remember Multi-Factor Authentication (MFA)](../authentication/howto-mfa-mfasettings.md#remember-multi-factor-authentication) on a trusted device feature to extend authentication for up to 365 days. Azure Active Directory (Azure AD) Premium licenses, can also use the [Conditional Access ΓÇô Sign-in Frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency) that provides more flexibility for reauthentication settings.
+We've recently updated the [remember Azure Active Directory Multi-factor Authentication (MFA)](../authentication/howto-mfa-mfasettings.md#remember-multi-factor-authentication) on a trusted device feature to extend authentication for up to 365 days. Customers with Azure Active Directory (Azure AD) Premium licenses can also use the [Conditional Access – Sign-in Frequency policy](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency), which provides more flexibility for reauthentication settings.
For the optimal user experience, we recommend using Conditional Access sign-in frequency to extend session lifetimes on trusted devices, locations, or low-risk sessions as an alternative to the remember MFA on a trusted device setting. To get started, review our [latest guidance on optimizing the reauthentication experience](../authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md).
Continuous access evaluation (CAE) is now available in public preview for Azure
**Service category:** User Access Management **Product capability:** Entitlement Management
-Administrators can now require that users requesting an access package answer additional questions beyond just business justification in Azure AD Entitlement management's My Access portal. The users' answers will then be shown to the approvers to help them make a more accurate access approval decision. To learn more, see [Collect additional requestor information for approval (preview)](../governance/entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval-preview).
+Administrators can now require that users requesting an access package answer additional questions beyond just business justification in Azure AD Entitlement management's My Access portal. The users' answers will then be shown to the approvers to help them make a more accurate access approval decision. To learn more, see [Collect additional requestor information for approval](../governance/entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval).
This experience will be changed to display only the resources currently added in
## August 2020
-### Updates to Azure Multi-Factor Authentication Server firewall requirements
+### Updates to Azure Active Directory Multi-factor Authentication Server firewall requirements
**Type:** Plan for change **Service category:** MFA
This experience will be changed to display only the resources currently added in
Starting 1 October 2020, Azure MFA Server firewall requirements will require additional IP ranges.
-If you have outbound firewall rules in your organization, update the rules so that your MFA servers can communicate with all the necessary IP ranges. The IP ranges are documented in [Azure Multi-Factor Authentication Server firewall requirements](../authentication/howto-mfaserver-deploy.md#azure-multi-factor-authentication-server-firewall-requirements).
+If you have outbound firewall rules in your organization, update the rules so that your MFA servers can communicate with all the necessary IP ranges. The IP ranges are documented in [Azure Active Directory Multi-factor Authentication Server firewall requirements](../authentication/howto-mfaserver-deploy.md#azure-multi-factor-authentication-server-firewall-requirements).
You can now view role assignments across all scopes for a role in the "Roles and
-### Azure Multi-Factor Authentication Software Development (Azure MFA SDK) Deprecation
+### Azure Active Directory Multi-factor Authentication Software Development (Azure MFA SDK) Deprecation
**Type:** Deprecated **Service category:** MFA **Product capability:** Identity Security & Protection
-The Azure Multi-Factor Authentication Software Development (Azure MFA SDK) reached the end of life on November 14th, 2018, as first announced in November 2017. Microsoft will be shutting down the SDK service effective on September 30th, 2020. Any calls made to the SDK will fail.
+The Azure Active Directory Multi-factor Authentication Software Development (Azure MFA SDK) reached the end of life on November 14th, 2018, as first announced in November 2017. Microsoft will be shutting down the SDK service effective on September 30th, 2020. Any calls made to the SDK will fail.
If your organization is using the Azure MFA SDK, you need to migrate by September 30th, 2020: - Azure MFA SDK for MIM: If you use the SDK with MIM, you should migrate to Azure MFA Server and activate Privileged Access Management (PAM) following these [instructions](/microsoft-identity-manager/working-with-mfaserver-for-mim).
We've released an updated version of Azure AD Connect for auto-upgrade customers
-### Azure Multi-Factor Authentication (MFA) Server, version 8.0.2 is now available
+### Azure Active Directory Multi-factor Authentication (MFA) Server, version 8.0.2 is now available
**Type:** Fixed **Service category:** MFA
New user interface changes are coming to the design of the **Add from the galler
**Service category:** MFA **Product capability:** Identity Security & Protection
-We're removing the MFA server IP address from the [Office 365 IP Address and URL Web service](/office365/enterprise/office-365-ip-web-service). If you currently rely on these pages to update your firewall settings, you must make sure you're also including the list of IP addresses documented in the **Azure Multi-Factor Authentication Server firewall requirements** section of the [Getting started with the Azure Multi-Factor Authentication Server](../authentication/howto-mfaserver-deploy.md#azure-multi-factor-authentication-server-firewall-requirements) article.
+We're removing the MFA server IP address from the [Office 365 IP Address and URL Web service](/office365/enterprise/office-365-ip-web-service). If you currently rely on these pages to update your firewall settings, you must make sure you're also including the list of IP addresses documented in the **Azure Active Directory Multi-factor Authentication Server firewall requirements** section of the [Getting started with the Azure Active Directory Multi-factor Authentication Server](../authentication/howto-mfaserver-deploy.md#azure-multi-factor-authentication-server-firewall-requirements) article.
For more information about setting up your company branding, see [Add branding t
-### Azure Multi-Factor Authentication (MFA) Server is no longer available for new deployments
+### Azure Active Directory Multi-factor Authentication (MFA) Server is no longer available for new deployments
**Type:** Deprecated **Service category:** MFA
For more information about setting up your company branding, see [Add branding t
As of July 1, 2019, Microsoft will no longer offer MFA Server for new deployments. New customers who want to require multi-factor authentication in their organization must now use cloud-based Azure AD Multi-Factor Authentication. Customers who activated MFA Server prior to July 1 won't see a change. You'll still be able to download the latest version, get future updates, and generate activation credentials.
-For more information, see [Getting started with the Azure Multi-Factor Authentication Server](../authentication/howto-mfaserver-deploy.md). For more information about cloud-based Azure AD Multi-Factor Authentication, see [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
+For more information, see [Getting started with the Azure Active Directory Multi-factor Authentication Server](../authentication/howto-mfaserver-deploy.md). For more information about cloud-based Azure AD Multi-Factor Authentication, see [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
For more information, see:
**Service category:** Multi-factor authentication **Product capability:** User authentication
-The Network Policy Server extension for Azure AD Multi-Factor Authentication adds cloud-based Multi-Factor Authentication capabilities to your authentication infrastructure by using your existing servers. With the Network Policy Server extension, you can add phone call, text message, or phone app verification to your existing authentication flow. You don't have to install, configure, and maintain new servers.
+The Network Policy Server extension for Azure Active Directory (Azure AD) Multi-Factor Authentication adds cloud-based Multi-Factor Authentication capabilities to your authentication infrastructure by using your existing servers. With the Network Policy Server extension, you can add phone call, text message, or phone app verification to your existing authentication flow. You don't have to install, configure, and maintain new servers.
-This extension was created for organizations that want to protect virtual private network connections without deploying the Azure Multi-Factor Authentication Server. The Network Policy Server extension acts as an adapter between RADIUS and cloud-based Azure AD Multi-Factor Authentication to provide a second factor of authentication for federated or synced users.
+This extension was created for organizations that want to protect virtual private network connections without deploying the Azure Active Directory Multi-factor Authentication Server. The Network Policy Server extension acts as an adapter between RADIUS and cloud-based Azure AD Multi-Factor Authentication to provide a second factor of authentication for federated or synced users.
For more information, see [Integrate your existing Network Policy Server infrastructure with Azure AD Multi-Factor Authentication](../authentication/howto-mfa-nps-extension.md).
Due to a service issue, this functionality was temporarily disabled. The issue w
**Service category:** Multi-factor authentication **Product capability:** Identity security and protection
-Multi-factor authentication (MFA) is an essential part of protecting your organization. To make credentials more adaptive and the experience more seamless, the following features were added:
+Azure Active Directory (Azure AD) Multi-factor authentication (MFA) is an essential part of protecting your organization. To make credentials more adaptive and the experience more seamless, the following features were added:
- Multi-factor challenge results are directly integrated into the Azure AD sign-in report, which includes programmatic access to MFA results. - The MFA configuration is more deeply integrated into the Azure AD configuration experience in the Azure portal.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
For more information, see [Enable support for TLS 1.2 in your environment for Az
**Service category:** Other **Product capability:** Entitlement Management
-For organizations using multi-geo SharePoint Online, you can now include sites from specific multi-geo environments to your Entitlement management access packages. [Learn more](../governance/entitlement-management-catalog-create.md#add-a-multi-geo-sharepoint-site-preview).
+For organizations using multi-geo SharePoint Online, you can now include sites from specific multi-geo environments to your Entitlement management access packages. [Learn more](../governance/entitlement-management-catalog-create.md#add-a-multi-geo-sharepoint-site).
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
#Customer intent: As an administrator, I want detailed information about how I can edit an access package so that requestors have the resources they need to perform their job.
-# Change approval and requestor information (preview) settings for an access package in Azure AD entitlement management
+# Change approval and requestor information settings for an access package in Azure AD entitlement management
As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new policy.
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
1. Click **Next**.
-## Collect additional requestor information for approval (preview)
+## Collect additional requestor information for approval
To make sure users are getting access to the right access packages, you can require requestors to answer custom text or multiple-choice questions at the time of request. There is a limit of 20 questions per policy and a limit of 25 answers for multiple-choice questions. The questions are then shown to approvers to help them make a decision.
In order to make sure users are getting access to the right access packages, you
1. Select the **Answer format** in which you would like requestors to answer. Answer formats include: *short text*, *multiple choice*, and *long text*.
- ![Access package - Policy- Select view and edit multiple choice answer format](./media/entitlement-management-access-package-approval-policy/answer-format-view-edit.png)
+ ![Access package - Policy- Select Edit and localize multiple choice answer format](./media/entitlement-management-access-package-approval-policy/answer-format-view-edit.png)
-1. If selecting multiple choice, click on the **view and edit** button to configure the answer options.
- 1. After selecting view and edit the **View/edit question** pane will open.
+1. If you select multiple choice, click the **Edit and localize** button to configure the answer options.
+ 1. After selecting **Edit and localize**, the **View/edit question** pane will open.
1. Type in the response options you wish to give the requestor when answering the question in the **Answer values** boxes.
- 1. Type in as many responses as you need then click **Save**.
+ 1. Type in as many responses as you need.
+ 1. If you would like to add your own localization for the multiple choice options, select the **Optional language code** for the language in which you want to localize a specific option.
+ 1. In the language you configured, type the option in the Localized text box.
+ 1. Once you have added all of the localizations needed for each multiple choice option, click **Save**.
![Access package - Policy- Enter multiple choice options](./media/entitlement-management-access-package-approval-policy/answer-multiple-choice.png)
In order to make sure users are getting access to the right access packages, you
1. Fill out the remaining tabs (e.g., Lifecycle) based on your needs.
-After you have configured requestor information in your access package policy, can view the requestor's responses to the questions. For guidance on seeing requestor information, see [View requestor's answers to questions (Preview)](entitlement-management-request-approve.md#view-requestors-answers-to-questions-preview).
+After you have configured requestor information in your access package policy, you can view the requestors' responses to the questions. For guidance on seeing requestor information, see [View requestor's answers to questions](entitlement-management-request-approve.md#view-requestors-answers-to-questions).
## Next steps - [Change lifecycle settings for an access package](entitlement-management-access-package-lifecycle-policy.md)
active-directory Entitlement Management Access Package Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-assignments.md
na
ms.devlang: na Previously updated : 06/18/2020 Last updated : 04/12/2021
In some cases, you might want to directly assign specific users to an access pac
![Assignments - Add user to access package](./media/entitlement-management-access-package-assignments/assignments-add-user.png)
-1. Click **Add users** to select the users you want to assign this access package to.
+1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can click **Create new policy** to add a new policy.
-1. In the **Select policy** list, select a policy that the users' future requests and lifecycle will be governed and tracked by. If you want the selected users to have different policy settings, you can click **Create new policy** to add a new policy.
+1. Once you select a policy, you can click **Add users** to select the users you want to assign this access package to under the chosen policy.
+
+ > [!NOTE]
+ > If you select a policy with questions, you can only assign one user at a time.
1. Set the date and time you want the selected users' assignment to start and end. If an end date is not provided, the policy's lifecycle settings will be used.
-1. Optionally provide a justification for your direct assignment for record keeping.
+1. Optionally provide a justification for your direct assignment for record keeping.
+
+1. If the selected policy includes additional requestor information, click **View questions** to answer them on behalf of the users, then click **Save**.
+
+ ![Assignments - click view questions](./media/entitlement-management-access-package-assignments/assignments-view-questions.png)
+
+ ![Assignments - questions pane](./media/entitlement-management-access-package-assignments/assignments-questions-pane.png)
1. Click **Add** to directly assign the selected users to the access package.
You can also directly assign a user to an access package using Microsoft Graph.
### Removing an assignment programmatically
-You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that appplication permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
+You can also remove an assignment of a user to an access package using Microsoft Graph. A user in an appropriate role with an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, or an application with that application permission, can call the API to [create an accessPackageAssignmentRequest](/graph/api/accesspackageassignmentrequest-post?view=graph-rest-beta&preserve-view=true). In this request, the value of the `requestType` property should be `AdminRemove`, and the `accessPackageAssignment` property is a structure that contains the `id` property identifying the `accessPackageAssignment` being removed.
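As an illustrative sketch only (the bearer token and assignment ID are placeholders; confirm the exact endpoint and payload against the linked API reference), such a removal request could be sent with PowerShell like this:

```powershell
# Placeholders - supply a Graph token with EntitlementManagement.ReadWrite.All
# and the ID of the access package assignment to remove.
$token        = "<graph-access-token>"
$assignmentId = "<access-package-assignment-id>"

$body = @{
    requestType             = "AdminRemove"
    accessPackageAssignment = @{ id = $assignmentId }
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```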
## Next steps
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
Follow these steps if you want to allow users not in your directory to request t
1. Once you've selected all your connected organizations, click **Select**. > [!NOTE]
- > All users from the selected connected organizations will be able to request this access package. This includes users in Azure AD from all subdomains associated with the organization, unless those domains are blocked by the Azure B2B allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
+ > All users from the selected connected organizations will be able to request this access package. This includes users in Azure AD from all subdomains associated with the organization, unless those domains are blocked by the Azure B2B allowlist or blocklist. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md).
1. If you want to require approval, use the steps in [Change approval settings for an access package in Azure AD entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings.
To change the request and approval settings for an access package, you need to o
1. Click **Next**.
-1. If you want to require requestors to provide additional information when requesting access to an access package, use the steps in [Change approval and requestor information (preview) settings for an access package in Azure AD entitlement management](entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval-preview)
- to configure requestor information (preview).
+1. If you want to require requestors to provide additional information when requesting access to an access package, use the steps in [Change approval and requestor information settings for an access package in Azure AD entitlement management](entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval)
+ to configure requestor information.
1. Configure lifecycle settings.
active-directory Entitlement Management Access Reviews Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-create.md
This setting determines how often access reviews will occur.
1. Set the **Duration** to define how many days each review of the recurring series will be open for input from reviewers. For example, you might schedule an annual review that starts on January 1st and is open for review for 30 days so that reviewers have until the end of the month to respond.
-1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer.
+1. Next to **Reviewers**, select **Self-review** if you want users to perform their own access review or select **Specific reviewer(s)** if you want to designate a reviewer. You can also select **Manager (Preview)** if you want to designate the reviewee's manager to be the reviewer. If you select this option, you need to add a **fallback** to forward the review to in case the manager cannot be found in the system.
![Select Add reviewers](./media/entitlement-management-access-reviews/access-reviews-add-reviewer.png)
This setting determines how often access reviews will occur.
![Specify the reviewers](./media/entitlement-management-access-reviews/access-reviews-select-reviewer.png)
+1. If you selected **Manager (Preview)**, specify the fallback reviewer:
+ 1. Select **Add fallback reviewers**.
+ 1. In the Select fallback reviewers pane, search for and select the user(s) you want to be fallback reviewer(s) for the reviewee’s manager.
+ 1. When you've selected your fallback reviewer(s), click the **Select** button.
+
+ ![Add the fallback reviewers](./media/entitlement-management-access-reviews/access-reviews-add-fallback-manager.png)
+
+1. Click **Show advanced access review settings (Preview)** to show additional settings.
+
+ ![Show the advanced review settings](./media/entitlement-management-access-reviews/access-reviews-advanced-settings.png)
+ 1. Click **Review + Create** if you are creating a new access package or **Update** if you are editing an access package, at the bottom of the page.
+> [!NOTE]
+> In Azure AD Entitlement Management, the result of an access package review is always auto-applied to the users assigned to the package, according to the setting selected in **If reviewers don't respond**. When the review setting of **If reviewers don't respond** is set to **No change**, this is equivalent to the system approving continued access for the users being reviewed.
+ ## View the status of the access review After the start date, an access review will be listed in the **Access reviews** section. Follow these steps to view the status of an access review:
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-catalog-create.md
To include resources in an access package, the resources must exist in a catalog
These resources can now be included in access packages within the catalog.
-### Add a Multi-geo SharePoint Site (Preview)
+### Add a Multi-geo SharePoint Site
1. If you have [Multi-Geo](/microsoft-365/enterprise/multi-geo-capabilities-in-onedrive-and-sharepoint-online-in-microsoft-365) enabled for SharePoint, select the environment you would like to select sites from.
- :::image type="content" source="media/entitlement-management-catalog-create/sharepoint-multigeo-select.png" alt-text="Access package - Add resource roles - Select SharePoint Multi-geo sites":::
+ :::image type="content" source="media/entitlement-management-catalog-create/sharepoint-multi-geo-select.png" alt-text="Access package - Add resource roles - Select SharePoint Multi-geo sites":::
1. Then select the sites you would like to be added to the catalog.
active-directory Entitlement Management Request Approve https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-request-approve.md
If you don't have the email, you can find the access requests pending your appro
1. On the **Pending** tab, find the request.
-## View requestor's answers to questions (Preview)
+## View requestor's answers to questions
1. Navigate to the **Approvals** tab in My Access.
active-directory Howto Assign Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md
editor: + ms.devlang: na na Previously updated : 11/03/2020 Last updated : 06/24/2021
After you've enabled managed identity on an Azure resource, such as an [Azure VM
2. Navigate to the desired resource on which you want to modify access control. In this example, we are giving an Azure virtual machine access to a storage account, so we navigate to the storage account.
-3. Select the **Access control (IAM)** page of the resource, and select **+ Add role assignment**. Then specify the **Role**, **Assign access to**, and specify the corresponding **Subscription**. Under the search criteria area, you should see the resource. Select the resource, and select **Save**.
+1. Select **Access control (IAM)**.
- ![Access control (IAM) screenshot](./media/msi-howto-assign-access-portal/assign-access-control-iam-blade-before.png)
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+
+1. Select the role and managed identity. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
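If you prefer scripting to the portal, the equivalent role assignment can be sketched with Azure PowerShell. This assumes the Az module; the principal ID and scope below are placeholders, and the role shown is only an example.

```powershell
# Assumptions: Az module installed and Connect-AzAccount already run.
$principalId = "<object-id-of-the-vm-managed-identity>"
$scope       = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

# Grant the managed identity a role on the storage account.
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope
```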
## Next steps
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
editor: daveba + ms.devlang: na
Later we will upload and download a file to the new storage account. Because fil
Azure Storage does not natively support Azure AD authentication. However, you can use your VM's system-assigned managed identity to retrieve a storage SAS from Resource Manager, then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. Grant access by assigning the [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role to the managed-identity at the scope of the resource group that contains your storage account.
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).ΓÇ¥
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
>[!NOTE] > For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
active-directory Tutorial Vm Windows Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md
editor: daveba + ms.devlang: na na Previously updated : 01/14/2020 Last updated : 06/24/2021
Files require blob storage so you need to create a blob container in which to st
This section shows how to grant your VM access to an Azure Storage container. You can use the VM's system-assigned managed identity to retrieve the data in the Azure storage blob. 1. Navigate back to your newly created storage account.
-2. Click the **Access control (IAM)** link in the left panel.
-3. Click **+ Add role assignment** on top of the page to add a new role assignment for your VM.
-4. Under **Role**, from the dropdown, select **Storage Blob Data Reader**.
-5. In the next dropdown, under **Assign access to**, choose **Virtual Machine**.
-6. Next, ensure the proper subscription is listed in **Subscription** dropdown and then set **Resource Group** to **All resource groups**.
-7. Under **Select**, choose your VM and then click **Save**.
-
- ![Assign permissions](./media/tutorial-linux-vm-access-storage/access-storage-perms.png)
+1. Click **Access control (IAM)**.
+1. Click **Add** > **Add role assignment** to open the Add role assignment page.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Storage Blob Data Reader |
+ | Assign access to | Managed identity |
+ | System-assigned | Virtual Machine |
+ | Select | &lt;your virtual machine&gt; |
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Access data 
active-directory Tutorial Windows Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-storage-sas.md
ms.devlang: na
na Previously updated : 12/15/2020 Last updated : 06/24/2021 -+ # Tutorial: Use a Windows VM system-assigned managed identity to access Azure Storage via a SAS credential
Later we will upload and download a file to the new storage account. Because fil
Azure Storage does not natively support Azure AD authentication. However, you can use a managed identity to retrieve a storage SAS from Resource Manager, then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. 1. Navigate back to your newly created storage account.  
-2. Click the **Access control (IAM)** link in the left panel.
-3. Click **+ Add role assignment** on top of the page to add a new role assignment for your VM
-4. Set **Role** to "Storage Account Contributor", on the right side of the page.
-5. In the next dropdown, set **Assign access to** the resource "Virtual Machine".
-6. Next, ensure the proper subscription is listed in **Subscription** dropdown, then set **Resource Group** to "All resource groups".
-7. Finally, under **Select** choose your Windows Virtual Machine in the dropdown, then click **Save**.
-
- ![Alt image text](./media/msi-tutorial-linux-vm-access-storage/msi-storage-role-sas.png)
+1. Click **Access control (IAM)**.
+1. Click **Add** > **Add role assignment** to open the Add role assignment page.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Storage Account Contributor |
+ | Assign access to | Managed identity |
+ | System-assigned | Virtual Machine |
+ | Select | &lt;your Windows virtual machine&gt; |
+
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
## Get an access token using the VM's identity and use it to call Azure Resource Manager 
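As a minimal sketch of the step this heading covers (run from PowerShell on the VM itself; the Azure Instance Metadata Service endpoint is only reachable from inside the VM), the Resource Manager token can be requested like this:

```powershell
# Query the Azure Instance Metadata Service (IMDS) for a token scoped to Azure Resource Manager.
$response = Invoke-RestMethod -Method Get `
    -Headers @{ Metadata = "true" } `
    -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F'

# Bearer token to use in the Authorization header when calling Azure Resource Manager.
$armToken = $response.access_token
```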
active-directory Pim Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-configure.md
Previously updated : 06/15/2021 Last updated : 06/25/2021
Privileged Identity Management provides time-based and approval-based role activ
- Get **notifications** when privileged roles are activated - Conduct **access reviews** to ensure users still need roles - Download **audit history** for internal or external audit
+- Prevent removal of the **last active Global Administrator** role assignment
## What can I do with it?
For Azure AD roles in Privileged Identity Management, only a user who is in the
For Azure resource roles in Privileged Identity Management, only a subscription administrator, a resource Owner, or a resource User Access administrator can manage assignments for other administrators. Users who are Privileged Role Administrators, Security Administrators, or Security Readers do not by default have access to view assignments to Azure resource roles in Privileged Identity Management.
+## Terminology
+
+To better understand Privileged Identity Management and its documentation, you should review the following terms.
+
+| Term or concept | Role assignment category | Description |
+| | | |
+| eligible | Type | A role assignment that requires a user to perform one or more actions to use the role. If a user has been made eligible for a role, that means they can activate the role when they need to perform privileged tasks. There's no difference in the access given to someone with a permanent versus an eligible role assignment. The only difference is that some people don't need that access all the time. |
+| active | Type | A role assignment that doesn't require a user to perform any action to use the role. Users assigned as active have the privileges assigned to the role. |
+| activate | | The process of performing one or more actions to use a role that a user is eligible for. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers. |
+| assigned | State | A user that has an active role assignment. |
+| activated | State | A user that has an eligible role assignment, performed the actions to activate the role, and is now active. Once activated, the user can use the role for a pre-configured period of time before they need to activate again. |
+| permanent eligible | Duration | A role assignment where a user is always eligible to activate the role. |
+| permanent active | Duration | A role assignment where a user can always use the role without performing any actions. |
+| time-bound eligible | Duration | A role assignment where a user is eligible to activate the role only within start and end dates. |
+| time-bound active | Duration | A role assignment where a user can use the role only within start and end dates. |
+| just-in-time (JIT) access | | A model in which users receive temporary permissions to perform privileged tasks, which prevents malicious or unauthorized users from gaining access after the permissions have expired. Access is granted only when users need it. |
+| principle of least privilege access | | A recommended security practice in which every user is provided with only the minimum privileges needed to accomplish the tasks they are authorized to perform. This practice minimizes the number of Global Administrators and instead uses specific administrator roles for certain scenarios. |
+ ## Extend and renew assignments After you set up your time-bound owner or member assignments, the first question you might ask is what happens if an assignment expires? In this new version, we provide two options for this scenario:
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
User Administrator | Can manage all aspects of users and groups, including res
The following security principals can be assigned to a role with an administrative unit scope: * Users
-* Role-assignable cloud groups (preview)
+* Role-assignable Azure AD groups (preview)
* Service Principal Name (SPN) ## Assign a scoped role
Body
## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshoot roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
active-directory Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/best-practices.md
Microsoft recommends that you keep two break glass accounts that are permanently
If you have an external governance system that takes advantage of groups, then you should consider assigning roles to Azure AD groups, instead of individual users. You can also manage role-assignable groups in PIM to ensure that there are no standing owners or members in these privileged groups. For more information, see [Management capabilities for privileged access Azure AD groups](../privileged-identity-management/groups-features.md).
-You can assign an owner to role-assignable groups. That owner decides who is added to or removed from the group, so indirectly, decides who gets the role assignment. In this way, a Global Administrator or Privileged Role Administrator can delegate role management on a per-role basis by using groups. For more information, see [Use cloud groups to manage role assignments in Azure Active Directory](groups-concept.md).
+You can assign an owner to role-assignable groups. That owner decides who is added to or removed from the group, so indirectly, decides who gets the role assignment. In this way, a Global Administrator or Privileged Role Administrator can delegate role management on a per-role basis by using groups. For more information, see [Use Azure AD groups to manage role assignments](groups-concept.md).
## 7. Activate multiple roles at once using privileged access groups
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-assign-role.md
Title: Assign a role to a cloud group in Azure Active Directory | Microsoft Docs
-description: Assign an Azure AD role to a role-assignable group in the Azure portal, PowerShell, or Graph API.
+ Title: Assign Azure AD roles to groups - Azure Active Directory
+description: Assign Azure AD roles to role-assignable groups in the Azure portal, PowerShell, or Graph API.
-# Assign a role to a cloud group in Azure Active Directory
+# Assign Azure AD roles to groups
This section describes how an IT admin can assign an Azure Active Directory (Azure AD) role to an Azure AD group.
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
``` ## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshooting roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
active-directory Groups Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-concept.md
Title: Use cloud groups to manage role assignments in Azure Active Directory | Microsoft Docs
-description: Preview custom Azure AD roles for delegating identity management. Manage Azure role assignments in the Azure portal, PowerShell, or Graph API.
+ Title: Use Azure AD groups to manage role assignments (preview) - Azure Active Directory
+description: Use Azure AD groups to simplify role assignment management in Azure Active Directory.
Previously updated : 05/05/2021 Last updated : 06/24/2021
-# Use cloud groups to manage role assignments in Azure Active Directory (preview)
+# Use Azure AD groups to manage role assignments (preview)
-Azure Active Directory (Azure AD) is introducing a public preview in which you can assign a cloud group to Azure AD built-in roles. With this feature, you can use groups to grant admin access in Azure AD with minimal effort from your Global Administrators and Privileged Role Administrators.
+> [!IMPORTANT]
+> Role-assignable groups is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Consider this example: Contoso has hired people across geographies to manage and reset passwords for employees in its Azure AD organization. Instead of asking a Privileged Role Administrator or Global Administrator to assign the Helpdesk Administrator role to each person individually, they can create a Contoso_Helpdesk_Administrators group and assign it to the role. When people join the group, they are assigned the role indirectly. Your existing governance workflow can then take care of the approval process and auditing of the groupΓÇÖs membership to ensure that only legitimate users are members of the group and are thus assigned to the Helpdesk Administrator role.
+Azure Active Directory (Azure AD) lets you target Azure AD groups for role assignments. Assigning roles to groups can simplify the management of role assignments in Azure AD with minimal effort from your Global Administrators and Privileged Role Administrators.
-## How this feature works
+## Why assign roles to groups?
-Create a new Microsoft 365 or security group with the ΓÇÿisAssignableToRoleΓÇÖ property set to ΓÇÿtrueΓÇÖ. You could also enable this property when creating a group in the Azure portal by turning on **Azure AD roles can be assigned to the group**. Either way, you can then assign the group to one or more Azure AD roles in the same way as you assign roles to users. A maximum of 300 role-assignable groups can be created in a single Azure AD organization (tenant).
+Consider the example where the Contoso company has hired people across geographies to manage and reset passwords for employees in its Azure AD organization. Instead of asking a Privileged Role Administrator or Global Administrator to assign the Helpdesk Administrator role to each person individually, they can create a Contoso_Helpdesk_Administrators group and assign the role to the group. When people join the group, they are assigned the role indirectly. Your existing governance workflow can then take care of the approval process and auditing of the group's membership to ensure that only legitimate users are members of the group and are thus assigned the Helpdesk Administrator role.
-If you do not want members of the group to have standing access to the role, you can use Azure AD Privileged Identity Management. Assign a group as an eligible member of an Azure AD role. Each member of the group is then eligible to have their assignment activated for the role that the group is assigned to. They can then activate their role assignment for a fixed time duration.
+## How role assignments to groups work
-> [!Note]
-> You must be on updated version of Privileged Identity Management to be able to assign a group to Azure AD role via PIM. You could be on older version of PIM because your Azure AD organization leverages the Privileged Identity Management API. Please reach out to the alias pim_preview@microsoft.com to move your organization and update your API. Learn more at [Azure AD roles and features in PIM](../privileged-identity-management/azure-ad-roles-features.md).
+To assign a role to a group, you must create a new security or Microsoft 365 group with the `isAssignableToRole` property set to `true`. In the Azure portal, you set the **Azure AD roles can be assigned to the group** option to **Yes**. Either way, you can then assign one or more Azure AD roles to the group in the same way as you assign roles to users.
+
+![Screenshot of the Roles and administrators page](./media/groups-concept/role-assignable-group.png)
+
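As an illustrative sketch (assuming the AzureAD or AzureADPreview PowerShell module; the display name, description, and mail nickname are placeholders), a role-assignable security group could be created like this:

```powershell
# Assumption: Connect-AzureAD has already been run.
# IsAssignableToRole can only be set at creation time and cannot be changed afterwards.
New-AzureADMSGroup -DisplayName "Contoso_Helpdesk_Administrators" `
    -Description "Members are assigned the Helpdesk Administrator role" `
    -MailEnabled $false `
    -MailNickname "ContosoHelpdeskAdministrators" `
    -SecurityEnabled $true `
    -IsAssignableToRole $true
```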
+## Restrictions for role-assignable groups
+
+Role-assignable groups have the following restrictions:
+
+- You can only set the `isAssignableToRole` property or the **Azure AD roles can be assigned to the group** option for new groups.
+- The `isAssignableToRole` property is **immutable**. Once a group is created with this property set, it can't be changed.
+- You can't make an existing group a role-assignable group.
+- A maximum of 300 role-assignable groups can be created in a single Azure AD organization (tenant).
+
+## How are role-assignable groups protected?
-## Why we enforce creation of a special group for assigning it to a role
+If a group is assigned a role, any IT administrator who can manage group membership could also indirectly manage the membership of that role. For example, assume that a group named Contoso_User_Administrators is assigned the User Administrator role. An Exchange administrator who can modify group membership could add themselves to the Contoso_User_Administrators group and in that way become a User Administrator. As you can see, an administrator could elevate their privilege in a way you did not intend.
-If a group is assigned a role, any IT admin who can manage group membership could also indirectly manage the membership of that role. For example, assume that a group Contoso_User_Administrators is assigned to User Administrator role. An Exchange Administrator who can modify group membership could add themselves to the Contoso_User_Administrators group and in that way become a User Administrator. As you can see, an admin could elevate their privilege in a way you did not intend.
+Only groups that have the `isAssignableToRole` property set to `true` at creation time can be assigned a role. This property is immutable. Once a group is created with this property set, it can't be changed. You can't set the property on an existing group.
-Azure AD allows you to protect a group assigned to a role by using a new property called isAssignableToRole for groups. Only cloud groups that had the isAssignableToRole property set to ΓÇÿtrueΓÇÖ at creation time can be assigned to a role. This property is immutable; once a group is created with this property set to ΓÇÿtrueΓÇÖ, it canΓÇÖt be changed. You can't set the property on an existing group. We designed how groups are assigned to roles to help prevent potential breaches from happening:
+Role-assignable groups are designed to help prevent potential breaches by having the following restrictions:
-- Only Global Administrators and Privileged Role Administrators can create a role-assignable group (with the "isAssignableToRole" property enabled).-- It can't be an Azure AD dynamic group; that is, it must have a membership type of "Assigned." Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role.
+- Only Global Administrators and Privileged Role Administrators can create a role-assignable group.
+- The membership type for role-assignable groups must be Assigned and can't be an Azure AD dynamic group. Automated population of dynamic groups could lead to an unwanted account being added to the group and thus assigned to the role.
- By default, only Global Administrators and Privileged Role Administrators can manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners. - To prevent elevation of privilege, only a Privileged Authentication Administrator or a Global Administrator can change the credentials or reset MFA for members and owners of a role-assignable group.-- No nesting. A group can't be added as a member of a role-assignable group.
+- Group nesting is not supported. A group can't be added as a member of a role-assignable group.
-## Limitations
+## Use PIM to make a group eligible for a role assignment
-The following scenarios are not supported right now:
+If you do not want members of the group to have standing access to a role, you can use [Azure AD Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to make a group eligible for a role assignment. Each member of the group is then eligible to activate the role assignment for a fixed time duration.
-- Assign on-premises groups to Azure AD roles (built-in or custom)
+> [!Note]
+> You must be using an updated version of PIM to be able to assign an Azure AD role to a group. You could be using an older version of PIM because your Azure AD organization leverages the PIM API. Send email to pim_preview@microsoft.com to move your organization and update your API. For more information, see [Azure AD roles and features in PIM](../privileged-identity-management/azure-ad-roles-features.md).
+
+## Scenarios not supported
+
+The following scenarios are not supported:
+
+- Assign Azure AD roles (built-in or custom) to on-premises groups.
## Known issues -- *Azure AD P2 licensed customers only*: Don't assign a group as Active to a role through both Azure AD and Privileged Identity Management (PIM). Specifically, don't assign a role to a role-assignable group when it's being created *and* assign a role to the group using PIM later. This will lead to issues where users canΓÇÖt see their active role assignments in the PIM as well as the inability to remove that PIM assignment. Eligible assignments are not affected in this scenario. If you do attempt to make this assignment, you might see unexpected behavior such as:
- - End time for the role assignment might display incorrectly.
- - In the PIM portal, **My Roles** can show only one role assignment regardless of how many methods by which the assignment is granted (through one or more groups and directly).
-- *Azure AD P2 licensed customers only* Even after deleting the group, it is still shown an eligible member of the role in PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal. -- Use the new [Exchange Admin Center](https://admin.exchange.microsoft.com/) for role assignments via group membership. The old Exchange Admin Center doesnΓÇÖt support this feature yet. Exchange PowerShell cmdlets will work as expected.
+The following are known issues with role-assignable groups:
+
+- *Azure AD P2 licensed customers only*: Even after deleting the group, it is still shown an eligible member of the role in PIM UI. Functionally there's no problem; it's just a cache issue in the Azure portal.
+- Use the new [Exchange admin center](https://admin.exchange.microsoft.com/) for role assignments via group membership. The old Exchange admin center doesn't support this feature yet. Exchange PowerShell cmdlets will work as expected.
- Azure Information Protection Portal (the classic portal) doesn't recognize role membership via group yet. You can [migrate to the unified sensitivity labeling platform](/azure/information-protection/configure-policy-migrate-labels) and then use the Office 365 Security & Compliance center to use group assignments to manage roles.-- [Apps Admin Center](https://config.office.com/) doesn't support this feature yet. Assign users directly to Office Apps Administrator role.
+- [Apps admin center](https://config.office.com/) doesn't support this feature yet. Assign users directly to Office Apps Administrator role.
- [Microsoft 365 Compliance Center](https://compliance.microsoft.com/) doesn't support this feature yet. Assign users directly to appropriate Azure AD roles to use this portal.
-We are fixing these issues.
- ## License requirements
-Using this feature requires an Azure AD Premium P1 license. To also use Privileged Identity Management for just-in-time role activation requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
+Using this feature requires an Azure AD Premium P1 license. Using Privileged Identity Management for just-in-time role activation also requires an Azure AD Premium P2 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://azure.microsoft.com/pricing/details/active-directory/).
## Next steps - [Create a role-assignable group](groups-create-eligible.md)-- [Assign a role to a role-assignable group](groups-assign-role.md)
+- [Assign Azure AD roles to groups](groups-assign-role.md)
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-create-eligible.md
# Create a role-assignable group in Azure Active Directory
-You can only assign a role to a group that was created with the 'isAssignableToRole' property set to True, or was created in the Azure portal with **Azure AD roles can be assigned to the group** turned on. This group attribute makes the group one that can be assigned to a role in Azure Active Directory (Azure AD). This article describes how to create this special kind of group. **Note:** A group with isAssignableToRole property set to true cannot be of dynamic membership type. For more information, see [Using a group to manage Azure AD role assignments](groups-concept.md).
+You can only assign a role to a group that was created with the 'isAssignableToRole' property set to True, or was created in the Azure portal with **Azure AD roles can be assigned to the group** turned on. This group attribute makes the group one that can be assigned to a role in Azure Active Directory (Azure AD). This article describes how to create this special kind of group. **Note:** A group with isAssignableToRole property set to true cannot be of dynamic membership type. For more information, see [Use Azure AD groups to manage role assignments](groups-concept.md).
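
As a quick illustration of the `isAssignableToRole` property, a role-assignable group can also be created with a single Microsoft Graph request. The sketch below uses `az rest`; the display name, mail nickname, and description are placeholder values.

```azurecli-interactive
# Sketch only: create a security group that can be assigned to Azure AD roles.
# Display name, mail nickname, and description are placeholders.
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/groups" \
  --body '{
    "displayName": "Contoso_Helpdesk_Administrators",
    "mailNickname": "contosohelpdeskadmins",
    "description": "Group for users who manage helpdesk tickets",
    "mailEnabled": false,
    "securityEnabled": true,
    "isAssignableToRole": true
  }'
```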
## Prerequisites
For this type of group, `isPublic` will always be false and `isSecurityEnabled`
## Next steps -- [Assign a role to a cloud group](groups-assign-role.md)-- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshooting roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Assign Azure AD roles to groups](groups-assign-role.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
active-directory Groups Faq Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-faq-troubleshooting.md
Title: Troubleshooting roles assigned to cloud group FAQ - Azure Active Directory | Microsoft Docs
+ Title: Troubleshoot Azure AD roles assigned to groups - Azure Active Directory
description: Learn some common questions and troubleshooting tips for assigning roles to groups in Azure Active Directory.
-# Troubleshooting roles assigned to cloud groups
+# Troubleshoot Azure AD roles assigned to groups
-Here are some common questions and troubleshooting tips for assigning roles to groups in Azure Active Directory (Azure AD).
+Here are some common questions and troubleshooting tips for assigning Azure Active Directory (Azure AD) roles to Azure AD groups.
**Q:** I'm a Groups Administrator but I can't see the **Azure AD roles can be assigned to the group** switch.
User | Catalog owner | Only if group owner | Only if group owner | Only if app o
- In Azure AD Premium P1 licensed organizations: Select the gear icon. A pane opens that can give this information. - In Azure AD Premium P2 licensed organizations: You'll find direct and inherited license information in the **Membership** column.
-**Q:** Why do we enforce creating a new cloud group for assigning it to role?
+**Q:** Why do we enforce creating a new group for assigning it to a role?
**A:** If you assign an existing group to a role, the existing group owner could add other members to this group without the new members realizing that they'll have the role. Because role-assignable groups are powerful, we're putting lots of restrictions to protect them. You don't want changes to the group that would be surprising to the person managing the group. ## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
- [Create a role-assignable group](groups-create-eligible.md)
active-directory Groups Pim Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-pim-eligible.md
https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequest
## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshooting roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
- [Configure Azure AD admin role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md) - [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-remove-assignment.md
DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshooting roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-view-assignments.md
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$f
## Next steps -- [Use cloud groups to manage role assignments](groups-concept.md)-- [Troubleshooting roles assigned to cloud groups](groups-faq-troubleshooting.md)
+- [Use Azure AD groups to manage role assignments](groups-concept.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
active-directory Cornerstone Ondemand Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cornerstone-ondemand-tutorial.md
Title: 'Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone Single Sign-On | Microsoft Docs'
+ Title: 'Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Cornerstone Single Sign-On.
Previously updated : 05/27/2021 Last updated : 06/24/2021
-# Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone Single Sign-On
+# Tutorial: Azure Active Directory Single sign-on (SSO) integration with Cornerstone
-In this tutorial, you'll learn how to integrate Cornerstone Single Sign-On with Azure Active Directory (Azure AD). When you integrate Cornerstone Single Sign-On with Azure AD, you can:
+In this tutorial, you'll learn how to set up the single sign-on integration between Cornerstone and Azure Active Directory (Azure AD). When you integrate Cornerstone with Azure AD, you can:
-* Control in Azure AD who has access to Cornerstone Single Sign-On.
-* Enable your users to be automatically signed-in to Cornerstone Single Sign-On with their Azure AD accounts.
+* Control in Azure AD who has SSO access to Cornerstone.
+* Enable your users to be automatically signed-in to Cornerstone with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Cornerstone Single Sign-On with
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Cornerstone single sign-on (SSO) enabled subscription.
+* SSO enabled in Cornerstone.
> [!NOTE] > This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Cornerstone Single Sign-On supports **SP** initiated SSO.
+* Cornerstone supports **SP** initiated SSO.
-* If you are integrating one or multiple products from this particular list then you should use this Cornerstone OnDemand Single Sign-On app from the Gallery.
+* If you are integrating one or multiple products from this particular list, you should use this Cornerstone Single Sign-On app from the Gallery.
We offer solutions for:
In this tutorial, you configure and test Azure AD SSO in a test environment.
## Adding Cornerstone Single Sign-On from the gallery
-To configure the integration of Cornerstone Single Sign-On into Azure AD, you need to add Cornerstone Single Sign-On from the gallery to your list of managed SaaS apps.
+To configure the Azure AD SSO integration with Cornerstone, you need to add Cornerstone Single Sign-On from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service.
To configure the integration of Cornerstone Single Sign-On into Azure AD, you ne
1. In the **Add from the gallery** section, type **Cornerstone Single Sign-On** in the search box. 1. Select **Cornerstone Single Sign-On** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Cornerstone Single Sign-On
+## Configure and test Azure AD SSO for Cornerstone
-Configure and test Azure AD SSO with Cornerstone Single Sign-On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cornerstone Single Sign-On.
+Configure and test Azure AD SSO with Cornerstone using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cornerstone.
-To configure and test Azure AD SSO with Cornerstone Single Sign-On, perform the following steps:
+To configure and test Azure AD SSO with Cornerstone, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-2. **[Configure Cornerstone Single Sign-On SSO](#configure-cornerstone-single-sign-on-sso)** - to configure the Single Sign-On settings on application side.
+2. **[Configure Cornerstone Single Sign-On](#configure-cornerstone-single-sign-on)** - to configure the SSO in Cornerstone.
1. **[Create Cornerstone Single Sign-On test user](#create-cornerstone-single-sign-on-test-user)** - to have a counterpart of B.Simon in Cornerstone that is linked to the Azure AD representation of user. 3. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+4. **[Test SSO for Cornerstone (Mobile)](#test-sso-for-cornerstone-mobile)** - to verify whether the configuration works.
## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<PORTAL_NAME>.csod.com/samldefault.aspx?ouid=<OUID>` > [!NOTE]
- > These values are not real. Update these values with the actual Reply URL, Identifier and Sign on URL. You need to reach out to your cornerstone consulting team or to your partner to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL, Identifier and Sign on URL. Please reach out to your Cornerstone implementation project team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
4. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cornerstone Single Sign-On.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cornerstone.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Cornerstone Single Sign-On**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Cornerstone Single Sign-On SSO
+## Configure Cornerstone Single Sign-On
-To configure single sign-on on **Cornerstone Single Sign-On** side, you need to reach out to your cornerstone consulting team or to your partner. They set this setting to have the SAML SSO connection set properly on both sides.
+To configure SSO in Cornerstone, you need to reach out to your Cornerstone implementation project team. They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create Cornerstone Single Sign-On test user
-In this section, you create a user called Britta Simon in Cornerstone. Work with your cornerstone consulting team or reach out to your partner to add the users in the Cornerstone Single Sign-On platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in Cornerstone. Please work with your Cornerstone implementation project team to add the users in Cornerstone. Users must be created and activated before you use single sign-on.
## Test SSO In this section, you test your Azure AD single sign-on configuration with following options.
-* Click on **Test this application** in Azure portal. This will redirect to Cornerstone Single Sign-On Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Cornerstone Sign-on URL where you can initiate the login flow.
-* Go to Cornerstone Single Sign-On Sign-on URL directly and initiate the login flow from there.
+* Go to Cornerstone Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Cornerstone Single Sign-On tile in the My Apps, this will redirect to Cornerstone Single Sign-On Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Cornerstone Single Sign-On tile in the My Apps, this will redirect to Cornerstone Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Test SSO for Cornerstone (Mobile)
+
+1. In a different browser window, log in to your Cornerstone website as an administrator and perform the following steps.
+
+ a. Go to the **Admin -> Tools -> CORE FUNCTIONS -> Core Preferences -> Authentication Preferences**.
+
    ![Screenshot of the Authentication Preferences page in the Cornerstone mobile application.](./media/cornerstone-ondemand-tutorial/division-mobile.png)
+
    b. Search for the **Division Name** by entering the Division Name in the search box.
+
+ c. Click the **Division Name** in the results.
+
+ d. From the SAML/IDP server URL dropdown, select the appropriate SAML/IDP server that should be used for user Authentication.
+
    ![Screenshot of the Other credentials validated against client SAML/IDP server option.](./media/cornerstone-ondemand-tutorial/other-credentials.png)
+
+ e. Click **Save**.
+
+1. Go to **Admin > Tools > Core Functions > Core Preferences > Mobile**.
+
+ a. Select the appropriate **Division OU**.
+
    b. Select **Allow users** in this OU to access the Cornerstone Learn app on their mobile and tablet device, and select the **Enable Mobile Access** checkbox.
+
+ c. Click **Save**.
+
+2. Open the **Cornerstone Learn** mobile application. On the sign-in page, enter the portal name.
+
    ![Screenshot of the Cornerstone mobile application sign-in page.](./media/cornerstone-ondemand-tutorial/welcome-mobile.png)
+
+3. Click **Alternative Login** and then click **SSO**.
+
    ![Screenshot of the Alternative Login option in the mobile application.](./media/cornerstone-ondemand-tutorial/sso-mobile.png)
+
+4. Enter your **Azure AD credentials** to sign into the Cornerstone application and click **Next**.
+
    ![Screenshot of the Azure AD credentials prompt in the mobile application.](./media/cornerstone-ondemand-tutorial/credentials-mobile.png)
+
+5. After a successful sign-in, the application home page is displayed as shown below.
+
    ![Screenshot of the mobile application home page.](./media/cornerstone-ondemand-tutorial/home-page-mobile.png)
## Next steps
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
description: In this tutorial, you build the environment needed to deploy verifi
documentationCenter: '' + Previously updated : 05/18/2021 Last updated : 06/24/2021
Before creating our first verifiable credential, we need to create a Blob Storag
Before creating the credential, we need to first give the signed in user the correct role assignment so they can access the files in Storage Blob. 1. Navigate to **Storage** > **Container**.
-2. Choose **Access Control (IAM)** from the menu on the left.
-3. Choose **Role Assignments**.
-4. Select **Add**.
-5. In the **Role** section, choose **Storage Blob Data Reader**.
-6. Under **Assign access to** choose **User, group, or service principle**.
-7. In **Select**: Choose the account that you are using to perform these steps.
-8. Select **Save** to complete the role assignment.
-
+1. Select **Access control (IAM)**.
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | | |
+ | Role | Storage Blob Data Reader |
+ | Assign access to | User, group, or service principal |
+ | Select | &lt;account that you are using to perform these steps&gt; |
- ![Add a role assignment](media/enable-your-tenant-verifiable-credentials/role-assignment.png)
+ ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
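
If you prefer to script this step instead of using the portal, an equivalent Azure CLI sketch follows; the assignee and the scope segments are placeholders for your account, subscription, resource group, and storage account.

```azurecli-interactive
# Sketch only: grant the Storage Blob Data Reader role on a storage account.
# The assignee and all scope segments are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```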
>[!IMPORTANT] >By default, container creators get the **Owner** role assigned. The **Owner** role is not enough on its own. Your account needs the **Storage Blob Data Reader** role. For more information review [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/common/storage-auth-aad-rbac-portal.md)
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
Kubernetes uses *pods* to run an instance of your application. A pod represents
Pods typically have a 1:1 mapping with a container. In advanced scenarios, a pod may contain multiple containers. Multi-container pods are scheduled together on the same node, and allow containers to share related resources.
-When you create a pod, you can define *resource requests* to request a certain amount of CPU or memory resources. The Kubernetes Scheduler tries meet the request by scheduling the pods to run on a node with available resources. You can also specify maximum resource limits to prevent a pod from consuming too much compute resource from the underlying node. Best practice is to include resource limits for all pods to help the Kubernetes Scheduler identify necessary, permitted resources.
+When you create a pod, you can define *resource requests* to request a certain amount of CPU or memory resources. The Kubernetes Scheduler tries to meet the request by scheduling the pods to run on a node with available resources. You can also specify maximum resource limits to prevent a pod from consuming too much compute resource from the underlying node. Best practice is to include resource limits for all pods to help the Kubernetes Scheduler identify necessary, permitted resources.
For more information, see [Kubernetes pods][kubernetes-pods] and [Kubernetes pod lifecycle][kubernetes-pod-lifecycle].
Replicas in a StatefulSet are scheduled and run across any available node in an
### DaemonSets
-For specific log collection or monitoring, you may need to run a pod on all, or selected, nodes. You can use *DaemonSet* deploy one or more identical pods, but the DaemonSet Controller ensures that each node specified runs an instance of the pod.
+For specific log collection or monitoring, you may need to run a pod on all, or selected, nodes. You can use a *DaemonSet* to deploy one or more identical pods, but the DaemonSet Controller ensures that each node specified runs an instance of the pod.
The DaemonSet Controller can schedule pods on nodes early in the cluster boot process, before the default Kubernetes scheduler has started. This ability ensures that the pods in a DaemonSet are started before traditional pods in a Deployment or StatefulSet are scheduled.
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-managed-identity.md
az aks create \
--service-cidr 10.2.0.0/24 \ --enable-managed-identity \ --assign-identity <identity-id> \
- --assign-kubelet-identity <kubelet-identity-id> \
+ --assign-kubelet-identity <kubelet-identity-id>
``` A successful cluster creation using your own kubelet managed identity contains the following output:
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnet-framework.md
If you configure an app setting with the same name in App Service and in *web.co
## Deploy multi-project solutions
-When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git or with ZIP deploy, with build automation turned on, the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
+When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PROJECT="<project-name>/<project-name>.csproj"
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-dotnetcore.md
az webapp config set --name <app-name> --resource-group <resource-group-name> --
## Customize build automation
-If you deploy your app using Git or zip packages with build automation turned on, the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `dotnet restore` to restore NuGet dependencies.
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
## Deploy multi-project solutions
-When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git or with ZIP deploy, with build automation turned on, the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
+When a Visual Studio solution includes multiple projects, the Visual Studio publish process already includes selecting the project to deploy. When you deploy to the App Service deployment engine, such as with Git, or with ZIP deploy [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service deployment engine picks the first Web Site or Web Application Project it finds as the App Service app. You can specify which project App Service should use by specifying the `PROJECT` app setting. For example, run the following in the [Cloud Shell](https://shell.azure.com):
```azurecli-interactive az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PROJECT="<project-name>/<project-name>.csproj"
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-nodejs.md
zone_pivot_groups: app-service-platform-windows-linux
# Configure a Node.js app for Azure App Service
-Node.js apps must be deployed with all the required NPM dependencies. The App Service deployment engine automatically runs `npm install --production` for you when you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) with build automation enabled. If you deploy your files using [FTP/S](deploy-ftp.md), however, you need to upload the required packages manually.
+Node.js apps must be deployed with all the required NPM dependencies. The App Service deployment engine automatically runs `npm install --production` for you when you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation). If you deploy your files using [FTP/S](deploy-ftp.md), however, you need to upload the required packages manually.
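
For example, a zip deployment that installs those NPM dependencies on the server might first turn on build automation and then push the package. This is a sketch; the resource group, app name, and zip path are placeholders.

```azurecli-interactive
# Sketch only: enable server-side build automation, then deploy a zip package.
# Resource group, app name, and zip path are placeholders.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
  --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
az webapp deployment source config-zip --resource-group <group-name> --name <app-name> \
  --src ./app.zip
```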
This guide provides key concepts and instructions for Node.js developers who deploy to App Service. If you've never used Azure App Service, follow the [Node.js quickstart](quickstart-nodejs.md) and [Node.js with MongoDB tutorial](tutorial-nodejs-mongodb-app.md) first.
app.listen(port, () => {
## Customize build automation
-If you deploy your app using Git or zip packages with build automation turned on, the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `npm install` without any flags, which includes npm `preinstall` and `postinstall` scripts and also installs `devDependencies`.
process.env.NODE_ENV
## Run Grunt/Bower/Gulp
-By default, App Service build automation runs `npm install --production` when it recognizes a Node.js app is deployed through Git or Zip deployment with build automation enabled. If your app requires any of the popular automation tools, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script) to run it.
+By default, App Service build automation runs `npm install --production` when it recognizes a Node.js app is deployed through Git, or through Zip deployment [with build automation enabled](deploy-zip.md#enable-build-automation). If your app requires any of the popular automation tools, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script) to run it.
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-php.md
if [ -e "$DEPLOYMENT_TARGET/composer.json" ]; then
fi ```
-Commit all your changes and deploy your code using Git, or Zip deploy with build automation enabled. Composer should now be running as part of deployment automation.
+Commit all your changes and deploy your code using Git, or Zip deploy [with build automation enabled](deploy-zip.md#enable-build-automation). Composer should now be running as part of deployment automation.
## Run Grunt/Bower/Gulp
-If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) with build automation enabled.
+If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation).
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
fi
## Customize build automation
-If you deploy your app using Git or zip packages with build automation turned on, the App Service build automation steps through the following sequence:
+If you deploy your app using Git, or using zip packages [with build automation enabled](deploy-zip.md#enable-build-automation), the App Service build automation steps through the following sequence:
1. Run custom script if specified by `PRE_BUILD_SCRIPT_PATH`. 1. Run `php composer.phar install`.
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-python.md
This article describes how [Azure App Service](overview.md) runs Python apps, how you can migrate existing apps to Azure, and how you can customize the behavior of App Service when needed. Python apps must be deployed with all the required [pip](https://pypi.org/project/pip/) modules.
-The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) if `SCM_DO_BUILD_DURING_DEPLOYMENT` is set to `1`.
+The App Service deployment engine automatically activates a virtual environment and runs `pip install -r requirements.txt` for you when you deploy a [Git repository](deploy-local-git.md), or a [zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation).
This guide provides key concepts and instructions for Python developers who use a built-in Linux container in App Service. If you've never used Azure App Service, first follow the [Python quickstart](quickstart-python.md) and [Python with PostgreSQL tutorial](tutorial-python-postgresql-app.md).
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-ruby.md
ENV['WEBSITE_SITE_NAME']
## Customize deployment
-When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) with build processes switched on, the deployment engine (Kudu) automatically runs the following post-deployment steps by default:
+When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation), the deployment engine (Kudu) automatically runs the following post-deployment steps by default:
1. Check if a *Gemfile* exists. 1. Run `bundle clean`.
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-zip.md
az webapp deployment source config-zip --resource-group <group-name> --name <app
This command deploys the files and directories from the ZIP file to your default App Service application folder (`\home\site\wwwroot`) and restarts the app.
+## Enable build automation
+ By default, the deployment engine assumes that a ZIP file is ready to run as-is and doesn't run any build automation. To enable the same build automation as in a [Git deployment](deploy-local-git.md), set the `SCM_DO_BUILD_DURING_DEPLOYMENT` app setting by running the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
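# Editor's sketch of the command this section describes (resource names are placeholders):
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true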
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
This article uses Health check in the Azure portal to monitor App Service instan
> [!NOTE] > Health check doesn't follow 302 redirects. At most one instance will be replaced per hour, with a maximum of three instances per day per App Service Plan.
->
+>
+> On App Service Environments, if an instance continues to fail for one hour it will not be automatically replaced with a new instance due to the limited number of extra virtual machines on the stamp.
+>
## Enable Health Check
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-domain-ssl-certificates.md
When you add a TLS binding, you receive the following error message:
#### Cause
-This problem can occur if you have multiple IP-based SSL bindings for the same IP address across multiple apps. For example, app A has an IP-based SSL with an old certificate. App B has an IP-based SSL with a new certificate for the same IP address. When you update the app TLS binding with the new certificate, it fails with this error because the same IP address is being used for another app.
+This problem can occur if you have multiple IP-based TLS/SSL bindings for the same IP address across multiple apps. For example, app A has an IP-based TLS/SSL binding with an old certificate. App B has an IP-based TLS/SSL binding with a new certificate for the same IP address. When you update the app TLS binding with the new certificate, it fails with this error because the same IP address is being used for another app.
#### Solution To fix this problem, use one of the following methods: -- Delete the IP-based SSL binding on the app that uses the old certificate. -- Create a new IP-based SSL binding that uses the new certificate.
+- Delete the IP-based TLS/SSL binding on the app that uses the old certificate.
+- Create a new IP-based TLS/SSL binding that uses the new certificate.
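
With the Azure CLI, those two steps might look like the following sketch; the certificate thumbprints, resource group, and app names are placeholders.

```azurecli-interactive
# Sketch only: remove the IP-based binding that uses the old certificate,
# then bind the new certificate. Thumbprints and names are placeholders.
az webapp config ssl unbind --certificate-thumbprint <old-cert-thumbprint> \
  --resource-group <group-name> --name <app-a-name>
az webapp config ssl bind --certificate-thumbprint <new-cert-thumbprint> --ssl-type IP \
  --resource-group <group-name> --name <app-b-name>
```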
### You can't delete a certificate
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-multi-container-app.md
When the database has been created, Cloud Shell shows information similar to the
### Configure database variables in WordPress
-To connect the WordPress app to this new MySQL server, you'll configure a few WordPress-specific environment variables, including the SSL CA path defined by `MYSQL_SSL_CA`. The [Baltimore CyberTrust Root](https://www.digicert.com/digicert-root-certificates.htm) from [DigiCert](https://www.digicert.com/) is provided in the [custom image](#use-a-custom-image-for-mysql-ssl-and-other-configurations) below.
+To connect the WordPress app to this new MySQL server, you'll configure a few WordPress-specific environment variables, including the SSL CA path defined by `MYSQL_SSL_CA`. The [Baltimore CyberTrust Root](https://www.digicert.com/digicert-root-certificates.htm) from [DigiCert](https://www.digicert.com/) is provided in the [custom image](#use-a-custom-image-for-mysql-tlsssl-and-other-configurations) below.
To make these changes, use the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command in Cloud Shell. App settings are case-sensitive and space-separated.
When the app setting has been created, Cloud Shell shows information similar to
For more information on environment variables, see [Configure environment variables](configure-custom-container.md#configure-environment-variables).
-### Use a custom image for MySQL SSL and other configurations
+### Use a custom image for MySQL TLS/SSL and other configurations
-By default, SSL is used by Azure Database for MySQL. WordPress requires additional configuration to use SSL with MySQL. The WordPress 'official image' doesn't provide the additional configuration, but a [custom image](https://github.com/Azure-Samples/multicontainerwordpress) has been prepared for your convenience. In practice, you would add desired changes to your own image.
+By default, TLS/SSL is used by Azure Database for MySQL. WordPress requires additional configuration to use TLS/SSL with MySQL. The WordPress 'official image' doesn't provide the additional configuration, but a [custom image](https://github.com/Azure-Samples/multicontainerwordpress) has been prepared for your convenience. In practice, you would add desired changes to your own image.
The custom image is based on the 'official image' of [WordPress from Docker Hub](https://hub.docker.com/_/wordpress/). The following changes have been made in this custom image for Azure Database for MySQL:
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-nodejs-mongodb-app.md
module.exports = {
}; ```
-The `ssl=true` option is required because [Cosmos DB requires SSL](../cosmos-db/connect-mongodb-account.md#connection-string-requirements).
+The `ssl=true` option is required because [Cosmos DB requires TLS/SSL](../cosmos-db/connect-mongodb-account.md#connection-string-requirements).
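
If you need to look up that connection string, one option is the Azure CLI; this is a sketch with the resource group and account name as placeholders, and the MongoDB connection strings it returns already include `ssl=true`.

```azurecli-interactive
# Sketch only: list the MongoDB connection strings for a Cosmos DB account.
# Resource group and account name are placeholders.
az cosmosdb keys list --type connection-strings \
  --resource-group <resource-group> --name <cosmos-account-name>
```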
Save your changes.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-agent.md
For Arc enabled servers, before you rename the machine, it is necessary to remov
1. Audit the VM extensions installed on the machine and note their configuration, using the [Azure CLI](manage-vm-extensions-cli.md#list-extensions-installed) or using [Azure PowerShell](manage-vm-extensions-powershell.md#list-extensions-installed).
-2. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extension), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
+2. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
-3. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Arc enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run this manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
+3. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Arc enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run azcmagent manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
4. Rename the machines computer name.
To disconnect with your elevated logged-on credentials (interactive), run the fo
Perform one of the following methods to uninstall the Windows or Linux Connected Machine agent from the machine. Removing the agent does not unregister the machine with Arc enabled servers or remove the Azure VM extensions installed. For servers or machines you no longer want to manage with Azure Arc enabled servers, it is necessary to follow these steps to successfully stop managing it:
-1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extension), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension) that you don't want to remain on the machine.
+1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension) that you don't want to remain on the machine.
1. Unregister the machine by running `azcmagent disconnect` to delete the Arc enabled servers resource in Azure. If that fails, you can delete the resource manually in Azure. Otherwise, if the resource was deleted in Azure, you'll need to run `azcmagent disconnect --force-local-only` on the server to remove the local configuration. ### Windows agent
azure-arc Manage Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-howto-migrate.md
To migrate an Azure Arc enabled server from one Azure region to another, you hav
> [!NOTE] > During this operation, it results in downtime during the migration.
-1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extension), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
+1. Remove VM extensions installed from the [Azure portal](manage-vm-extensions-portal.md#uninstall-extensions), using the [Azure CLI](manage-vm-extensions-cli.md#remove-an-installed-extension), or using [Azure PowerShell](manage-vm-extensions-powershell.md#remove-an-installed-extension).
2. Use the **azcmagent** tool with the [Disconnect](manage-agent.md#disconnect) parameter to disconnect the machine from Azure Arc and delete the machine resource from Azure. Disconnecting the machine from Arc enabled servers does not remove the Connected Machine agent, and you do not need to remove the agent as part of this process. You can run this manually while logged on interactively, or automate using the same service principal you used to onboard multiple agents, or with a Microsoft identity platform [access token](../../active-directory/develop/access-tokens.md). If you did not use a service principal to register the machine with Azure Arc enabled servers, see the following [article](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) to create a service principal.
azure-arc Manage Vm Extensions Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions-portal.md
Title: Enable VM extension from Azure portal description: This article describes how to deploy virtual machine extensions to Azure Arc enabled servers running in hybrid cloud environments from the Azure portal. Previously updated : 04/13/2021 Last updated : 06/25/2021 # Enable Azure VM extensions from the Azure portal
-This article shows you how to deploy and uninstall Azure VM extensions, supported by Azure Arc enabled servers, to a Linux or Windows hybrid machine through the Azure portal.
+This article shows you how to deploy, update, and uninstall Azure VM extensions supported by Azure Arc enabled servers, on a Linux or Windows hybrid machine through the Azure portal.
> [!NOTE] > The Key Vault VM extension (preview) does not support deployment from the Azure portal, only using the Azure CLI, the Azure PowerShell, or using an Azure Resource Manager template.
This article shows you how to deploy and uninstall Azure VM extensions, supporte
## Enable extensions from the portal
-VM extensions can be applied your Arc for server managed machine through the Azure portal.
+VM extensions can be applied to your Arc enabled server managed machine through the Azure portal.
1. From your browser, go to the [Azure portal](https://portal.azure.com).
VM extensions can be applied your Arc for server managed machine through the Azu
To complete the installation, you are required to provide the workspace ID and primary key. If you are not familiar with how to find this information, see [obtain workspace ID and key](../../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key).
-4. After confirming the required information provided, select **Create**. A summary of the deployment is displayed and you can review the status of the deployment.
+4. After you confirm the required information, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment.
>[!NOTE] >While multiple extensions can be batched together and processed, they are installed serially. Once the first extension installation is complete, installation of the next extension is attempted.
You can get a list of the VM extensions on your Arc enabled server from the Azur
3. Choose **Extensions**, and the list of installed extensions is returned.
- ![List VM extension deployed to selected machine](./media/manage-vm-extensions/list-vm-extensions.png)
+ :::image type="content" source="media/manage-vm-extensions/list-vm-extensions.png" alt-text="List VM extension deployed to selected machine." border="true":::
-## Uninstall extension
+## Update extensions
+
+When a new version of a supported extension is released, you can update the extension to that latest release. Arc enabled servers will present a banner in the Azure portal when you navigate to Arc enabled servers, informing you there are updates available for one or more extensions installed on a machine. When you view the list of installed extensions for a selected Arc enabled server, you'll notice a column labeled **Update available**. If a newer version of an extension is released, the **Update available** value for that extension shows a value of **Yes**.
+
+Updating an extension to the newest version does not affect the configuration of that extension. You are not required to respecify configuration information for any extension you update.
++
+You can update one or more extensions that are eligible for an update from the Azure portal by performing the following steps.
+
+> [!NOTE]
+> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+
+3. Choose **Extensions**, and review the status of extensions under the **Update available** column.
+
+You can update an extension in one of three ways:
+
+* Select an extension from the list of installed extensions, and under the properties of the extension, select the **Update** option.
+
+ :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-from-extension.png" alt-text="Update extension from selected extension." border="true":::
+
+* Select the extension from the list of installed extensions, and then select the **Update** option from the top of the page.
+
+* Select one or more extensions that are eligible for an update from the list of installed extensions, and then select the **Update** option.
+
+ :::image type="content" source="media/manage-vm-extensions-portal/vm-extensions-update-selected.png" alt-text="Update selected extension." border="true":::
+
+## Uninstall extensions
You can remove one or more extensions from an Arc enabled server from the Azure portal. Perform the following steps to remove an extension.
You can remove one or more extensions from an Arc enabled server from the Azure
2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
-3. Choose **Extensions**, then select an extension from the list of installed extensions.
+3. Choose **Extensions**, and then select an extension from the list of installed extensions.
4. Select **Uninstall** and when prompted to verify, select **Yes** to proceed.
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc enabled servers description: Azure Arc enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 04/13/2021 Last updated : 05/19/2021
Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script in it, a VM extension can be used.
-Azure Arc enabled servers enables you to deploy Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machine through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
+Azure Arc enabled servers enables you to deploy, remove, and update Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machines through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc enabled servers:
- The [Azure portal](manage-vm-extensions-portal.md) - The [Azure CLI](manage-vm-extensions-cli.md)
Azure Arc enabled servers enables you to deploy Azure VM extensions to non-Azure
> [!NOTE] > Azure Arc enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](../../virtual-machines/extensions/overview.md) article.
+> [!NOTE]
+> Currently you can only update extensions from the Azure portal. Performing this operation from the Azure CLI, Azure PowerShell, or using an Azure Resource Manager template is not supported at this time.
+ ## Key benefits Azure Arc enabled servers VM extension support provides the following key benefits:
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-java.md
adobe-target-content: ./create-first-function-cli-java-uiex
In this article, you use command-line tools to create a Java function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
-Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
+If Maven isn't your preferred development tool, check out our similar tutorials for Java developers:
++ [Gradle](./functions-create-first-java-gradle.md)++ [IntelliJ IDEA](/azure/developer/java/toolkit-for-intellij/quickstart-functions)++ [Visual Studio Code](create-first-function-vs-code-java.md)
-> [!NOTE]
-> If Maven is not your preferred development tool, check out our similar tutorials for Java developers using [Gradle](./functions-create-first-java-gradle.md), [IntelliJ IDEA](/azure/developer/jav).
+Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
## Configure your local environment
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-java.md
adobe-target-content: ./create-first-function-vs-code-java-uiex
In this article, you use Visual Studio Code to create a Java function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
-Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
+If Visual Studio Code isn't your preferred development tool, check out our similar tutorials for Java developers:
++ [Gradle](./functions-create-first-java-gradle.md)++ [IntelliJ IDEA](/azure/developer/java/toolkit-for-intellij/quickstart-functions)++ [Maven](create-first-function-cli-java.md)
-> [!NOTE]
-> If Visual Studio Code isn't your preferred development tool, check out our similar tutorials for Java developers using [Maven](create-first-function-cli-java.md), [Gradle](./functions-create-first-java-gradle.md) and [IntelliJ IDEA](/azure/developer/java/toolkit-for-intellij/quickstart-functions).
+Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
## Configure your environment
azure-functions Durable Functions Singletons https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-singletons.md
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
existing_instance = await client.get_status(instance_id)
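    # Only start a new orchestration when no instance exists yet, or when the previous
    # instance has already reached a terminal state (Completed, Failed, or Terminated).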
- if existing_instance != None or existing_instance.runtime_status in ["Completed", "Failed", "Terminated"]:
+ if existing_instance is None or existing_instance.runtime_status in [df.OrchestrationRuntimeStatus.Completed, df.OrchestrationRuntimeStatus.Failed, df.OrchestrationRuntimeStatus.Terminated]:
event_data = req.get_body() instance_id = await client.start_new(function_name, instance_id, event_data) logging.info(f"Started orchestration with ID = '{instance_id}'.")
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    else:
        return {
            'status': 409,
- 'body': f"An instance with ID '${instance_id}' already exists"
+ 'body': f"An instance with ID '{existing_instance.instance_id}' already exists"
        }
```
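
For readability, here is a minimal consolidated sketch of the singleton HTTP-starter function that the fragments above belong to. It assumes the `azure-functions` and `azure-functions-durable` packages and route parameters named `functionName` and `instanceId`; the success response via `create_check_status_response` and the `HttpResponse`-based 409 are assumptions for illustration, not the exact published sample.

```python
import logging

import azure.functions as func
import azure.durable_functions as df


async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    client = df.DurableOrchestrationClient(starter)

    # Route parameters are assumed to be defined in function.json.
    function_name = req.route_params['functionName']
    instance_id = req.route_params['instanceId']

    existing_instance = await client.get_status(instance_id)

    # Start a new instance only if none exists yet, or the previous run has finished.
    if existing_instance is None or existing_instance.runtime_status in [
        df.OrchestrationRuntimeStatus.Completed,
        df.OrchestrationRuntimeStatus.Failed,
        df.OrchestrationRuntimeStatus.Terminated,
    ]:
        event_data = req.get_body()
        instance_id = await client.start_new(function_name, instance_id, event_data)
        logging.info(f"Started orchestration with ID = '{instance_id}'.")
        return client.create_check_status_response(req, instance_id)
    else:
        # An instance with this ID is still running; report a conflict.
        return func.HttpResponse(
            f"An instance with ID '{instance_id}' already exists",
            status_code=409,
        )
```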
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/deploy.md
Title: Deploy Start/Stop VMs v2 (preview)
description: This article tells how to deploy the Start/Stop VMs v2 (preview) feature for your Azure VMs in your Azure subscription. Previously updated : 03/29/2021 Last updated : 06/25/2021
Perform the steps in this topic in sequence to install the Start/Stop VMs v2 (preview) feature. After completing the setup process, configure the schedules to customize it to your requirements.
+> [!NOTE]
+> If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+ ## Deploy feature The deployment is initiated from the Start/Stop VMs v2 GitHub organization [here](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md). While this feature is intended to manage all of your VMs in your subscription across all resource groups from a single deployment within the subscription, you can install another instance of it based on the operations model or requirements of your organization. It also can be configured to centrally manage VMs across multiple subscriptions.
azure-functions Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/manage.md
Title: Manage Start/Stop VMs v2 (preview)
description: This article tells how to monitor status of your Azure VMs managed by the Start/Stop VMs v2 (preview) feature and perform other management tasks. Previously updated : 03/16/2021 Last updated : 06/25/2021
The log data each tile in the dashboard displays is refreshed every hour, with a
To learn about working with a log-based dashboard, see the following [tutorial](../../azure-monitor/visualize/tutorial-logs-dashboards.md).
+> [!NOTE]
+> If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+ ## Configure email notifications To change email notifications after Start/Stop VMs v2 (preview) is deployed, you can modify the action group created during deployment.
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/overview.md
description: This article describes version two of the Start/Stop VMs (preview)
Previously updated : 03/29/2021 Last updated : 06/25/2021 # Start/Stop VMs v2 (preview) overview
The Start/Stop VMs v2 (preview) feature starts or stops Azure virtual machines (
This new version of Start/Stop VMs v2 (preview) provides a decentralized low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the [original version](../../automation/automation-solution-vm-management.md) available with Azure Automation, but it is designed to take advantage of newer technology in Azure.
+> [!NOTE]
+> If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+ ## Overview Start/Stop VMs v2 (preview) is redesigned and it doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [previous version](../../automation/automation-solution-vm-management.md). This version relies on [Azure Functions](../../azure-functions/functions-overview.md) to handle the VM start and stop execution.
azure-functions Remove https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/remove.md
Title: Remove Start/Stop VMs v2 (preview) overview
description: This article describes how to remove the Start/Stop VMs v2 (preview) feature. Previously updated : 03/30/2021 Last updated : 06/25/2021
After you enable the Start/Stop VMs v2 (preview) feature to manage the running s
- The Application Insights instance - Azure Storage account
+> [!NOTE]
+> If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+ ## Delete the dedicated resource group To delete the resource group, follow the steps outlined in the [Azure Resource Manager resource group and resource deletion](../../azure-resource-manager/management/delete-resource-group.md) article.
azure-functions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/troubleshoot.md
Title: Troubleshoot Start/Stop VMs (preview)
description: This article tells how to troubleshoot issues encountered with the Start/Stop VMs (preview) feature for your Azure VMs. Previously updated : 03/31/2021 Last updated : 06/25/2021
Learn more about monitoring Azure Functions and logic apps:
* [How to configure monitoring for Azure Functions](../../azure-functions/configure-monitoring.md).
* [Monitor logic apps](../../logic-apps/monitor-logic-apps.md).
+
+* If you run into problems during deployment, encounter an issue when using Start/Stop VMs v2 (preview), or have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent (AMA), which collects monitorin
Previously updated : 03/16/2021 Last updated : 06/25/2021 # Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This articles provides an overview of the Azure Monitor agent including how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent including how to install it and how to configure data collection.
## Relationship to other agents The Azure Monitor Agent replaces the following agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](/azure/azure-monitor/faq#is-the-new-azure-monitor-agent-at-parity-with-existing-agents)):
In addition to consolidating this functionality into a single agent, the Azure M
### Changes in data collection The methods for defining data collection for the existing agents are distinctly different from each other, and each have challenges that are addressed with Azure Monitor agent. -- Log Analytics agent gets its configuration from a Log Analytics workspace. This is easy to centrally configure, but difficult to define independent definitions for different virtual machines. It can only send data to a Log Analytics workspace.
+- Log Analytics agent gets its configuration from a Log Analytics workspace. It's easy to centrally configure but difficult to define independent definitions for different virtual machines. It can only send data to a Log Analytics workspace.
-- Diagnostic extension has a configuration for each virtual machine. This is easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open source Telegraf agent is required to send data to Azure Monitor Metrics.
+- Diagnostic extension has a configuration for each virtual machine. It's easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open source Telegraf agent is required to send data to Azure Monitor Metrics.
Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They are independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent (preview)](data-collection-rule-azure-monitor-agent.md).
Azure Monitor agent uses [Data Collection Rules (DCR)](data-collection-rule-over
Azure Monitor agent coexists with the [generally available agents for Azure Monitor](agents-overview.md), but you may consider transitioning your VMs off the current agents during the Azure Monitor agent public preview period. Consider the following factors when making this determination. - **Environment requirements.** Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for newer environments, such as new operating system versions and types of networking requirements, will most likely be provided only in this new agent. You should assess whether your environment is supported by Azure Monitor agent. If not, then you may need to stay with the current agent. If Azure Monitor agent supports your current environment, then you should consider transitioning to it.-- **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isn't at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. You should consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require then consider transitioning to it. If there are critical features that you require then continue with the current agent until Azure Monitor agent reaches parity.-- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date published for the Log Analytics agents in August, 2021. The current agents will be supported for several years once deprecation begins.--
+- **Current and new feature requirements.** Azure Monitor agent introduces several new capabilities such as filtering, scoping, and multi-homing, but it isn't at parity yet with the current agents for other functionality such as custom log collection and integration with all solutions ([see solutions in preview](/azure/azure-monitor/faq#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent)). Most new capabilities in Azure Monitor will only be made available with Azure Monitor agent, so over time more functionality will only be available in the new agent. Consider whether Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent. If Azure Monitor agent has all the core capabilities you require, then consider transitioning to it. If there are critical features that you require, then continue with the current agent until Azure Monitor agent reaches parity.
+- **Tolerance for rework.** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If it will take a significant amount of work, then consider setting up your new environment with the new agent as it is now generally available. A deprecation date for the Log Analytics agents will be published in August 2021. The current agents will be supported for several years once deprecation begins.
## Supported resource types Azure virtual machines, virtual machine scale sets, and Azure Arc enabled servers are currently supported. Azure Kubernetes Service and other compute resource types are not currently supported.
Azure virtual machines, virtual machine scale sets, and Azure Arc enabled server
## Supported regions Azure Monitor agent is available in all public regions that support Log Analytics. Government regions and clouds are not currently supported.
+## Supported services and features
+The following table shows the current support for Azure Monitor agent with other Azure services.
+
+| Azure service | Current support |
+|:|:|
+| [Azure Security Center](../../security-center/security-center-introduction.md) | Private preview |
+| [Azure Sentinel](../../sentinel/overview.md) | Private preview |
++
+The following table shows the current support for Azure Monitor agent with Azure Monitor features.
+
+| Azure Monitor feature | Current support |
+|:|:|
+| [VM Insights](../vm/vminsights-overview.md) | Private preview |
+| [VM Insights guest health](../vm/vminsights-health-overview.md) | Public preview |
+
+The following table shows the current support for Azure Monitor agent with Azure solutions.
+
+| Solution | Current support |
+|:|:|
+| [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring (FIM) in Azure Security Center private preview. |
+| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (private preview), which doesn't require an agent. |
+| [SQL Server](../insights/sql-insights-overview.md) | Supported by SQL insights, which is currently in public preview. |
++ ## Coexistence with other agents The Azure Monitor agent can coexist with the existing agents so that you can continue to use their existing functionality during evaluation or migration. This is particularly important because of the limitations supporting existing solutions. You should be careful though in collecting duplicate data since this could skew query results and result in additional charges for data ingestion and retention.
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map. Previously updated : 03/29/2020 Last updated : 06/24/2021
Download [applicationinsights-agent-3.1.1.jar](https://github.com/microsoft/Appl
**2. Point the JVM to the agent**
-Add `-javaagent:path/to/applicationinsights-agent-3.1.1.jar` to your application's JVM args
+Add `-javaagent:path/to/applicationinsights-agent-3.1.1.jar` to your application's JVM args.
-Typical JVM args include `-Xmx512m` and `-XX:+UseG1GC`. So if you know where to add these, then you already know where to add this.
-
-For additional help with configuring your application's JVM args, please see [Tips for updating your JVM args](./java-standalone-arguments.md).
+For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
**3. Point the agent to your Application Insights resource**
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
In cluster billing options, data retention is billed for each workspace. Cluster
## Estimating the costs to manage your environment
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and select Log Analytics from the Type dropdown. Here you can enter the number of VMs and the GB of data you expect to collect from each VM. Typically 1 GB to 3 GB of data month is ingested from a typical Azure VM. If you're already evaluating Azure Monitor Logs already, you can use your data statistics from your own environment. See below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume).
+If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and select Log Analytics from the Type dropdown. You can estimate your Log Analytics cost based on your anticipated data volume and desired retention. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment (see below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume)). If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
+
+1. **Monitoring VMs:** With typical monitoring enabled, 1 GB to 3 GB of data per month is ingested per monitored VM (a rough arithmetic sketch follows this list).
+2. **Monitoring Azure Kubernetes Service (AKS) clusters:** Details on expected data volumes for monitoring a typical AKS cluster are available [here](../containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster). Follow these [best practices](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to control your AKS cluster monitoring costs.
+3. **Application monitoring:** The Azure Monitor pricing calculator includes a data volume estimator based on your application's usage and on a statistical analysis of Application Insights data volumes. In the Application Insights section of the pricing calculator, toggle the switch next to "Estimate data volume based on application activity" to use this.
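
To make the VM estimate in item 1 concrete, here is a rough, purely illustrative back-of-the-envelope calculation. The VM count, per-VM volume, and per-GB price below are assumptions for the example only; check the pricing calculator for current rates in your region.

```python
# Illustrative Log Analytics ingestion estimate for VM monitoring.
# All inputs are assumptions for the example, not official prices.
vm_count = 50                 # number of monitored VMs
gb_per_vm_per_month = 2.0     # typical range cited above is roughly 1-3 GB
assumed_price_per_gb = 2.30   # assumed pay-as-you-go USD price per ingested GB

monthly_gb = vm_count * gb_per_vm_per_month
monthly_cost = monthly_gb * assumed_price_per_gb
print(f"~{monthly_gb:.0f} GB/month ingested, ~${monthly_cost:.2f}/month before retention charges")
```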
## Understand your usage and estimate costs
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine.md
This scenario includes monitoring of the following type of machines using Azure
- Hybrid machines which are virtual machines running in other clouds, with a managed service provider, or on-premises. They also include physical machines running on-premises. ## Layers of monitoring
-There are fundamentally three layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements.
+There are fundamentally four layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements.
| Layer | Description |
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
Title: Create Bicep files - Visual Studio Code description: Use Visual Studio Code and the Bicep extension to Bicep files for deploy Azure resources Previously updated : 06/01/2021 Last updated : 06/25/2021
resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
You're almost done. Just provide values for those properties.
-Again, intellisense helps you. For `name`, provide the parameter that contains a name for the storage account. For `location`, set it to `eastus`. When adding SKU name and kind, intellisense presents the valid options.
+Again, intellisense helps you. For `name`, provide the parameter that contains a name for the storage account. For `location`, set it to `eastus`. When adding SKU name and kind, intellisense presents the valid options.
When you've finished, you have:
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.
+> [!NOTE]
+> Replace **{your-unique-name}** including the curly brackets with a unique storage account name.
+ When the deployment finishes, you should see a message indicating the deployment succeeded. If you get an error message indicating the storage account is already taken, the storage name you provided is in use. Provide a name that is more likely to be unique. ## Clean up resources
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> [!div class="mx-tableFixed"]
> | Entity | Scope | Length | Valid Characters |
> | | | | |
-> | actionGroups | resource group | 1-260 | Can't use:<br>`/&%\?` <br><br>Can't end with space or period. |
+> | actionGroups | resource group | 1-260 | Can't use:<br>`:<>+/&%\?` <br><br>Can't end with space or period. |
> | components | resource group | 1-260 | Can't use:<br>`%&\?/` <br><br>Can't end with space or period. |
> | scheduledQueryRules | resource group | 1-260 | Can't use:<br>`*<>%{}&:\\?/#` <br><br>Can't end with space or period. |
> | metricAlerts | resource group | 1-260 | Can't use:<br>`*#&+:<>?@%{}\/` <br><br>Can't end with space or period. |
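
As a purely illustrative aid, the sketch below checks a candidate name against the actionGroups constraints in the table above (1-260 characters, none of `:<>+/&%\?`, no trailing space or period). The function name and example values are hypothetical; the service itself remains the authority on valid names.

```python
# Illustrative check of the actionGroups naming constraints listed above.
# Not an official validation routine; the Azure service is authoritative.
FORBIDDEN_CHARS = set(':<>+/&%\\?')   # characters the table says can't be used

def is_valid_action_group_name(name: str) -> bool:
    if not 1 <= len(name) <= 260:
        return False
    if any(ch in FORBIDDEN_CHARS for ch in name):
        return False
    if name.endswith((' ', '.')):
        return False
    return True

print(is_valid_action_group_name("prod-alerts"))   # True
print(is_valid_action_group_name("alerts?"))       # False: forbidden character
print(is_valid_action_group_name("alerts."))       # False: trailing period
```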
azure-sql Database Import Export Hang https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import-export-hang.md
The import and export operations don't represent a traditional physical database
The Azure SQL Database Import/Export service provides a limited number of compute virtual machines (VMs) per region to process import and export operations. The compute VMs are hosted per region to make sure that the import or export avoids cross-region bandwidth delays and charges. If too many requests are made at the same time in the same region, significant delays can occur in processing the operations. The time that's required to complete requests can vary from a few seconds to many hours.
-> [!NOTE]
-> If a request is not processed within four days, the service automatically cancels the request.
## Recommended solutions
If your database exports are used only for recovery from accidental data deletio
## Related documents
-[Considerations when exporting a database](./database-export.md#considerations)
+[Considerations when exporting a database](./database-export.md#considerations)
azure-sql Elastic Pool Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-overview.md
Previously updated : 12/9/2020 Last updated : 06/23/2021 # Elastic pools help you manage and scale multiple databases in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The following steps can help you estimate whether a pool is more cost-effective
> [!IMPORTANT] > If the number of databases in a pool approaches the maximum supported, make sure to consider [Resource management in dense elastic pools](elastic-pool-resource-management.md).
+### Per database properties
+
+You can optionally set "per database" properties to modify resource consumption patterns in elastic pools. For more information, see resource limits documentation for [DTU](resource-limits-dtu-elastic-pools.md#database-properties-for-pooled-databases) and [vCore](resource-limits-vcore-elastic-pools.md#database-properties-for-pooled-databases) elastic pools.
+ ## Using other SQL Database features with elastic pools ### Elastic jobs and elastic pools
azure-sql Elastic Pool Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/elastic-pool-resource-management.md
Azure SQL Database provides several metrics that are relevant for this type of m
|Metric name|Description|Recommended average value| |-|--|| |`avg_instance_cpu_percent`|CPU utilization of the SQL process associated with an elastic pool, as measured by the underlying operating system. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `sqlserver_process_core_percent`, and can be viewed in Azure portal. This value is the same for every database in the same elastic pool.|Below 70%. Occasional short spikes up to 90% may be acceptable.|
-|`max_worker_percent`|[Worker thread]( https://docs.microsoft.com/sql/relational-databases/thread-and-task-architecture-guide) utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `workers_percent`, and can be viewed in Azure portal.|Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.|
+|`max_worker_percent`|[Worker thread](/sql/relational-databases/thread-and-task-architecture-guide) utilization. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of worker threads at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `workers_percent`, and can be viewed in Azure portal.|Below 80%. Spikes up to 100% will cause connection attempts and queries to fail.|
|`avg_data_io_percent`|IOPS utilization for read and write physical IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the number of IOPS at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `physical_data_read_percent`, and can be viewed in Azure portal.|Below 80%. Occasional short spikes up to 100% may be acceptable.| |`avg_log_write_percent`|Throughput utilizations for transaction log write IO. Provided for each database in the pool, as well as for the pool itself. There are different limits on the log throughput at the database level, and at the pool level, therefore monitoring this metric at both levels is recommended. Available in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) view in every database, and in the [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) view in the `master` database. This metric is also emitted to Azure Monitor, where it is [named](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserverselasticpools) `log_write_percent`, and can be viewed in Azure portal. When this metric is close to 100%, all database modifications (INSERT, UPDATE, DELETE, MERGE statements, SELECT … INTO, BULK INSERT, etc.) will be slower.|Below 90%. Occasional short spikes up to 100% may be acceptable.| |`oom_per_second`|The rate of out-of-memory (OOM) errors in an elastic pool, which is an indicator of memory pressure. Available in the [sys.dm_resource_governor_resource_pools_history_ex](/sql/relational-databases/system-dynamic-management-views/sys-dm-resource-governor-resource-pools-history-ex-azure-sql-database) view. See [Examples](#examples) for a sample query to calculate this metric.|0|
CROSS JOIN (
## Next steps - For an introduction to elastic pools, see [Elastic pools help you manage and scale multiple databases in Azure SQL Database](./elastic-pool-overview.md).-- For information on tuning query workloads to reduce resource utilization, see [Monitoring and tuning]( https://docs.microsoft.com/azure/sql-database/sql-database-monitoring-tuning-index), and [Monitoring and performance tuning](./monitor-tune-overview.md).
+- For information on tuning query workloads to reduce resource utilization, see [Monitoring and tuning](monitoring-tuning-index.yml), and [Monitoring and performance tuning](./monitor-tune-overview.md).
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
If all DTUs of an elastic pool are used, then each database in the pool receives
### Database properties for pooled databases
-The following table describes the properties for pooled databases.
+For each elastic pool, you can optionally specify per database minimum and maximum DTUs to modify resource consumption patterns within the pool. Specified min and max values apply to all databases in the pool. Customizing min and max DTUs for individual databases in the pool is not supported.
+
+You can also set maximum storage per database, for example to prevent a database from consuming all pool storage. This setting can be configured independently for each database.
+
+The following table describes per database properties for pooled databases.
| Property | Description | |: |: |
-| Max eDTUs per database |The maximum number of eDTUs that any database in the pool may use, if available based on utilization by other databases in the pool. Max eDTU per database is not a resource guarantee for a database. This setting is a global setting that applies to all databases in the pool. Set max eDTUs per database high enough to handle peaks in database utilization. Some degree of overcommitting is expected since the pool generally assumes hot and cold usage patterns for databases where all databases are not simultaneously peaking. For example, suppose the peak utilization per database is 20 eDTUs and only 20% of the 100 databases in the pool are peak at the same time. If the eDTU max per database is set to 20 eDTUs, then it is reasonable to overcommit the pool by 5 times, and set the eDTUs per pool to 400. |
-| Min eDTUs per database |The minimum number of eDTUs that any database in the pool is guaranteed. This setting is a global setting that applies to all databases in the pool. The min eDTU per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average eDTU utilization per database. The product of the number of databases in the pool and the min eDTUs per database cannot exceed the eDTUs per pool. For example, if a pool has 20 databases and the eDTU min per database set to 10 eDTUs, then the eDTUs per pool must be at least as large as 200 eDTUs. |
-| Max storage per database |The maximum database size set by the user for a database in a pool. However, pooled databases share allocated pool storage. Even if the total max storage *per database* is set to be greater than the total available storage *space of the pool*, the total space actually used by all of the databases will not be able to exceed the available pool limit. Max database size refers to the maximum size of the data files and does not include the space used by log files. |
+| Max DTUs per database |The maximum number of DTUs that any database in the pool may use, if available based on utilization by other databases in the pool. Max DTUs per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max DTUs per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
+| Min DTUs per database |The minimum number of DTUs reserved for any database in the pool. Consider setting a min DTUs per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min DTUs per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average DTUs utilization per database.|
+| Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
|||
+> [!IMPORTANT]
+> Because resources in an elastic pool are finite, setting min DTUs per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min DTUs guarantee are not available to databases active at that point in time.
+>
+> Additionally, setting min DTUs per database to a value greater than 0 implicitly limits the number of databases that can be added to the pool. For example, if you set the min DTUs to 100 in a 400 DTU pool, you will not be able to add more than 4 databases to the pool, because 100 DTUs are reserved for each database (see the sketch after this note).
+>
+
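
To make that density limit concrete, here is a tiny illustrative calculation using the numbers from the example above; the values are for demonstration only.

```python
# Illustrative only: how a per-database min DTU setting caps pool density.
pool_edtus = 400
min_dtus_per_database = 100

max_databases = pool_edtus // min_dtus_per_database
print(f"A {pool_edtus} DTU pool with a minimum of {min_dtus_per_database} DTUs "
      f"per database can hold at most {max_databases} databases.")
```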
+While the per database properties are expressed in DTUs, they also govern consumption of other resource types, such as data IO, log IO, and worker threads. As you adjust min and max per database DTUs values, reservations and limits for all resource types are adjusted proportionally.
+ ## Next steps * For vCore resource limits for a single database, see [resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
Previously updated : 06/04/2021 Last updated : 06/23/2021 # Resource limits for elastic pools using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
This article provides the detailed resource limits for Azure SQL Database elasti
> [!IMPORTANT] > Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
-Each read-only replica has its own resources, such as vCores, memory, data IOPS, TempDB, workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
+Each read-only replica of an elastic pool has its own resources, such as vCores, memory, data IOPS, TempDB, workers, and sessions. Each read-only replica is subject to elastic pool resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount using:
If all vCores of an elastic pool are busy, then each database in the pool receiv
## Database properties for pooled databases
-The following table describes the properties for pooled databases.
+For each elastic pool, you can optionally specify per database minimum and maximum vCores to modify resource consumption patterns within the pool. Specified min and max values apply to all databases in the pool. Customizing min and max vCores for individual databases in the pool is not supported.
-> [!NOTE]
-> The resource limits of individual databases in elastic pools are generally the same as for single databases outside of pools that has the same compute size (service objective). For example, the max concurrent workers for an GP_Gen4_1 database is 200 workers. So, the max concurrent workers for a database in a GP_Gen4_1 pool is also 200 workers. Note, the total number of concurrent workers in GP_Gen4_1 pool is 210.
+You can also set maximum storage per database, for example to prevent a database from consuming all pool storage. This setting can be configured independently for each database.
+
+The following table describes per database properties for pooled databases.
| Property | Description | |: |: |
-| Max vCores per database |The maximum number of vCores that any database in the pool may use, if available based on utilization by other databases in the pool. Max vCores per database is not a resource guarantee for a database. This setting is a global setting that applies to all databases in the pool. Set max vCores per database high enough to handle peaks in database utilization. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases where all databases are not simultaneously peaking.|
-| Min vCores per database |The minimum number of vCores that any database in the pool is guaranteed. This setting is a global setting that applies to all databases in the pool. The min vCores per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average vCores utilization per database. The product of the number of databases in the pool and the min vCores per database cannot exceed the vCores per pool.|
-| Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and database size. Max database size refers to the maximum size of the data files and does not include the space used by log files. |
+| Max vCores per database |The maximum number of vCores that any database in the pool may use, if available based on utilization by other databases in the pool. Max vCores per database is not a resource guarantee for a database. If the workload in each database does not need all available pool resources to perform adequately, consider setting max vCores per database to prevent a single database from monopolizing pool resources. Some degree of over-committing is expected since the pool generally assumes hot and cold usage patterns for databases, where all databases are not simultaneously peaking. |
+| Min vCores per database |The minimum number of vCores reserved for any database in the pool. Consider setting a min vCores per database when you want to guarantee resource availability for each database regardless of resource consumption by other databases in the pool. The min vCores per database may be set to 0, and is also the default value. This property is set to anywhere between 0 and the average vCores utilization per database.|
+| Max storage per database |The maximum database size set by the user for a database in a pool. Pooled databases share allocated pool storage, so the size a database can reach is limited to the smaller of remaining pool storage and maximum database size. Maximum database size refers to the maximum size of the data files and does not include the space used by the log file. |
|||
+> [!IMPORTANT]
+> Because resources in an elastic pool are finite, setting min vCores per database to a value greater than 0 implicitly limits resource utilization by each database. If, at a point in time, most databases in a pool are idle, resources reserved to satisfy the min vCores guarantee are not available to databases active at that point in time.
+>
+> Additionally, setting min vCores per database to a value greater than 0 implicitly limits the number of databases that can be added to the pool. For example, if you set the min vCores to 2 in a 20 vCore pool, it means that you will not be able to add more than 10 databases to the pool, because 2 vCores are reserved for each database.
+>
+
+Even though the per database properties are expressed in vCores, they also govern consumption of other resource types, such as data IO, log IO, and worker threads. As you adjust min and max per database vCore values, reservations and limits for all resource types are adjusted proportionally.
+
+> [!NOTE]
+> The resource limits of individual databases in elastic pools are generally the same as for single databases outside of pools that have the same compute size (service objective). For example, the max concurrent workers for a GP_Gen4_1 database is 200 workers. So, the max concurrent workers for a database in a GP_Gen4_1 pool is also 200 workers. Note that the total number of concurrent workers in a GP_Gen4_1 pool is 210.
+ ## Next steps - For vCore resource limits for a single database, see [resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 06/04/2021 Last updated : 06/23/2021 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
This article provides the detailed resource limits for single databases in Azure
> [!IMPORTANT] > Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
-Each read-only replica has its own resources, such as vCores, memory, data IOPS, TempDB, workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
+Each read-only replica of a database has its own resources, such as vCores, memory, data IOPS, TempDB, workers, and sessions. Each read-only replica is subject to the resource limits detailed later in this article.
You can set the service tier, compute size (service objective), and storage amount for a single database using:
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Provisioning and deprovisioning during sync group creation, update, and deletion
- Truncating tables is not an operation supported by Data Sync (changes won't be tracked). - Hyperscale databases are not supported. - Memory-optimized tables are not supported.
+- If the hub and member databases are in a virtual network, Data Sync won't work because the sync app, which is responsible for running sync between hub and members, does not support accessing the hub or member databases inside a customer's private link. This limitation still applies when the customer also uses the Data Sync Private Link feature.
#### Unsupported data types
azure-vmware Concepts Monitor Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-monitor-protection.md
Microsoft Azure native services let you monitor, manage, and protect your virtua
The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs. The Log Analytics agent enables collection of log data from Azure, Azure VMware Solution, and on-premises VMs. The log data is sent to Azure Monitor Logs and stored in a Log Analytics workspace. You can deploy the Log Analytics agent using Arc enabled servers [VM extensions support](../azure-arc/servers/manage-vm-extensions.md) for new and existing VMs.
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-public-internet-access.md
Title: Enable public internet access in Azure VMware Solution
+ Title: Enable public internet for Azure VMware Solution workloads
description: This article explains how to use the public IP functionality in Azure Virtual WAN. Previously updated : 02/04/2021 Last updated : 06/25/2021
-# Enable public internet access in Azure VMware Solution
+# Enable public internet for Azure VMware Solution workloads
Public IP is a feature in Azure VMware Solution connectivity. It makes resources, such as web servers, virtual machines (VMs), and hosts accessible through a public network.
You can have 100 public IPs per private cloud.
Now that you've covered how to use the public IP functionality in Azure VMware Solution, you may want to learn about: - Using public IP addresses with [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md).-- [Creating an IPSec tunnel into Azure VMware Solution](./configure-site-to-site-vpn-gateway.md).
+- [Creating an IPSec tunnel into Azure VMware Solution](./configure-site-to-site-vpn-gateway.md).
azure-vmware Integrate Azure Native Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/integrate-azure-native-services.md
In this article, you'll integrate Azure native services in your Azure VMware Sol
1. [Create an Azure Automation account](../automation/automation-create-standalone-account.md). >[!TIP]
- >You can [use an Azure Resource Manager (ARM) template to create an Automation accoun](../automation/quickstart-create-automation-account-template.md). Using an ARM template takes fewer steps compared to other deployment methods.
+ >You can [use an Azure Resource Manager (ARM) template to create an Automation account](../automation/quickstart-create-automation-account-template.md). Using an ARM template takes fewer steps compared to other deployment methods.
1. [Enable Update Management from an Automation account](../automation/update-management/enable-from-automation-account.md). This links your Log Analytics workspace to your automation account. It also enables Azure and non-Azure VMs in Update Management.
certification Concepts Marketing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/concepts-marketing.md
Previously updated : 03/15/2021 Last updated : 06/22/2021 # Marketing properties
The top of the product description page highlights key characteristics, some of
| Target industries | Top 3 industries that your device is optimized for | Marketing details | | Product description | Free text field for you to write your marketing description of your product. This can capture details not listed in the portal, or add additional context for the benefits of using your device. | Marketing details|
-The remainder of the page is focused on displaying the technical specifications of your device in table format that will help your customer better understand your product. For convenience, the information displayed at the top of the page is also listed here. The rest of the table is sectioned by the components specified in the portal.
+The remainder of the page is focused on displaying the technical specifications of your device in table format that will help your customer better understand your product. For convenience, the information displayed at the top of the page is also listed here, along with some additional device information. The rest of the table is sectioned by the components specified in the portal.
![PDP bottom page](./media/concepts-marketing/pdp-bottom.png) | Field | Description | Where to add in the portal | ||-|-|
-| Component type | Classification of the form factor and primary purpose of your device ([Learn more](./resources-glossary.md)) | Product details of Device details|
+| Environmental certifications | Official certifications received for performance in different environments | Hardware of Device details |
+| Operating conditions | Ingress Protection value or temperature ranges the device is qualified for | Software of device details |
+| Azure software set-up | Classification of the set-up process to connect the device to Azure ([Learn more](./how-to-software-levels.md)) | Software of Device details |
+| Component type | Classification of the form factor and primary purpose of your device ([Learn more](./resources-glossary.md)) | Hardware of Device details|
| Component name| Name of the component you are describing | Product details of Device details | | Additional component information | Additional hardware specifications such as included sensors, connectivity, accelerators, etc. | Additional component information of Device details ([Learn more](./how-to-using-the-components-feature.md)) | | Device dependency text | Partner-provided text describing the different dependencies the product requires to connect to Azure ([Learn more](./how-to-indirectly-connected-devices.md)) | Customer-facing comments section of Dependencies tab of Device details |
Available both on the product tile and product description page is a Shop button
| Get Device| Link to external website for customer to purchase the device (or request a quote from the distributor). This may be the same as the Manufacturer's page if the distributor is the same as the device manufacturer. If a purchase page is not available, this will redirect to the distributor's page for customer to contact them directly. | Distributor product page URL in marketing details. If no purchase page is available, link will default to Distributor URL in Marketing detail. | ## External links+ Also included within the Product Description page are links that navigate to partner-provided sites or files that help the customer better understand the product. They appear towards the top of the page, beneath the product description text. The links displayed will differ for different device types and certification programs. | Link | Description | Where to add in the portal |
certification How To Software Levels https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-software-levels.md
+
+ Title: Software levels of Azure Certified Devices
+description: A breakdown of the different software levels that an Azure Certified Device may be classified as.
++++ Last updated : 06/22/2021+++
+# Software levels of Azure Certified Devices
+
+Software levels are a feature defined by the Azure Certified Device program to help device builders indicate the technical level of difficulty a customer can expect when connecting the device to Azure services. Appearing on the catalog as "Azure software set-up," these values are aimed to help viewers better understand the product and its connection to Azure. The definitions of each of these levels are provided below.
+
+## Level 1
+
+User can immediately connect device to Azure by simply adding provisioning details. The certified IoT device already contains pre-installed software that was used for certification upon purchase. This level is most similar to having an "out-of-the-box" set-up experience for IoT beginners who are not as comfortable with compiling source code.
+
+## Level 2
+
+User must flash/apply manufacturer-provided software image to the device to connect to Azure. Extra tools/software experience may be required. The link to the software image is also provided in our catalog.
+
+## Level 3
+
+User must follow a manufacturer-provided guide to prepare and install Azure-specific software. No Azure-specific software image is provided, so some customization and compilation of provided source code is required.
+
+## Level 4
+
+User must develop, customize, and recompile their own device code to connect to Azure. No manufacturer-supported source code is available. This level is most well suited for developers looking to create custom deployments for their device.
+
+## Next steps
+
+These levels are intended to help you build IoT solutions with Azure. Ready to get started? Visit the [Azure Certified Device catalog](https://devicecatalog.azure.com) to start searching for devices!
+
+Are you a device builder who is looking to add this software level to your certified device? Check out the links below.
+- [Edit a previously published device](how-to-edit-published-device.md)
+- [Tutorial: Adding device details](tutorial-02-adding-device-details.md)
certification How To Using The Components Feature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-using-the-components-feature.md
You may have questions regarding how many components to include, or what compone
| Finished Product | 1 | Customer Ready Product, Discrete | N/A | | Finished Product with **detachable peripheral(s)** | 2 or more | Customer Ready Product, Discrete | Peripheral / Discrete or Integrated | | Finished Product with **integrated component(s)** | 2 or more | Customer Ready Product, Discrete | Select appropriate type / Discrete or integrated |
-| Solution-Ready Dev Kit | 2 or more | Customer Ready Product, Discrete | Select appropriate type / Discrete or integrated |
+| Solution-Ready Dev Kit | 2 or more | Customer Ready Product, Discrete or Integrated| Select appropriate type / Discrete or integrated |
## Example component usage
certification Tutorial 01 Creating Your Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/tutorial-01-creating-your-project.md
Previously updated : 03/01/2021 Last updated : 06/22/2021
In this tutorial, you will learn how to:
## Prerequisites - - Valid work/school [Azure Active Directory account](../active-directory/fundamentals/active-directory-whatis.md). - Verified Microsoft Partner Network (MPN) account. If you don't have an MPN account, [join the partner network](https://partner.microsoft.com/) before you begin.
Then, you must supply basic device information. You can edit this information
| Device type | Specification of Finished Product or Solution-Ready Developer Kit. For more information about the terminology, see [Certification glossary](./resources-glossary.md). | | Device class | Gateway, Sensor, or other. For more information about the terminology, see [Certification glossary](./resources-glossary.md). | | Device source code URL | Required if you are certifying a Solution-Ready Dev Kit, optional otherwise. URL must be to a GitHub location for your device code. |+
+ > [!Note]
+ > If you are marketing a Microsoft service (e.g. Azure Sphere), please ensure that your device name adheres to Microsoft [branding guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks).
+ 1. Select the `Next` button to continue to the `Certifications` tab. ![Image of the Create new project form, Certifications tab](./media/images/create-new-project-certificationswindow.png)
cognitive-services Tutorial Bing Image Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/tutorial-bing-image-search-single-page-app.md
Leave the command window open while you use the tutorial app; closing the window
## See also
-* [Bing Image Search API reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v7-reference)
+* [Bing Image Search API reference](/rest/api/cognitiveservices/bing-images-api-v7-reference)
cognitive-services Tutorial Image Post https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Image-Search/tutorial-image-post.md
If there are identifiable people or places in the image, this request will retur
## See also
-* [Bing Image Search API reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v7-reference)
+* [Bing Image Search API reference](/rest/api/cognitiveservices/bing-images-api-v7-reference)
cognitive-services Tutorial Bing News Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-News-Search/tutorial-bing-news-search-single-page-app.md
Leave the command window open while you use the tutorial app; closing the window
## Next steps > [!div class="nextstepaction"]
-> [Bing News Search API reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-news-api-v7-reference)
+> [Bing News Search API reference](/rest/api/cognitiveservices/bing-news-api-v7-reference)
cognitive-services Tutorial Bing Video Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Video-Search/tutorial-bing-video-search-single-page-app.md
Leave the command window open while you use the tutorial app; closing the window
## Next steps > [!div class="nextstepaction"]
-> [Bing Video Search API reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-video-api-v7-reference)
+> [Bing Video Search API reference](/rest/api/cognitiveservices/bing-video-api-v7-reference)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Web-Search/language-support.md
Alternatively, you can specify the market with the `mkt` query parameter, and a
## Next steps
-* [Bing Image Search API reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-images-api-v7-reference)
+* [Bing Image Search API reference](/rest/api/cognitiveservices/bing-images-api-v7-reference)
cognitive-services Tutorial Bing Web Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Bing-Web-Search/tutorial-bing-web-search-single-page-app.md
Leave the command window open while you use the sample app; closing the window s
## Next steps > [!div class="nextstepaction"]
-> [Bing Web Search API v7 reference](//docs.microsoft.com/rest/api/cognitiveservices/bing-web-api-v7-reference)
+> [Bing Web Search API v7 reference](/rest/api/cognitiveservices/bing-web-api-v7-reference)
cognitive-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-container-support.md
Previously updated : 06/07/2021 Last updated : 06/25/2021 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Cognitive Services provides and supports Docker containers for each service.
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
Previously updated : 05/13/2021 Last updated : 06/25/2021
Release notes for `1.1.013050001-amd64-preview`
- ## Form Recognizer
-The [Form Recognizer][fr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/custom-form` repository and is named `labeltool`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool`.
+Form Recognizer features are supported by seven containers:
-This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/custom-form/labeltool/tags/list).
+| Container name | Fully qualified image name |
+|||
+| **Layout** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout |
+| **Business Card** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard |
+| **ID Document** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document |
+| **Receipt** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt |
+| **Invoice** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice |
+| **Custom API** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api |
+| **Custom Supervised** | mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised |
-# [Latest version](#tab/current)
+[Form Recognizer][fr-containers] container images can be found on the `mcr.microsoft.com` container registry syndicate. They reside within the `azure-cognitive-services/form-recognizer` repository.
-| Image Tags | Notes |
-|-|:|
-| `latest` | |
-| `1.1.009301-amd64-preview` | |
+Container images have the following tags available:
+# [Latest version](#tab/current)
+
+| Container | Tags |
+||:|
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.0.016140001-08108749-amd64-preview`|
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.016190001-amd64-preview` </br> &bullet; `2.1.016320001-amd64-preview` |
+| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
+| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
+| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`</br>&bullet; `2.1.016190001-amd64-preview`</br>&bullet; `2.1.016320001-amd64-preview` |
+| **Custom API** | &bullet; `latest` </br> &bullet;`2.1-distroless-20210622013115034-0cc5fcf6`</br>&bullet; `2.1-preview`|
+| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-distroless-20210622013149174-0cc5fcf6`</br>&bullet; `2.1-preview`|
# [Previous versions](#tab/previous)
-| Image Tags | Notes |
-|-|:|
-| `1.1.008640001-amd64-preview` | |
-| `1.1.008510001-amd64-preview` | |
+> [!IMPORTANT]
+> The Form Recognizer v1.0 container has been retired.
This container image has the following tags available.
[ad-containers]: ../anomaly-Detector/anomaly-detector-container-howto.md [cv-containers]: ../computer-vision/computer-vision-how-to-install-containers.md [fa-containers]: ../face/face-how-to-install-containers.md
-[fr-containers]: ../form-recognizer/form-recognizer-container-howto.md
+[fr-containers]: ../form-recognizer/containers/form-recognizer-container-install-run.md
[lu-containers]: ../luis/luis-container-howto.md [sp-stt]: ../speech-service/speech-container-howto.md?tabs=stt [sp-cstt]: ../speech-service/speech-container-howto.md?tabs=cstt
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/best-practices.md
+
+ Title: Azure Communication Services - best practices
+description: Learn more about Azure Communication Services best practices
+++++ Last updated : 06/18/2021++++
+# Best practices: Azure Communication Services calling SDKs
+This article provides information about best practices related to the Azure Communication Services (ACS) calling SDKs.
+
+## ACS web JavaScript SDK best practices
+This section provides information about best practices associated with the Azure Communication Services JavaScript voice and video calling SDK.
+
+## JavaScript voice and video calling SDK
+
+### Plug in a microphone or enable a microphone from the device manager when an ACS call is in progress
+When there is no microphone available at the beginning of a call, and then a microphone becomes available, the "noMicrophoneDevicesEnumerated" call diagnostic event will be raised.
+When this happens, your application should invoke `askDevicePermission` to obtain user consent to enumerate devices. The user will then be able to mute or unmute the microphone.
+
+### Stop video on page hide
+When the user navigates away from the calling tab, video streaming stops, but some devices continue to stream the last frame. To avoid this issue, developers are encouraged to stop video streaming when users navigate away from an active video-enabled call. Video can be stopped by calling the `call.stopVideo` API.
+```JavaScript
+// `call` and `localVideoStream` are assumed to exist elsewhere in your app.
+let videoStoppedOnHide = false;
+document.addEventListener("visibilitychange", async function() {
+    if (document.visibilityState === 'visible' && videoStoppedOnHide) {
+        // Start video again because it was stopped on visibility change (flag true)
+        await call.startVideo(localVideoStream);
+        videoStoppedOnHide = false;
+    } else if (document.visibilityState !== 'visible' && call.localVideoStreams.length > 0) {
+        // Stop video if it's on and set flag = true to keep track
+        await call.stopVideo(localVideoStream);
+        videoStoppedOnHide = true;
+    }
+});
+```
+
+### Dispose video stream renderer view
+Communication Services applications should dispose `VideoStreamRendererView`, or its parent `VideoStreamRenderer` instance, when it is no longer needed.
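+
+A minimal sketch of this pattern (assuming `remoteVideoStream` is a `RemoteVideoStream` obtained from a remote participant, `container` is a DOM element in your page, and the code runs inside an async function):
+
+```JavaScript
+import { VideoStreamRenderer } from '@azure/communication-calling';
+
+const renderer = new VideoStreamRenderer(remoteVideoStream);
+const view = await renderer.createView();
+container.appendChild(view.target);
+
+// When the video no longer needs to be shown, release the resources.
+view.dispose();        // dispose just this view...
+renderer.dispose();    // ...or dispose the renderer and all of its views
+```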
+
+### Hang up the call on onbeforeunload event
+Your application should invoke `call.hangUp` when the `onbeforeunload` event is emitted.
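+
+A minimal sketch (assuming `call` is the active `Call` object):
+
+```JavaScript
+window.addEventListener('beforeunload', () => {
+    // Best effort: end the call before the page unloads.
+    call.hangUp();
+});
+```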
+
+### Hang up the call on microphoneMuteUnexpectedly UFD
+When an iOS/Safari user receives a PSTN call, Azure Communication Services loses microphone access.
+Azure Communication Services will raise the `microphoneMuteUnexpectedly` call diagnostic event, and at this point Communication Services will not be able to regain access to the microphone.
+It's recommended to hang up the call (`call.hangUp`) when this situation occurs.
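+
+A minimal sketch of this pattern (assuming `call` is the active `Call` object; the User Facing Diagnostics feature API shown here follows the current SDK and may differ in older SDK versions):
+
+```JavaScript
+import { Features } from '@azure/communication-calling';
+
+const diagnostics = call.feature(Features.UserFacingDiagnostics);
+diagnostics.media.on('diagnosticChanged', (diagnosticInfo) => {
+    if (diagnosticInfo.diagnostic === 'microphoneMuteUnexpectedly' && diagnosticInfo.value === true) {
+        // Microphone access was lost (for example, iOS/Safari during a PSTN call).
+        call.hangUp();
+    }
+});
+```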
+
+### Device management
+Developers should use the SDK for device and media operations.
+- The application should use `DeviceManager.askDevicePermission` to get user consent to use devices (see the sketch after this list).
+- The application should not use browser APIs like `getUserMedia` or `getDisplayMedia` to acquire streams outside of the SDK. If it does so, make sure it disposes of those streams before using `DeviceManager` or accessing any other device via the ACS SDK.
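+
+A minimal sketch of requesting device permissions through the SDK (assuming `callClient` is an existing `CallClient` instance and the code runs inside an async function):
+
+```JavaScript
+const deviceManager = await callClient.getDeviceManager();
+
+// Prompt the user for permission to use the microphone and camera.
+const deviceAccess = await deviceManager.askDevicePermission({ audio: true, video: true });
+console.log(`Audio allowed: ${deviceAccess.audio}, video allowed: ${deviceAccess.video}`);
+```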
+
+## Next steps
+For more information, see the following articles:
+
+- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)
+- [Reference documentation](reference.md)
container-registry Container Registry Tasks Pack Build https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tasks-pack-build.md
Title: Build image with Cloud Native Buildpack description: Use the az acr pack build command to build a container image from an app and push to Azure Container Registry, without using a Dockerfile. Previously updated : 10/24/2019 Last updated : 06/24/2021
At a minimum, specify the following when you run `az acr pack build`:
* An Azure container registry where you run the command * An image name and tag for the resulting image * One of the [supported context locations](container-registry-tasks-overview.md#context-locations) for ACR Tasks, such as a local directory, a GitHub repo, or a remote tarball
-* The name of a Buildpack builder image suitable for your application. Azure Container Registry caches builder images such as `cloudfoundry/cnb:0.0.34-cflinuxfs3` for faster builds.
+* The name of a Buildpack builder image suitable for your application. If not cached by Azure Container Registry, the builder image must be pulled using the `--pull` parameter.
`az acr pack build` supports other features of ACR Tasks commands including [run variables](container-registry-tasks-reference-yaml.md#run-variables) and [task run logs](container-registry-tasks-logs.md) that are streamed and also saved for later retrieval. ## Example: Build Node.js image with Cloud Foundry builder
-The following example builds a container image from a Node.js app in the [Azure-Samples/nodejs-docs-hello-world](https://github.com/Azure-Samples/nodejs-docs-hello-world) repo, using the `cloudfoundry/cnb:0.0.34-cflinuxfs3` builder. This builder is cached by Azure Container Registry, so a `--pull` parameter isn't required:
+The following example builds a container image from a Node.js app in the [Azure-Samples/nodejs-docs-hello-world](https://github.com/Azure-Samples/nodejs-docs-hello-world) repo, using the `cloudfoundry/cnb:cflinuxfs3` builder.
```azurecli az acr pack build \ --registry myregistry \
- --image {{.Run.Registry}}/node-app:1.0 \
- --builder cloudfoundry/cnb:0.0.34-cflinuxfs3 \
+ --image node-app:1.0 \
+ --pull --builder cloudfoundry/cnb:cflinuxfs3 \
https://github.com/Azure-Samples/nodejs-docs-hello-world.git ```
Browse to `localhost:1337` in your favorite browser to see the sample web app. P
## Example: Build Java image with Heroku builder
-The following example builds a container image from the Java app in the [buildpack/sample-java-app](https://github.com/buildpack/sample-java-app) repo, using the `heroku/buildpacks:18` builder. The `--pull` parameter specifies that the command should pull the latest builder image.
+The following example builds a container image from the Java app in the [buildpack/sample-java-app](https://github.com/buildpack/sample-java-app) repo, using the `heroku/buildpacks:18` builder.
```azurecli az acr pack build \
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is built into Azure Cosmos DB. When you [provision a dedic
There are only minimal code changes required in order for your application to use a dedicated gateway. Both new and existing Azure Cosmos DB accounts can provision a dedicated gateway for improved read performance.
+> [!NOTE]
+> Do you have any feedback about the dedicated gateway? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team:
+cosmoscachefeedback@microsoft.com
+ ## Connection modes There are three ways to connect to an Azure Cosmos DB account:
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-integrated-cache.md
For a read request (point read or query) to utilize the integrated cache, **all*
- Your client uses gateway mode (Python and Node.js SDKs always use gateway mode) - The consistency for the request must be set to eventual.
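+
+For illustration, a minimal sketch with the JavaScript SDK (`@azure/cosmos`), assuming the documented dedicated gateway endpoint form `https://<account-name>.sqlx.cosmos.azure.com` and placeholder credentials:
+
+```JavaScript
+const { CosmosClient } = require("@azure/cosmos");
+
+// The JavaScript SDK always uses gateway mode. Point the client at the dedicated
+// gateway endpoint and use eventual consistency so reads can use the integrated cache.
+const client = new CosmosClient({
+  endpoint: "https://<account-name>.sqlx.cosmos.azure.com",
+  key: "<account-key>",
+  consistencyLevel: "Eventual"
+});
+```
+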
+> [!NOTE]
+> Do you have any feedback about the integrated cache? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team:
+cosmoscachefeedback@microsoft.com
++ ## Next steps - [Integrated cache FAQ](integrated-cache-faq.md)
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/integrated-cache.md
An integrated cache is automatically configured within the dedicated gateway. Th
The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. The item cache and query cache share the same capacity within the integrated cache and the LRU eviction policy applies to both. In other words, data is evicted from the cache strictly based on when it was least recently used, regardless of whether it is a point read or query.
+> [!NOTE]
+> Do you have any feedback about the integrated cache? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team:
+cosmoscachefeedback@microsoft.com
+ ## Workloads that benefit from the integrated cache The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, is not the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
cost-management-billing Troubleshoot Cant Find Invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/troubleshoot-cant-find-invoice.md
+
+ Title: Troubleshoot can't view invoice in the Azure portal
+description: Resolving an issue when trying to view your invoice in the Azure portal.
+++
+tags: billing
+++ Last updated : 06/25/2021+++
+# Troubleshoot issues while trying to view your invoice in the Azure portal
+
+You may experience issues when you try to view your invoice in the Azure portal. This short guide will discuss some common issues.
+
+## Common issues and solutions
+
+#### <a name="subnotfound"></a> You see the message ΓÇ£We canΓÇÖt display the invoices for your subscription. This typically happens when you sign in with an email, which doesnΓÇÖt have access to view invoices. Check youΓÇÖve signed in with the correct email address. If you are still seeing the error, see Why you might not see an invoice.ΓÇ¥
+
+This happens when the identity that you used to sign in does not have access to the subscription.
+
+To resolve this issue, try one of the following options:
+
+**Verify that you're signed in with the correct email address:**
+
+Only the email address that has the account administrator role for the subscription can view its invoice. Verify that you've signed in with the correct email address. The email address is displayed in the email that you receive when your invoice is generated.
+
+ ![Screenshot that shows invoice email](./media/troubleshoot-cant-find-invoice/invoice-email.png)
+
+**Verify that you're signed in with the correct account:**
+
+Some customers have two accounts with the same email address - a work or a school account and a personal account. Typically, only one of their accounts has permission to view invoices. You might have two accounts with your email address. If you sign in with the account that doesn't have permission, you would not see the invoice. To identify if you have multiple accounts and use a different account, follow the steps below:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) in an InPrivate/Incognito window.
+1. If you have multiple accounts with the same email, then you'll be prompted to select either **Work or school account** or **Personal account**. Select one of the accounts then follow the [instructions here to view your invoice](../understand/download-azure-invoice.md#download-your-mosp-azure-subscription-invoice).
+
+ ![Screenshot that shows account selection](./media/troubleshoot-cant-find-invoice/two-accounts.png)
+
+1. If you still can't view the invoice in the Azure portal, try the other account.
+
+**Verify that you're signed in to the correct Azure Active Directory (AAD) tenant:**
+
+Your billing account and subscription are associated with an AAD tenant. If you're signed in to the incorrect tenant, you won't see the invoice for your subscription. Try the following steps to switch tenants in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select your email address from the top-right of the page.
+1. Select Switch directory.
+
+ ![Screenshot that shows selecting switch directory](./media/troubleshoot-cant-find-invoice/select-switch-tenant.png)
+
+1. Select a tenant from the **All Directories** section. If you don't see the **All Directories** section, you don't have access to multiple tenants.
+
+ ![Screenshot that shows selecting another directory](./media/troubleshoot-cant-find-invoice/select-another-tenant.png)
+
+#### <a name="cantsearchinvoice"></a>You couldn't find the invoice that you see on your credit card statement
+
+You find a charge **Microsoft Gxxxxxxxxx** on your credit card statement. You can find all other invoices in the portal, but not the invoice for Gxxxxxxxxx. This happens when the invoice belongs to a different subscription or billing profile. Follow the steps below to view the invoice.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for the invoice number in the Azure portal search bar.
+1. Select **View your invoice**.
+
+ ![Screenshot that shows searching for invoice](./media/troubleshoot-cant-find-invoice/search-invoice.png)
+
+## Contact us for help
+
+If you have questions or need help, [create a support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+- [View and download your Azure invoice](../understand/download-azure-invoice.md)
+- [View and download your Azure usage and charges](../understand/download-azure-daily-usage.md)
+- [No subscriptions found sign in error for Azure portal](no-subscriptions-found.md)
data-factory Compute Optimized Retire https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/compute-optimized-retire.md
- Title: Compute optimized retirement
-description: Data flow compute optimized option is being retired
---- Previously updated : 06/09/2021--
-# Retirement of data flow compute optimized option
--
-Azure Data Factory and Azure Synapse Analytics data flows provide a low-code mechanism to transform data in ETL jobs at scale using a graphical design paradigm. Data flows execute on the Azure Data Factory and Azure Synapse Analytics serverless Integration Runtime facility. The scalable nature of Azure Data Factory and Azure Synapse Analytics Integration Runtimes enabled three different compute options for the Azure Databricks Spark environment that is utilized to execute data flows at scale: Memory Optimized, General Purpose, and Compute Optimized. Memory Optimized and General Purpose are the recommended classes of data flow compute to use with your Integration Runtime for production workloads. Because Compute Optimized will often not suffice for common use cases with data flows, we recommend using General Purpose or Memory Optimized data flows in production workloads.
-
-## Migration steps
-
-1. Create a new Azure Integration Runtime with "General Purpose" or "Memory Optimized" as the compute type.
-2. Set your data flow activity using either of those compute types.
-
- ![Compute types](media/data-flow/compute-types.png)
-
-## Comparison between different compute options
-
-| Compute Option | Performance |
-| :-- | :-- |
-| General Purpose Data Flows | Good for general use cases in production workloads |
-| Memory Optimized Data Flows | Best performing runtime for data flows when working with large datasets and many calculations |
-| Compute Optimized Data Flows | Not recommended for production workloads |
-
-[Find more detailed information at the data flows FAQ here](https://aka.ms/dataflowsqa)
-[Post questions and find answers on data flows on Microsoft Q&A](https://aka.ms/datafactoryqa)
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-schedule-trigger.md
You can create a **schedule trigger** to schedule a pipeline to run periodically
> For time zones that observe daylight saving, trigger time will auto-adjust for the twice a year change. To opt out of the daylight saving change, please select a time zone that does not observe daylight saving, for instance UTC 1. Specify **Recurrence** for the trigger. Select one of the values from the drop-down list (Every minute, Hourly, Daily, Weekly, and Monthly). Enter the multiplier in the text box. For example, if you want the trigger to run once for every 15 minutes, you select **Every Minute**, and enter **15** in the text box.
- 1. In the recurrence part, if you choose "Day(s), Week(s) or Month(s)" from the drop-down, you can find "Advanced recurrence options".
+ 1. In the **Recurrence** section, if you choose "Day(s), Week(s) or Month(s)" from the drop-down, you can find "Advanced recurrence options".
:::image type="content" source="./media/how-to-create-schedule-trigger/advanced.png" alt-text="Advanced recurrence options of Day(s), Week(s) or Month(s)"::: 1. To specify an end date time, select **Specify an End Date**, and specify _Ends On_, then select **OK**. There is a cost associated with each pipeline run. If you are testing, you may want to ensure that the pipeline is triggered only a couple of times. However, ensure that there is enough time for the pipeline to run between the publish time and the end time. The trigger comes into effect only after you publish the solution to Data Factory, not when you save the trigger in the UI.
databox-online Azure Stack Edge Gpu Prepare Windows Generalized Image Iso https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-windows-generalized-image-iso.md
Previously updated : 04/15/2021 Last updated : 06/25/2021 #Customer intent: As an IT admin, I need to be able to quickly deploy new Windows virtual machines on my Azure Stack Edge Pro GPU device, and I want to use an ISO image for OS installation.
After creating the new virtual machine, follow these steps to mount your ISO ima
![In BIOS settings, the first item under Startup order should be CD](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-14.png) + 3. Under **DVD Drive**, select **Image file**, and browse to your ISO image. ![In DVD drive settings, select the image file for your VHD](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-15.png)
To finish building your virtual machine, you need to start the virtual machine a
[!INCLUDE [Connect to Hyper-V VM](../../includes/azure-stack-edge-connect-to-hyperv-vm.md)]
-## Generalize the VHD
+> [!NOTE]
+> If you installed the Windows Server 2019 Standard operating system on your virtual machine, you'll need to change the **BIOS** setting to **IDE** before you [generalize the VHD](#generalize-the-vhd).
+
+## Generalize the VHD
+
+Use the *sysprep* utility to generalize the VHD.
+
+1. If you're generalizing a Windows Server 2019 Standard VM, before you generalize the VHD, make **IDE** the first item in the **Startup order** in the virtual machine's BIOS settings.
+
+ 1. In Hyper-V Manager, select the VM, and then select **Settings**.
+
+ ![Screenshot showing how to open Settings for a selected VM in Hyper-V Manager](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-01.png)
+
+ 1. Under **BIOS**, ensure that **IDE** is at the top of the **Startup order** list. Then select **OK** to save the setting.
+
+ ![Screenshot showing IDE at top of startup order in BIOS settings for a VM in Hyper-V Manager](./media/azure-stack-edge-gpu-prepare-windows-generalized-image-iso/vhd-from-iso-02.png)
+
+1. Inside the VM, open a command prompt.
+
+1. Run the following command to generalize the VHD.
+
+ ```
+ c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm
+ ```
+
+ For details, see [Sysprep (system preparation) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
+
+1. After the command is complete, the VM will shut down. **Do not restart the VM**.
+<!--[!INCLUDE [Generalize the VHD](../../includes/azure-stack-edge-generalize-vhd.md)]-->
Your VHD can now be used to create a generalized image to use on Azure Stack Edge Pro GPU.
databox-online Azure Stack Edge Gpu Prepare Windows Vhd Generalized Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-windows-vhd-generalized-image.md
Previously updated : 04/15/2021 Last updated : 06/18/2021 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
To finish building your virtual machine, you need to start the virtual machine a
After you're connected to the VM, complete the Machine setup wizard, and then sign into the VM.
-## Generalize the VHD
+## Generalize the VHD
+
+Use the *sysprep* utility to generalize the VHD.
[!INCLUDE [Generalize the VHD](../../includes/azure-stack-edge-generalize-vhd.md)]
databox-online Azure Stack Edge Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-security.md
The Azure Stack Edge service is a management service that's hosted in Azure. The
[!INCLUDE [data-box-edge-gateway-data-rest](../../includes/data-box-edge-gateway-service-protection.md)]
-## Azure Stack Edge Pro FPGA device protection
+## Azure Stack Edge device protection
-The Azure Stack Edge Pro FPGA device is an on-premises device that helps transform your data by processing it locally and then sending it to Azure. Your device:
+The Azure Stack Edge device is an on-premises device that helps transform your data by processing it locally and then sending it to Azure. Your device:
- Needs an activation key to access the Azure Stack Edge service. - Is protected at all times by a device password.
event-grid Enable Diagnostic Logs Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/enable-diagnostic-logs-topic.md
Title: Azure Event Grid - Enable diagnostic logs for topics or domains description: This article provides step-by-step instructions on how to enable diagnostic logs for an Azure event grid topic. Previously updated : 04/22/2021 Last updated : 06/25/2021 # Enable Diagnostic logs for Azure event grid topics or domains
This article provides step-by-step instructions to enable diagnostic settings fo
"message": "Message:outcome=NotFound, latencyInMs=2635, id=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx, systemId=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, state=FilteredFailingDelivery, deliveryTime=11/1/2019 12:17:10 AM, deliveryCount=0, probationCount=0, deliverySchema=EventGridEvent, eventSubscriptionDeliverySchema=EventGridEvent, fields=InputEvent, EventSubscriptionId, DeliveryTime, State, Id, DeliverySchema, LastDeliveryAttemptTime, SystemId, fieldCount=, requestExpiration=1/1/0001 12:00:00 AM, delivered=False publishTime=11/1/2019 12:17:10 AM, eventTime=11/1/2019 12:17:09 AM, eventType=Type, deliveryTime=11/1/2019 12:17:10 AM, filteringState=FilteredWithRpc, inputSchema=EventGridEvent, publisher=DIAGNOSTICLOGSTEST-EASTUS.EASTUS-1.EVENTGRID.AZURE.NET, size=363, fields=Id, PublishTime, SerializedBody, EventType, Topic, Subject, FilteringHashCode, SystemId, Publisher, FilteringTopic, TopicCategory, DataVersion, MetadataVersion, InputSchema, EventTime, fieldCount=15, url=sb://diagnosticlogstesting-eastus.servicebus.windows.net/, deliveryResponse=NotFound: The messaging entity 'sb://diagnosticlogstesting-eastus.servicebus.windows.net/eh-diagnosticlogstest' could not be found. TrackingId:c98c5af6-11f0-400b-8f56-c605662fb849_G14, SystemTracker:diagnosticlogstesting-eastus.servicebus.windows.net:eh-diagnosticlogstest, Timestamp:2019-11-01T00:17:13, referenceId: ac141738a9a54451b12b4cc31a10dedc_G14:" } ```+
+## Use Azure Resource Manager template
+Here's a sample Azure Resource Manager template to enable diagnostic settings for an event grid topic. When you deploy this sample template, the following resources are created.
+
+- An event grid topic
+- A Log Analytics workspace
+
+Then, it creates a diagnostic setting on the topic to send diagnostic information to the Log Analytics workspace.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "topic_name": {
+ "defaultValue": "spegrid0917topic",
+ "type": "String"
+ },
+ "log_analytics_workspace_name": {
+ "defaultValue": "splogaw0625",
+ "type": "String"
+ },
+ "location": {
+ "defaultValue": "eastus",
+ "type": "String"
+ },
+ "sku": {
+ "defaultValue": "Free",
+ "type": "String"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.EventGrid/topics",
+ "apiVersion": "2020-10-15-preview",
+ "name": "[parameters('topic_name')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Basic"
+ },
+ "kind": "Azure",
+ "identity": {
+ "type": "None"
+ },
+ "properties": {
+ "inputSchema": "EventGridSchema",
+ "publicNetworkAccess": "Enabled"
+ }
+ },
+ {
+ "apiVersion": "2017-03-15-preview",
+ "name": "[parameters('log_analytics_workspace_name')]",
+ "location": "[parameters('location')]",
+ "type": "Microsoft.OperationalInsights/workspaces",
+ "properties": {
+ "sku": {
+ "name": "[parameters('sku')]"
+ }
+ }
+ },
+ {
+ "type": "Microsoft.EventGrid/topics/providers/diagnosticSettings",
+ "apiVersion": "2017-05-01-preview",
+ "name": "[concat(parameters('topic_name'), '/', 'Microsoft.Insights/', parameters('log_analytics_workspace_name'))]",
+ "location": "[parameters('location')]",
+ "dependsOn": [
+ "[resourceId('Microsoft.EventGrid/topics', parameters('topic_name'))]",
+ "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('log_analytics_workspace_name'))]"
+ ],
+ "properties": {
+ "workspaceId": "[resourceId('Microsoft.OperationalInsights/workspaces', parameters('log_analytics_workspace_name'))]",
+ "metrics": [
+ {
+ "category": "AllMetrics",
+ "enabled": true
+ }
+ ],
+ "logs": [
+ {
+ "category": "DeliveryFailures",
+ "enabled": true
+ },
+ {
+ "category": "PublishFailures",
+ "enabled": true
+ }
+ ]
+ }
+ }
+ ]
+}
+```
+ ## Next steps For the log schema and other conceptual information about diagnostic logs for topics or domains, see [Diagnostic logs](diagnostic-logs.md).
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/firewall-diagnostics.md
Previously updated : 05/06/2021 Last updated : 06/25/2021 #Customer intent: As an administrator, I want monitor Azure Firewall logs and metrics so that I can track firewall activity.
You can view and analyze activity log data by using any of the following methods
## View and analyze the network and application rule logs
-[Azure Monitor logs](../azure-monitor/insights/azure-networking-analytics.md) collects the counter and event log files. It includes visualizations and powerful search capabilities to analyze your logs.
-
-For Azure Firewall log analytics sample queries, see [Azure Firewall log analytics samples](./firewall-workbook.md).
- [Azure Firewall Workbook](firewall-workbook.md) provides a flexible canvas for Azure Firewall data analysis. You can use it to create rich visual reports within the Azure portal. You can tap into multiple Firewalls deployed across Azure, and combine them into unified interactive experiences. You can also connect to your storage account and retrieve the JSON log entries for access and performance logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data-visualization tool.
healthcare-apis Access Fhir Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/access-fhir-postman-tutorial.md
A client application can access the Azure API for FHIR through a [REST API](http
To deploy the Azure API for FHIR (a managed service), you can use the [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md). - A registered [confidential client application](register-confidential-azure-ad-client-app.md) to access the FHIR service.-- You have granted permissions to the confidential client application, for example, "FHIR Data Contributor", to access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](./configure-azure-rbac.md).
+- You have granted permissions to the confidential client application and your user account, for example, "FHIR Data Contributor", to access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](./configure-azure-rbac.md).
- Postman installed. For more information about Postman, see [Get Started with Postman](https://www.getpostman.com).
iot-central How To Move Device To Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-move-device-to-iot.md
- Title: How to move a device to Azure IoT Central from IoT Hub
-description: How to move device to Azure IoT Central from IoT Hub
-- Previously updated : 02/20/2021 ----
-# This article applies to operators and device developers.
-
-# How to transfer a device to Azure IoT Central from IoT Hub
-
-This article describes how to transfer a device to an Azure IoT Central application from an IoT Hub.
-
-A device first connects to a DPS endpoint to retrieve the information it needs to connect to your application. Internally, your IoT Central application uses an IoT hub to handle device connectivity.
-
-A device can be connected to an IoT hub directly using a connection string or using DPS. [Azure IoT Hub Device Provisioning service (DPS)](../../iot-dps/about-iot-dps.md) is the route for IoT Central.
-
-## To move the device to Azure IoT Central
-
-To connect a device to IoT Central from the IoT Hub a device needs to be updated with:
-
-* The [Scope ID](../../iot-dps/concepts-service.md) of the IoT Central application.
-* A key derived either from the [group SAS](concepts-get-connected.md) key or [the X.509 cert](../../iot-hub/iot-hub-x509ca-overview.md)
-
-To interact with IoT Central, there must be a device template that models the properties/telemetry/commands that the device implements. For more information, see [Get connected to IoT Central](concepts-get-connected.md) and [What are device templates?](concepts-device-templates.md)
-
-## Next steps
-
-Some suggested next steps are to:
--- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)-- Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md)-- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)-- Learn how to [Define a new IoT device type in your Azure IoT Central application](./howto-set-up-template.md)-- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Howto Add Tiles To Your Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-add-tiles-to-your-dashboard.md
The following table describes the different types of tile you can add to a dashb
| Image | Image tiles display a custom image and can be clickable. The URL can be a relative link to another page in the application, or an absolute link to an external site.| | Label | Label tiles display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard such descriptions, contact details, or help.| | Count | Count tiles display the number of devices in a device group.|
-| Map | Map tiles display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display sampled route of where a device has been on the past week.|
+| Map | Map tiles display the [location](howto-use-location-data.md) of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display sampled route of where a device has been on the past week.|
| KPI | KPI tiles display aggregate telemetry values for one or more devices over a time period. For example, you can use it to show the maximum temperature and pressure reached for one or more devices during the last hour.| | Line chart | Line chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices for the last hour.| | Bar chart | Bar chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices over the last hour.|
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-build-iotc-device-bridge.md
Each key in the `measurements` object must match the name of a telemetry type in
You can include a `timestamp` field in the body to specify the UTC date and time of the message. This field must be in ISO 8601 format. For example, `2020-06-08T20:16:54.602Z`. If you don't include a timestamp, the current date and time is used.
-You can include a `modelId` field in the body. Use this field to associate the device with a device template during provisioning. This functionality is only supported by [V3 applications](howto-get-app-info.md).
+You can include a `modelId` field in the body. Use this field to associate the device with a device template during provisioning. This functionality is only supported by [V3 applications](howto-faq.md#how-do-i-get-information-about-my-application).
The `deviceId` must be alphanumeric, lowercase, and may contain hyphens. If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassociated device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices.md).
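+
+For illustration, a minimal sketch of a request body that uses these fields (the device ID, telemetry names, and model ID are placeholders):
+
+```JavaScript
+// Example body to POST to the device bridge function (placeholder values).
+const body = {
+  device: { deviceId: "my-cloud-device-001" },
+  measurements: { temperature: 21.5, humidity: 48 },
+  timestamp: "2020-06-08T20:16:54.602Z",          // optional, ISO 8601 UTC
+  modelId: "dtmi:com:example:Thermostat;1"        // optional, associates a device template
+};
+```
+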
-In [V2 applications](howto-get-app-info.md), the new device appears on the **Device Explorer > Unassociated devices** page. Select **Associate** and choose a device template to start receiving incoming telemetry from the device.
+In [V2 applications](howto-faq.md#how-do-i-get-information-about-my-application), the new device appears on the **Device Explorer > Unassociated devices** page. Select **Associate** and choose a device template to start receiving incoming telemetry from the device.
> [!NOTE] > Until the device is associated to a template, all HTTP calls to the function return a 403 error status.
iot-central Howto Connect Sphere https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-sphere.md
- Title: Connect an Azure Sphere device in Azure IoT Central | Microsoft Docs
-description: Learn how to connect an Azure Sphere (DevKit) device to an Azure IoT Central application.
----- Previously updated : 04/30/2020--
-# This article applies to device developers.
--
-# Connect an Azure Sphere device to your Azure IoT Central application
-
-This article shows you how to connect an Azure Sphere (DevKit) device to an Azure IoT Central application.
-
-Azure Sphere is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured, connected, crossover microcontroller unit (MCU), a custom high-level Linux-based operating system (OS), and a cloud-based security service that provides continuous, renewable security. For more information, see [What is Azure Sphere?](/azure-sphere/product-overview/what-is-azure-sphere).
-
-[Azure Sphere development kits](https://azure.microsoft.com/services/azure-sphere/get-started/) provide everything you need to start prototyping and developing Azure Sphere applications. Azure IoT Central with Azure Sphere enables an end-to-end stack for an IoT Solution. Azure Sphere provides the device support and IoT Central as a zero-code, managed IoT application platform.
-
-In this how-to article, you:
--- Create an Azure Sphere device in IoT Central using the Azure Sphere DevKit device template from the library.-- Prepare Azure Sphere DevKit device for Azure IoT.-- Connect Azure Sphere DevKit to Azure IoT Central.-- View the telemetry from the device in IoT Central.-
-## Prerequisites
-
-To complete the steps in this article, you need the following resources:
--- An Azure IoT Central application.-- Visual Studio 2019, version 16.4 or later.-- An [Azure Sphere MT3620 development kit from Seeed Studios](/azure-sphere/hardware/mt3620-reference-board-design).-
-> [!NOTE]
-> If you don't have a physical device, then after the first step, skip to the last section to try a simulated device.
-
-## Create the device in IoT Central
-
-To create an Azure Sphere device in IoT Central:
-
-1. In your Azure IoT Central application, select the **Device Templates** tab and select **+ New**. In the section **Use a featured device template**, select **Azure Sphere Sample Device**.
-
- :::image type="content" source="media/howto-connect-sphere/sphere-create-template.png" alt-text="Device template for Azure Sphere DevKit":::
-
-1. In the device template, edit the view called **Overview** to show **Temperature** and **Button Press**.
-
-1. Select the **Editing Device and Cloud Data** view type to add another view that shows the read/write property **Status LED**. Drag the **Status LED** property to the empty, dotted rectangle on the right-side of the form. Select **Save**.
-
-## Prepare the device
-
-Before you can connect the Azure Sphere DevKit device to IoT Central, you need to [setup the device and development environment](https://github.com/Azure/azure-sphere-samples/tree/master/Samples/AzureIoT).
-
-## Connect the device
-
-To enable the sample to connect to IoT Central, you must [configure an Azure IoT Central application and then modify the sample's application manifest](https://github.com/Azure/azure-sphere-samples/blob/master/Samples/AzureIoT/READMEStartWithIoTCentral.md).
-
-## View the telemetry from the device
-
-When the device is connected to IoT Central, you can see the telemetry on the dashboard.
--
-## Create a simulated device
-
-If you don't have a physical Azure Sphere DevKit device, you can create a simulated device to try Azure IoT Central application.
-
-To create a simulated device:
--- Select **Devices > Azure IoT Sphere**-- Select **+ New**.-- Enter a unique **Device ID** and a friendly **Device name**.-- Enable the **Simulated** setting.-- Select **Create**.-
-## Next steps
-
-Some suggested next steps are to:
--- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md)-- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-iot-central-application.md
The following table summarizes the differences between the three standard plans:
| S1 | 2 | 5,000 | A few messages per hour | | S2 | 2 | 30,000 | Messages every few minutes |
-To learn more, see [Manage your bill in an IoT Central application](howto-view-bill.md).
- ### Application name The _application name_ you choose appears in the title bar on every page in your IoT Central application. It also appears on your application's tile on the **My apps** page on the [Azure IoT Central](https://aka.ms/iotcentral) site.
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
For example, you can:
## Prerequisites
-To use data export features, you must have a [V3 application](howto-get-app-info.md), and you must have the [Data export](howto-manage-users-roles.md) permission.
+To use data export features, you must have a [V3 application](howto-faq.md#how-do-i-get-information-about-my-application), and you must have the [Data export](howto-manage-users-roles.md) permission.
If you have a V2 application, see [Migrate your V2 IoT Central application to V3](howto-migrate.md).
Each exported message contains a normalized form of the full message the device
- `enqueuedTime`: The time at which this message was received by IoT Central. - `enrichments`: Any enrichments set up on the export. - `module`: The IoT Edge module that sent this message. This field only appears if the message came from an IoT Edge module.-- `component`: The component that sent this message. This field only appears if the capabilities sent in the message were modelled as a [component in the device template](howto-set-up-template.md#create-a-component).
+- `component`: The component that sent this message. This field only appears if the capabilities sent in the message were modeled as a [component in the device template](howto-set-up-template.md#create-a-component).
- `messageProperties`: Additional properties that the device sent with the message. These properties are sometimes referred to as *application properties*. [Learn more from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md). For Event Hubs and Service Bus, IoT Central exports a new message quickly after it receives the message from a device. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, and `iotcentral-message-source` are included automatically.
Each message or record represents changes to device and cloud properties. Inform
- `schema`: The name and version of the payload schema. - `enqueuedTime`: The time at which this change was detected by IoT Central. - `templateId`: The ID of the device template associated with the device.-- `properties`: An array of properties that changed, including the names of the properties and values that changed. The component and module information is included if the property is modelled within a component or an IoT Edge module.
+- `properties`: An array of properties that changed, including the names of the properties and values that changed. The component and module information is included if the property is modeled within a component or an IoT Edge module.
- `enrichments`: Any enrichments set up on the export. - For Event Hubs and Service Bus, IoT Central exports new messages data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` are included automatically.
iot-central Howto Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-faq.md
# Frequently asked questions for IoT Central
-**How do I check for credential issues if a device isn't connecting to my IoT Central application?**
+## How do I get information about my application?
+
+You may need:
+
+- This information if you contact support.
+- The Azure subscription your application uses to locate billing information in the Azure portal.
+- The application's ID when you're working with the REST API.
+- The application's version to complete tasks such as adding a connector.
+
+To get information about your IoT Central application:
+
+1. Select the **Help** link on the top menu.
+
+1. Select **About your app**.
+
+1. The **About your app** page shows information about your application:
+
+ :::image type="content" source="media/howto-faq/about-your-app2.png" alt-text="About your app screenshot":::
+
+ Use the **Copy info** button to copy the information to the clipboard.
+
+## How do I transfer a device from IoT Hub to IoT Central?
+
+A device can connect to an IoT hub directly using a connection string or using the [Device Provisioning Service (DPS)](../../iot-dps/about-iot-dps.md). IoT Central always uses DPS.
+
+To connect a device that was connected to IoT Hub to IoT Central, update the device with:
+
+- The Scope ID of the IoT Central application.
+- A key derived from the application's group SAS key or X.509 certificate.
+
+To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
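+
+A minimal sketch of deriving a device-specific key from a group SAS key with Node.js `crypto` (the device ID and group key shown are placeholders):
+
+```JavaScript
+const crypto = require('crypto');
+
+// HMAC-SHA256 of the device ID, keyed with the (base64) group SAS key.
+function deriveDeviceKey(deviceId, groupSasKey) {
+  return crypto
+    .createHmac('sha256', Buffer.from(groupSasKey, 'base64'))
+    .update(deviceId, 'utf8')
+    .digest('base64');
+}
+
+console.log(deriveDeviceKey('my-device-001', '<group-sas-key>'));
+```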
+
+To interact with IoT Central, there must be a device template that models the device capabilities. To learn more, see [What are device templates?](concepts-device-templates.md).
+
+## How do I check for credential issues if a device isn't connecting to my IoT Central application?
The [Troubleshoot why data from your devices isn't showing up in Azure IoT Central](troubleshoot-connection.md) includes steps to diagnose connectivity issues for devices.
-**How do I file a ticket with customer support?**
+## How do I file a ticket with customer support?
If you need help, you can file an [Azure support ticket](https://portal.azure.com/#create/Microsoft.Support). For more information, including other support options, see [Azure IoT support and help options](../../iot-fundamentals/iot-support-help.md).
-**How do I unblock a device?**
+## How do I unblock a device?
When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked** on the **Devices** page in your application. An operator must unblock the device before it can resume sending data:
When a device is blocked, it can't send data to your IoT Central application. Bl
When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
-**How do I approve a device?**
+## How do I move from a free to a standard pricing plan?
+
+- Applications that use the free pricing plan are free for seven days before they expire. To avoid losing data, you can move them to a standard pricing plan at any time before they expire.
+- Applications that use a standard pricing plan are charged per device, with the first two devices free, per application.
+
+Learn more about pricing on the [Azure IoT Central pricing page](https://azure.microsoft.com/pricing/details/iot-central/).
+
+In the pricing section, you can move your application from the free to a standard pricing plan.
+
+To complete this self-service process, follow these steps:
+
+1. Go to the **Pricing** page in the **Administration** section.
+
+1. Select the **Plan** you want to use.
+
+ :::image type="content" source="media/howto-faq/free-trial-billing.png" alt-text="Trial state":::
+
+1. Select the appropriate Azure Active Directory tenant, and then select the Azure subscription to use for your application on the paid plan.
+
+1. After you select **Save**, your application now uses a paid plan and you start getting billed.
+
+> [!Note]
+> By default, you are converted to a *Standard 2* pricing plan.
+
+## How do I change my application pricing plan?
+
+Applications that use a standard pricing plan are charged per device, with the first two devices free, per application.
+
+In the pricing section, you can upgrade or downgrade your Azure IoT pricing plan at any time.
+
+1. Go to the **Pricing** page in the **Administration** section.
+
+ :::image type="content" source="media/howto-faq/pricing.png" alt-text="Upgrade pricing plan":::
+
+1. Select the **Plan** and then select **Save** to upgrade or downgrade.
+
+## How do I approve a device?
If the device status is **Waiting for Approval** on the **Devices** page, it means the **Auto approve** option is disabled:
An operator must explicitly approve a device before it starts sending data. Devi
:::image type="content" source="media/howto-faq/approve-device.png" alt-text="Screenshot showing how to approve a device":::
-**How do I associate a device with a device template?**
+## How do I associate a device with a device template?
If the device status is **Unassociated**, it means the device connecting to IoT Central doesn't have an associated device template. This situation typically happens in the following scenarios:
If the device status is **Unassociated**, it means the device connecting to IoT
The operator can associate a device to a device template from the **Devices** page using the **Migrate** button. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices.md).
-**Where can I learn more about IoT Hub?**
+## Where can I learn more about IoT Hub?
Azure IoT Central uses Azure IoT Hub as a cloud gateway that enables device connectivity. IoT Hub enables:
Azure IoT Central uses Azure IoT Hub as a cloud gateway that enables device conn
To learn more about IoT Hub, see [Azure IoT Hub](../../iot-hub/index.yml).
-**Where can I learn more about the Device Provisioning Service (DPS)?**
+## Where can I learn more about the Device Provisioning Service (DPS)?
Azure IoT Central uses DPS to enable devices to connect to your application. To learn more about the role DPS plays in connecting devices to IoT Central, see [Get connected to Azure IoT Central](concepts-get-connected.md). To learn more about DPS, see [Provisioning devices with Azure IoT Hub Device Provisioning Service](../../iot-dps/about-iot-dps.md).
iot-central Howto Get App Info https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-get-app-info.md
- Title: Get information about Azure IoT Central application version | Microsoft Docs
-description: How to get information about the IoT Central application you're using
--- Previously updated : 02/26/2021----
-# About your application
-
-This article shows you how to get information about your IoT Central application. You may need:
--- This information if you contact support.-- The Azure subscription your application uses to locate billing information in the Azure portal.-- The application's ID when you're working with the REST API.-- The application's version to complete tasks such as adding a connector.-
-## Get information about your application
-
-To get information about your IoT Central application:
-
-1. Select the **Help** link on the top menu.
-
-1. Select **About your app**.
-
-1. The **About your app** page shows information about your application:
-
- :::image type="content" source="media/howto-get-app-info/about-your-app2.png" alt-text="About your app screenshot":::
-
- Use the **Copy info** button to copy the information to the clipboard.
-
-## Next steps
-
-Now that you know how to find the version of your IoT Central application, a suggested next step is to continue exploring the how-to articles for administrators: [Change IoT Central application settings](howto-administer.md).
-
-If you have a V2 application, see [Migrate your V2 IoT Central application to V3](howto-migrate.md).
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles.md
When you define a custom role, you choose the set of permissions that a user is
## Next steps
-Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage your bill](howto-view-bill.md).
+Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Customize application UI](howto-customize-ui.md).
iot-central Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-migrate.md
Currently, when you create a new IoT Central application, it's a V3 application. If you previously created an application, then depending on when you created it, it may be V2. This article describes how to migrate a V2 to a V3 application to be sure you're using the latest IoT Central features.
-To learn how to identify the version of an IoT Central application, see [About your application](howto-get-app-info.md).
+To learn how to identify the version of an IoT Central application, see [How do I get information about my application?](howto-faq.md#how-do-i-get-information-about-my-application).
The steps to migrate an application from V2 to V3 are:
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
# Monitor the overall health of an IoT Central application > [!NOTE]
-> Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [About your application](./howto-get-app-info.md).
+> Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [How do I get information about my application?](howto-faq.md#how-do-i-get-information-about-my-application).
In this article, you learn how to use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports.
Metrics are enabled by default for your IoT Central application and you access t
### Trial applications
-Applications that use the free trial plan don't have an associated Azure subscription and so don't support Azure Monitor metrics. You can [convert an application to a standard pricing plan](./howto-view-bill.md#move-from-free-to-standard-pricing-plan) and get access to these metrics.
+Applications that use the free trial plan don't have an associated Azure subscription and so don't support Azure Monitor metrics. You can [convert an application to a standard pricing plan](./howto-faq.md#how-do-i-move-from-a-free-to-a-standard-pricing-plan) and get access to these metrics.
## View metrics in the Azure portal
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
The following table shows three example transformation types:
||-|-|-| | Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts value JSON messages. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md). | | Computations | Math functions that [Azure Functions](../../azure-functions/index.yml) can execute. | Unit conversion from Fahrenheit to Celsius. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. Transforming the data lets you use IoT Central features such as visualizations and jobs. |
-| Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using data export](howto-export-data.md) | Add weather information to messages using location data from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. |
+| Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using data export](howto-export-data.md) | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. |
## Prerequisites
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-location-data.md
Title: Use location data in an Azure IoT Central solution
description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules. Previously updated : 01/08/2021 Last updated : 06/25/2021
iot-central Howto View Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-view-bill.md
- Title: Manage your bill and convert from the free pricing plan in Azure IoT Central application | Microsoft Docs
-description: Learn how to manage your bill and move from the free pricing plan to a standard pricing plan in your Azure IoT Central application
-- Previously updated : 11/23/2019-----
-# Administrator
--
-# Manage your bill in an IoT Central application
-
-This article describes how you can manage your Azure IoT Central billing. You can move your application from the free pricing plan to a standard pricing plan, and also upgrade or downgrade your pricing plan.
-
-To access the **Administration** section, you must be in the *Administrator* role or have a *custom user role* that allows you to view billing. If you create an Azure IoT Central application, you're automatically assigned to the **Administrator** role.
-
-## Move from free to standard pricing plan
--- Applications that use the free pricing plan are free for seven days before they expire. To avoid losing data, you can move them to a standard pricing plan at any time before they expire.-- Applications that use a standard pricing plan are charged per device, with the first two devices free, per application.-
-Learn more about pricing on the [Azure IoT Central pricing page](https://azure.microsoft.com/pricing/details/iot-central/).
-
-In the pricing section, you can move your application from the free to a standard pricing plan.
-
-To complete this self-service process, follow these steps:
-
-1. Go to the **Pricing** page in the **Administration** section.
-
-1. Select the **Plan**
-
- :::image type="content" source="media/howto-view-bill/free-trial-billing.png" alt-text="Trial state":::
--
-1. Select the appropriate Azure Active Directory, and then the Azure subscription to use for your application that uses a paid plan.
-
-1. After you select **Save**, your application now uses a paid plan and you start getting billed.
-
-> [!Note]
-> By default, you are converted to a *Standard 2* pricing plan.
-
-## How to change your application pricing plan
-
-Applications that use a standard pricing plan are charged per device, with the first two devices free, per application.
-
-In the pricing section, you can upgrade or downgrade your Azure IoT pricing plan at any time.
-
-1. Go to the **Pricing** page in the **Administration** section.
-
- :::image type="content" source="media/howto-view-bill/pricing.png" alt-text="Upgrade pricing plan":::
-
-1. Select the **Plan** and then select **Save** to upgrade or downgrade.
-
-## View your bill
-
-1. Select the appropriate Azure Active Directory, and then the Azure subscription to use for your application that uses a paid plan.
-
-1. After you select **Convert**, your application now uses a paid plan and you start getting billed.
-
-> [!Note]
-> By default, you are converted to a *Standard 2* pricing plan.
-
-## Next steps
-
-Now that you've learned about how to manage your bill in Azure IoT Central application, the suggested next step is to learn about [Customize application UI](howto-customize-ui.md) in Azure IoT Central.
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
The administrator can configure the behavior and appearance of an IoT Central ap
- [Change application name and URL](howto-administer.md#change-application-name-and-url) - [Customize the UI](howto-customize-ui.md)-- [Move an application to a different pricing plans](howto-view-bill.md)
+- [Move an application to a different pricing plan](howto-faq.md#how-do-i-move-from-a-free-to-a-standard-pricing-plan)
- [Configure file uploads](howto-configure-file-uploads.md) ## Export an application
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-tour.md
The top menu appears on every page:
* To search for devices, enter a **Search** value. * To change the UI language or theme, choose the **Settings** icon. Learn more about [managing your application preferences](howto-manage-preferences.md)
-* To get help and support, choose the **Help** drop-down for a list of resources. You can [get information about your application](./howto-get-app-info.md) from the **About your app** link. In an application on the free pricing plan, the support resources include access to [live chat](howto-show-hide-chat.md).
+* To get help and support, choose the **Help** drop-down for a list of resources. You can [get information about your application](howto-faq.md#how-do-i-get-information-about-my-application) from the **About your app** link. In an application on the free pricing plan, the support resources include access to [live chat](howto-show-hide-chat.md).
* To sign out of the application, choose the **Account** icon. You can choose between a light theme or a dark theme for the UI:
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-smart-meter-app.md
To verify the app creation and data simulation, go to the **Dashboard**. If you
After you successfully deploy the app template, it comes with sample smart meter device, device model, and a dashboard. Adatum is a fictitious energy company, who monitors and manages smart meters. On the smart meter monitoring dashboard, you see smart meter properties, data, and sample commands. It enables operators and support teams to proactively perform the following activities before it turns into support incidents:
-* Review the latest meter info and its installed location on the map
+* Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map
* Proactively check the meter network and connection status * Monitor Min and Max voltage readings for network health * Review the energy, power, and voltage trends to catch any anomalous patterns
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/energy/tutorial-solar-panel-app.md
To verify the app creation and data simulation, go to the **Dashboard**. If you
After you successfully deploy the app template, you'll want to explore the app a bit more. Notice that it comes with a sample solar panel device, device model, and dashboard. Adatum is a fictitious energy company that monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. This dashboard allows you or your support team to perform the following activities proactively, before any problems require additional support:
-* Review the latest panel info and its installed location on the map.
+* Review the latest panel info and its installed [location](../core/howto-use-location-data.md) on the map.
* Check the panel status and connection status. * Review the energy generation and temperature trends to catch any anomalous patterns. * Track the total energy generation for planning and billing purposes.
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-connected-waste-management.md
The dashboard consists of different tiles:
* **Fill level KPI tile**: This tile displays a value reported by a *fill level* sensor in a waste bin. Fill level and other sensors, like *odor meter* or *weight* in a waste bin, can be remotely monitored. An operator can take action, like dispatching a trash collection truck.
-* **Waste monitoring area map**: This tile uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays device location. Try to hover over the map and try the controls over the map, like zoom-in, zoom-out, or expand.
+* **Waste monitoring area map**: This tile uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays device [location](../core/howto-use-location-data.md). Try to hover over the map and try the controls over the map, like zoom-in, zoom-out, or expand.
![Screenshot of Connected Waste Management Template Dashboard map.](./media/tutorial-connectedwastemanagement/connectedwastemanagement-dashboard-map.png)
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-consumption-monitoring.md
The dashboard consists of different kinds of tiles:
* **Average water flow KPI tile**: The KPI tile is configured to display as an example *the average in the last 30 minutes*. You can customize the KPI tile and set it to a different type and time range. * **Device command tiles**: These tiles include the **Close valve**, **Open valve**, and **Set valve position** tiles. Selecting the commands takes you to the simulated device command page. In Azure IoT Central, a *command* is a *device capability* type. We'll explore this concept later in the [Device template](../government/tutorial-water-consumption-monitoring.md#explore-the-device-template) section of this tutorial.
-* **Water distribution area map**: The map uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays the device location. Hover over the map and try the controls over the map, like *zoom in*, *zoom out*, or *expand*.
+* **Water distribution area map**: The map uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays the device [location](../core/howto-use-location-data.md). Hover over the map and try the controls over the map, like *zoom in*, *zoom out*, or *expand*.
:::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-dashboard-map.png" alt-text="Water consumption monitoring dashboard map":::
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/tutorial-water-quality-monitoring.md
The dashboard includes the following kinds of tiles:
* **Average pH KPI tiles**: KPI tiles like **Average pH in the last 30 minutes** are at the top of the dashboard pane. You can customize KPI tiles and set each to a different type and time range.
-* **Water monitoring area map**: Azure IoT Central uses Azure Maps, which you can directly set in your application to show device location. You can also map location information from your application to your device and then use Azure Maps to show the information on a map. Hover over the map and try the controls.
+* **Water monitoring area map**: Azure IoT Central uses Azure Maps, which you can directly set in your application to show device [location](../core/howto-use-location-data.md). You can also map location information from your application to your device and then use Azure Maps to show the information on a map. Hover over the map and try the controls.
* **Average pH distribution heat-map chart**: You can select different visualization charts to show device telemetry in the way that is most appropriate for your application.
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
This dashboard is pre-configured to show the critical logistics device operation
The dashboard enables two different gateway device management operations:
-* View the logistics routes for truck shipments and the location details of ocean shipments.
+* View the logistics routes for truck shipments and the [location](../core/howto-use-location-data.md) details of ocean shipments.
* View the gateway status and other relevant information. :::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-dashboard-1.png" alt-text="Connected logistics dashboard":::
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
This dashboard is pre-configured to showcase the critical smart inventory manage
The dashboard is logically divided between two different gateway device management operations, * The warehouse is deployed with a fixed BLE gateway & BLE tags on pallets to track & trace inventory at a larger facility * Retail store is implemented with a fixed RFID gateway & RFID tags at individual an item level to track and trace the stock in a store outlet
- * View the gateway location, status & related details
+ * View the gateway [location](../core/howto-use-location-data.md), status & related details
> [!div class="mx-imgBorder"] > ![Screenshot showing the top half of the smart inventory managementdashboard](./media/tutorial-iot-central-smart-inventory-management/smart_inventory_management_dashboard1.png)
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/gpu-acceleration.md
GPUs are a popular choice for artificial intelligence computations, because they
Azure IoT Edge for Linux on Windows supports several GPU passthrough technologies, including:
-* **Direct Device Assignment (DDA)** - GPU cores are wholly dedicated to the Linux virtual machine.
+* **Direct Device Assignment (DDA)** - GPU cores are allocated either to the Linux virtual machine or the host.
-* **GPU-Paravirtualization (GPU-PV)** - The GPU is shared between the Linux VM and host.
+* **GPU-Paravirtualization (GPU-PV)** - The GPU is shared between the Linux virtual machine and the host.
The Azure IoT Edge for Linux on Windows deployment will automatically select the appropriate passthrough method to match the supported capabilities of your device's GPU hardware.
Once you register, follow the instructions on the **2. Flight** tab to get acces
### T4 GPUs
-For **T4 GPUs**, Microsoft recommends a device mitigation driver from your GPU's vendor. For more information, see [Deploy graphics devices using direct device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#optionalinstall-the-partitioning-driver).
+For **T4 GPUs**, Microsoft recommends installing a device mitigation driver from your GPU's vendor. While optional, installing a mitigation driver may improve the security of your deployment. For more information, see [Deploy graphics devices using direct device assignment](/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda#optionalinstall-the-partitioning-driver).
> [!WARNING]
-> Enabling hardware device passthrough may increase security risks.
+> Enabling hardware device passthrough may increase security risks. We recommend that you install a device mitigation driver from your GPU's vendor.
### GeForce/Quadro GPUs
Now you are ready to deploy and run GPU-accelerated Linux modules in your Window
* [Create your deployment of Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md)
-* Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda).
+* Try our [GPU-enabled sample featuring Vision on Edge](https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/factory-ai-vision/Tutorial/Eflow.md), a solution template illustrating how to build your own vision-based machine learning application.
+
+* Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda) and [GPU-PV blog post](https://devblogs.microsoft.com/directx/directx-heart-linux/#gpu-virtualization).
load-balancer Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/components.md
Load balancer can have multiple frontend IPs. Learn more about [multiple fronten
The group of virtual machines or instances in a virtual machine scale set that is serving the incoming request. To scale cost-effectively to meet high volumes of incoming traffic, computing guidelines generally recommend adding more instances to the backend pool.
-Load balancer instantly reconfigures itself via automatic reconfiguration when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without additional operations. The scope of the backend pool is any virtual machine in the virtual network.
+Load balancer instantly and automatically reconfigures itself when you scale instances up or down. Adding or removing VMs from the backend pool reconfigures the load balancer without additional operations. The scope of the backend pool is any virtual machine in a single virtual network.
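For example, you can place an existing VM in a backend pool by adding its NIC IP configuration to the pool with the Azure CLI. The resource names below are placeholders for illustration.

```azurecli-interactive
# Add a VM's NIC IP configuration to an existing load balancer backend pool (placeholder names).
az network nic ip-config address-pool add \
  --resource-group myResourceGroup \
  --nic-name myVMNic \
  --ip-config-name ipconfig1 \
  --lb-name myLoadBalancer \
  --address-pool myBackendPool
```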
When considering how to design your backend pool, design for the least number of individual backend pool resources to optimize the length of management operations. There's no difference in data plane performance or scale.
Basic load balancer doesn't support outbound rules.
- Learn about load balancer [limits](../azure-resource-manager/management/azure-subscription-service-limits.md) - Load balancer provides load balancing and port forwarding for specific TCP or UDP protocols. Load-balancing rules and inbound NAT rules support TCP and UDP, but not other IP protocols including ICMP. - Outbound flow from a backend VM to a frontend of an internal Load Balancer will fail.-- A load balancer rule can't span two virtual networks. Frontends and their backend instances must be located in the same virtual network.
+- A load balancer rule cannot span two virtual networks. All load balancer frontends and their backend instances must be in a single virtual network.
- Forwarding IP fragments isn't supported on load-balancing rules. IP fragmentation of UDP and TCP packets isn't supported on load-balancing rules. HA ports load-balancing rules can be used to forward existing IP fragments. For more information, see [High availability ports overview](load-balancer-ha-ports-overview.md). - You can only have 1 Public Load Balancer and 1 internal Load Balancer per availability set
managed-instance-apache-cassandra Dual Write Proxy Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/dual-write-proxy-migration.md
It is recommended that you install the proxy on all nodes in your source Cassand
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> ```+
+Starting the proxy in this way assumes the following are true:
+
+- source and target endpoints have the same username and password
+- source and target endpoints implement SSL
+
+If your source and target endpoints cannot meet these criteria, read below for further configuration options.
+
+### Configure SSL
For SSL, you can either implement an existing keystore (for example, the one used by your source cluster), or you can create a self-signed certificate using keytool: ```bash keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048 ```
+You can also disable SSL for source or target endpoints if they do not implement SSL. Use the `--disable-source-tls` or `--disable-target-tls` flags:
+
+```bash
+java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> --disable-source-tls true --disable-target-tls true
+```
> [!NOTE] > Make sure your client application uses the same keystore and password as the one used for the dual-write proxy when building SSL connections to the database via the proxy.
-Starting the proxy in this way assumes the following are true:
-- source and target endpoints have the same username and password-- source and target endpoints implement SSL
+### Configure credentials and port
-By default, the source credentials will be passed through from your client app, and used by the proxy for making connections to the source and target clusters. If necessary, you can specify the username and password of the target Cassandra endpoint separately when starting the proxy:
+By default, the source credentials will be passed through from your client app, and used by the proxy for making connections to the source and target clusters. As mentioned above, this assumes that source and target credentials are the same. If necessary, you can specify a different username and password for the target Cassandra endpoint separately when starting the proxy:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password>
The default source and target ports, when not specified, will be `9042`. If eith
java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> ```
-You can also disable SSL for source or target endpoints if they do not implement SSL. Use the `--disable-source-tls` or `--disable-target-tls` flags:
-
-```bash
-java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar localhost <target-server> --source-port 9042 --target-port 10350 --proxy-jks-file <path to JKS file> --proxy-jks-password <keystore password> --target-username <username> --target-password <password> --disable-source-tls true --disable-target-tls true
-```
+### Deploy proxy remotely
There may be circumstances in which you do not want to install the proxy on the cluster nodes themselves, and prefer to install it on a separate machine. In that scenario, you would need to specify the IP of the `<source-server>`:
java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar <source-server> <destinati
> [!NOTE] > If you do not install and run the proxy on all nodes in a native Apache Cassandra cluster, this will impact performance in your application as the client driver will not be able to open connections to all nodes within the cluster.
-By default, the proxy listens on port 29042. However, you can also change the port the proxy listens on. You may wish to do this if you want to eliminate application level code changes by having the source Cassandra server run on a different port, and have the proxy run on the standard Cassandra port:
+### Allow zero application code changes
+
+By default, the proxy listens on port `29042`. This requires the application code to be changed to point to this port. However, you can also change the port the proxy listens on. You may wish to do this if you want to eliminate application level code changes by having the source Cassandra server run on a different port, and have the proxy run on the standard Cassandra port `9042`:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --proxy-port 9042 ``` > [!NOTE]
-> Installing the proxy on cluster nodes does not require restart of the nodes. However, if you have many application clients and prefer to have the proxy running on the standard Cassandra port 9042 in order to eliminate any application level code changes, you would need to restart your cluster.
+> Installing the proxy on cluster nodes does not require a restart of the nodes. However, if you have many application clients and prefer to have the proxy running on the standard Cassandra port `9042` in order to eliminate any application-level code changes, you would need to change the [Apache Cassandra default port](https://cassandra.apache.org/doc/latest/faq/#what-ports-does-cassandra-use). You would then need to restart the nodes in your cluster, and configure the source port to be the new port you have defined for your source Cassandra cluster. In the example below, we change the source Cassandra cluster to run on port 3074, and start the proxy on port 9042.
+>```bash
+>java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --proxy-port 9042 --source-port 3074
+>```
+
+### Force protocols
-The proxy has some functionality to force protocols which may be necessary if the source endpoint is more advanced then the target. In that case you can specify `--protocol-version` and `--cql-version`:
+The proxy has functionality to force protocols, which may be necessary if the source endpoint is more advanced than the target, or otherwise unsupported. In that case, you can specify `--protocol-version` and `--cql-version` to force the protocol to comply with the target:
```bash java -jar target/cassandra-proxy-1.0-SNAPSHOT-fat.jar source-server destination-server --protocol-version 4 --cql-version 3.11
DFfromSourceCassandra
``` > [!NOTE]
-> In the above Scala sample, you will notice that `timestamp` is being set to the current time prior to reading all the data in the source table, and then `writetime` is being set to this backdated timestamp. This is to ensure that records that are written from the historic data load to the target endpoint cannot overwrite updates that come in with a later timestamp from the dual-write proxy while historic data is being read. If for any reason you need to preserve *exact* timestamps, you should take a historic data migration approach which preserves timestamps, such [this](https://github.com/scylladb/scylla-migrator) sample.
+> In the above Scala sample, you will notice that `timestamp` is being set to the current time prior to reading all the data in the source table, and then `writetime` is being set to this backdated timestamp. This is to ensure that records that are written from the historic data load to the target endpoint cannot overwrite updates that come in with a later timestamp from the dual-write proxy while historic data is being read. If for any reason you need to preserve *exact* timestamps, you should take a historic data migration approach which preserves timestamps, such as [this](https://github.com/scylladb/scylla-migrator) sample.
## Validation
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
Last updated 06/23/2021
# Generate a SAS URI for a VM image > [!NOTE]
-> You donΓÇÖt need a SAS URI to publish your VM. You can simply share an image in Parter Center. Refer to [Create a virtual machine using an approved base](azure-vm-create-using-approved-base.md) or [Create a virtual machine using your own image](azure-vm-create-using-own-image.md) instructions.
+> You don't need a SAS URI to publish your VM. You can simply share an image in Partner Center. Refer to the [Create a virtual machine using an approved base](azure-vm-create-using-approved-base.md) or [Create a virtual machine using your own image](azure-vm-create-using-own-image.md) instructions.
Generating SAS URIs for your VHDs has these requirements: -- Only List and Read permissions are required. DonΓÇÖt provide Write or Delete access.
+- Only List and Read permissions are required. Don't provide Write or Delete access.
- The duration for access (expiry date) should be a minimum of three weeks from when the SAS URI is created. - To protect against UTC time changes, set the start date to one day before the current date. For example, if the current date is June 16, 2020, select 6/15/2020.
$resourceGroupName=myResourceGroupName
$snapshotName=mySnapshot #Provide Shared Access Signature (SAS) expiry duration in seconds (such as 3600)
-#Know more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
+#Know more about SAS here: https://docs.microsoft.com/azure/storage/storage-dotnet-shared-access-signature-part-1
$sasExpiryDuration=3600 #Provide storage account name where you want to copy the underlying VHD file. Currently, only general purpose v1 storage is supported.
There are two common tools used to create a SAS address (URL):
2. Create a PowerShell file (.ps1 file extension), copy in the following code, then save it locally. ```azurecli-interactive
- az storage container generate-sas --connection-string ΓÇÿDefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.netΓÇÖ --name <container-name> --permissions rl --start ΓÇÿ<start-date>ΓÇÖ --expiry ΓÇÿ<expiry-date>ΓÇÖ
+ az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '<start-date>' --expiry '<expiry-date>'
``` 3. Edit the file to use the following parameter values. Provide dates in UTC datetime format, such as 2020-04-01T00:00:00Z.
There are two common tools used to create a SAS address (URL):
Here's an example of proper parameter values (at the time of this writing): ```azurecli-interactive
- az storage container generate-sas --connection-string ΓÇÿDefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ON c+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.netΓÇÖ --name <container-name> -- permissions rl --start ΓÇÿ2020-04-01T00:00:00ZΓÇÖ --expiry ΓÇÿ2021-04-01T00:00:00ZΓÇÖ
+   az storage container generate-sas --connection-string 'DefaultEndpointsProtocol=https;AccountName=st00009;AccountKey=6L7OWFrlabs7Jn23OaR3rvY5RykpLCNHJhxsbn9ON c+bkCq9z/VNUPNYZRKoEV1FXSrvhqq3aMIDI7N3bSSvPg==;EndpointSuffix=core.windows.net' --name <container-name> --permissions rl --start '2020-04-01T00:00:00Z' --expiry '2021-04-01T00:00:00Z'
``` 1. Save the changes.
marketplace Marketplace Faq Publisher Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-faq-publisher-guide.md
Refunds are available to customers under certain conditions and for certain char
Refunds are not issued for variable charges resulting from usage (from either virtual machine offers or metered billing).
+> [!NOTE]
+> Refunds are not granted for metered usage of Virtual Machines, metered billing for SaaS and Azure Apps, or other instances of product consumption (Pay as you Go, etc.). If a vendor does receive a refund, it will be reflected in the Partner Center [payout reporting](/partner-center/payout-statement?context=/azure/marketplace/context/context).
+ ## Resources ### Where can I find more information about the commercial marketplace?
mysql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-server-portal.md
Complete these steps to create a flexible server:
> :::image type="content" source="./media/quickstart-create-server-portal/find-mysql-portal.png" alt-text="Screenshot that shows a search for Azure Database for MySQL servers.":::
-2. Select **Add**.
+2. Select **Create**.
3. On the **Select Azure Database for MySQL deployment option** page, select **Flexible server** as the deployment option:
Complete these steps to create a flexible server:
- Public access (allowed IP addresses) - Private access (VNet Integration)
- When you use public access, access to your server is limited to allowed IP addresses that you add to a firewall rule. This method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range. When you use private access (VNet Integration), access to your server is limited to your virtual network. In this quickstart, you'll learn how to enable public access to connect to the server. [Learn more about connectivity methods in the concepts article.](./concepts-networking.md)
+ When you use public access, access to your server is limited to allowed IP addresses that you add to a firewall rule. This method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range. When you use private access (VNet Integration), access to your server is limited to your virtual network. [Learn more about connectivity methods in the concepts article.](./concepts-networking.md)
+
+   In this quickstart, you'll learn how to enable public access to connect to the server. On the **Networking** tab, for **Connectivity method**, select **Public access**. Under **Firewall rules**, select **Add current client IP address**.
> [!NOTE] > You can't change the connectivity method after you create the server. For example, if you select **Public access (allowed IP addresses)** when you create the server, you can't change to **Private access (VNet Integration)** after the server is created. We highly recommend that you create your server with private access to help secure access to your server via VNet Integration. [Learn more about private access in the concepts article.](./concepts-networking.md)
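If you later want to script additional firewall rules instead of using the portal, a single client IP can be allowed with the Azure CLI after the server is created. The server name, resource group, rule name, and IP address below are placeholders.

```azurecli-interactive
# Allow one client IP address through the flexible server firewall (placeholder values).
az mysql flexible-server firewall-rule create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --rule-name AllowMyClientIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```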
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Network Security Group flow logs are a feature of Network Watcher that allows yo
The detailed specification of all NSG flow log commands for various versions of Az PowerShell can be found [here](/powershell/module/az.network/#network-watcher).
+> [!NOTE]
+> - The commands [Get-AzNetworkWatcherFlowLogStatus](https://docs.microsoft.com/powershell/module/az.network/get-aznetworkwatcherflowlogstatus) and [Set-AzNetworkWatcherConfigFlowLog](https://docs.microsoft.com/powershell/module/az.network/set-aznetworkwatcherconfigflowlog) used in this article require an additional "reader" permission in the resource group of the network watcher. Also, these commands are old and may soon be deprecated.
+> - It is recommended to use the new [Get-AzNetworkWatcherFlowLog](https://docs.microsoft.com/powershell/module/az.network/get-aznetworkwatcherflowlog) and [Set-AzNetworkWatcherFlowLog](https://docs.microsoft.com/powershell/module/az.network/set-aznetworkwatcherflowlog) commands instead.
+> - The new [Get-AzNetworkWatcherFlowLog](https://docs.microsoft.com/powershell/module/az.network/get-aznetworkwatcherflowlog) command offers four variants for flexibility. If you use the "Location <String>" variant of this command, you need an additional "reader" permission in the resource group of the network watcher. The other variants don't require additional permissions.
+ ## Register Insights provider In order for flow logging to work successfully, the **Microsoft.Insights** provider must be registered. If you are not sure if the **Microsoft.Insights** provider is registered, run the following script.
network-watcher Network Watcher Nsg Flow Logging Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md
The response returned from the preceding example is as follows:
} } ```
+> [!NOTE]
+> - The [Network Watchers - Set Flow Log Configuration](https://docs.microsoft.com/rest/api/network-watcher/network-watchers/set-flow-log-configuration) API used above is old and may soon be deprecated.
+> - It is recommended to use the new [Flow Logs - Create Or Update](https://docs.microsoft.com/rest/api/network-watcher/flow-logs/create-or-update) REST API instead.
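As an illustration only, the newer endpoint can be called with a PUT request such as the sketch below; setting `enabled` to `false` in the same body disables the flow log. The subscription ID, resource names, request body, and `api-version` are placeholders and assumptions, so check the linked REST reference for the current schema.

```bash
# Sketch: enable NSG flow logs with the newer Flow Logs - Create Or Update endpoint (placeholder values).
curl -X PUT \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkWatchers/<network-watcher-name>/flowLogs/<flow-log-name>?api-version=2021-02-01" \
  -H "Authorization: Bearer <access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "location": "<region>",
        "properties": {
          "targetResourceId": "<nsg-resource-id>",
          "storageId": "<storage-account-resource-id>",
          "enabled": true
        }
      }'
```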
## Disable Network Security Group flow logs
The response returned from the preceding example is as follows:
} ```
+> [!NOTE]
+> - The [Network Watchers - Set Flow Log Configuration](https://docs.microsoft.com/rest/api/network-watcher/network-watchers/set-flow-log-configuration) API used above is old and may soon be deprecated.
+> - It is recommended to use the new [Flow Logs - Create Or Update](https://docs.microsoft.com/rest/api/network-watcher/flow-logs/create-or-update) REST API to disable flow logs, and the [Flow Logs - Delete](https://docs.microsoft.com/rest/api/network-watcher/flow-logs/delete) REST API to delete the flow log resource.
+ ## Query flow logs The following REST call queries the status of flow logs on a Network Security Group.
The following is an example of the response returned:
} ```
+> [!NOTE]
+> - The [Network Watchers - Get Flow Log Status](https://docs.microsoft.com/rest/api/network-watcher/network-watchers/get-flow-log-status) API used above requires an additional "reader" permission in the resource group of the network watcher. Also, this API is old and may soon be deprecated.
+> - It is recommended to use the new [Flow Logs - Get](https://docs.microsoft.com/rest/api/network-watcher/flow-logs/get) REST API instead.
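For illustration, the newer endpoint reads the flow log as a resource with a GET request such as the sketch below; the placeholder values and `api-version` are assumptions.

```bash
# Sketch: read a flow log resource with the newer Flow Logs - Get endpoint (placeholder values).
curl -X GET \
  "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkWatchers/<network-watcher-name>/flowLogs/<flow-log-name>?api-version=2021-02-01" \
  -H "Authorization: Bearer <access-token>"
```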
+ ## Download a flow log The storage location of a flow log is defined at creation. A convenient tool to access these flow logs saved to a storage account is Microsoft Azure Storage Explorer, which can be downloaded here: https://storageexplorer.com/
openshift Howto Create A Storageclass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/openshift/howto-create-a-storageclass.md
oc patch storageclass azure-file -p '{"metadata": {"annotations":{"storageclass.
Create a new application and assign storage to it.
+> [!NOTE]
+> To use the `httpd-example` template, you must deploy your ARO cluster with the pull secret enabled. For more information, see [Get a Red Hat pull secret](tutorial-create-cluster.md#get-a-red-hat-pull-secret-optional).
+ ```bash oc new-project azfiletest
-oc new-app -template httpd-example
+oc new-app httpd-example
#Wait for the pod to become Ready curl $(oc get route httpd-example -n azfiletest -o jsonpath={.spec.host})
-oc set volume dc/httpd-example --add --name=v1 -t pvc --claim-size=1G -m /data
+#If you have set azure-file as the default storage class, you can omit the --claim-class parameter
+oc set volume dc/httpd-example --add --name=v1 -t pvc --claim-size=1G -m /data --claim-class='azure-file'
#Wait for the new deployment to rollout export POD=$(oc get pods --field-selector=status.phase==Running -o jsonpath={.items[].metadata.name})
-oc exec $POD -- bash -c "mkdir ./data"
oc exec $POD -- bash -c "echo 'azure file storage' >> /data/test.txt" oc exec $POD -- bash -c "cat /data/test.txt"
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md
- Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
-description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server.
---- Previously updated : 12/10/2020---
-# Tutorial: Deploy Django app on AKS with Azure Database for PostgreSQL - Flexible Server
-
-In this quickstart, you deploy a Django application on Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server (Preview) using the Azure CLI.
-
-**[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server (Preview)](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
-
-> [!NOTE]
-> - Azure Database for PostgreSQL Flexible Server is currently in public preview
-> - This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
-
-## Pre-requisites
--- Use [Azure Cloud Shell](../../cloud-shell/quickstart.md) using the bash environment.-
- [![Embed launch](https://shell.azure.com/images/launchcloudshell.png "Launch Azure Cloud Shell")](https://shell.azure.com)
-- If you prefer, [install](/cli/azure/install-azure-cli) Azure CLI to run CLI reference commands.
- - If you're using a local install, sign in with Azure CLI by using the [az login](/cli/azure/reference-index#az_login) command. To finish the authentication process, follow the steps displayed in your terminal. See [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli) for additional sign-in options.
- - When you're prompted, install Azure CLI extensions on first use. For more information about extensions, see [Use extensions with Azure CLI](/cli/azure/azure-cli-extensions-overview).
- - Run [az version](/cli/azure/reference-index?#az_version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index?#az_upgrade). This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-
-> [!NOTE]
-> If running the commands in this quickstart locally (instead of Azure Cloud Shell), ensure you run the commands as administrator.
-
-## Create a resource group
-
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project* using the [az group create][az-group-create] command in the *eastus* location.
-
-```azurecli-interactive
-az group create --name django-project --location eastus
-```
-
-> [!NOTE]
-> The location for the resource group is where resource group metadata is stored. It is also where your resources run in Azure if you don't specify another region during resource creation.
-
-The following example output shows the resource group created successfully:
-
-```json
-{
- "id": "/subscriptions/<guid>/resourceGroups/django-project",
- "location": "eastus",
- "managedBy": null,
-
- "name": "django-project",
- "properties": {
- "provisioningState": "Succeeded"
- },
- "tags": null
-}
-```
-
-## Create AKS cluster
-
-Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *myAKSCluster* with one node. This will take several minutes to complete.
-
-```azurecli-interactive
-az aks create --resource-group django-project --name djangoappcluster --node-count 1 --generate-ssh-keys
-```
-
-After a few minutes, the command completes and returns JSON-formatted information about the cluster.
-
-> [!NOTE]
-> When creating an AKS cluster a second resource group is automatically created to store the AKS resources. See [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
-
-## Connect to the cluster
-
-To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command:
-
-```azurecli-interactive
-az aks install-cli
-```
-
-To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
-
-```azurecli-interactive
-az aks get-credentials --resource-group django-project --name djangoappcluster
-```
-
-> [!NOTE]
-> The above command uses the default location for the [Kubernetes configuration file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), which is `~/.kube/config`. You can specify a different location for your Kubernetes configuration file using *--file*.
-
-To verify the connection to your cluster, use the [kubectl get]( https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
-
-```azurecli-interactive
-kubectl get nodes
-```
-
-The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
-
-```output
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
-```
-
-## Create an Azure Database for PostgreSQL - Flexible Server
-Create a flexible server with the [az postgreSQL flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create)command. The following command creates a server using service defaults and values from your Azure CLI's local context:
-
-```azurecli-interactive
-az postgres flexible-server create --public-access <YOUR-IP-ADDRESS>
-```
-
-The server created has the below attributes:
-- A new empty database, ```postgres``` is created when the server is first provisioned. In this quickstart we will use this database.-- Autogenerated server name, admin username, admin password, resource group name (if not already specified in local context), and in the same location as your resource group-- Service defaults for remaining server configurations: compute tier (General Purpose), compute size/SKU (Standard_D2s_v3 which uses 2vCores), backup retention period (7 days), and PostgreSQL version (12)-- Using public-access argument allow you to create a server with public access protected by firewall rules. By providing your IP address to add the firewall rule to allow access from your client machine.-- Since the command is using Local context it will create the server in the resource group ```django-project``` and in the region ```eastus```.--
-## Build your Django docker image
-
-Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/) or use your existing Django project. Make sure your code is in this folder structure.
-
-```
-└───my-djangoapp
-   └───views.py
-   └───models.py
-   └───forms.py
-   ├───templates
-   . . . . . . .
-   ├───static
-   . . . . . . .
-└───my-django-project
-   └───settings.py
-   └───urls.py
-   └───wsgi.py
-   . . . . . . .
-   └─── Dockerfile
-   └─── requirements.txt
-   └─── manage.py
-
-```
-Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application uses the external IP that gets assigned to kubernetes app.
-
-```python
-ALLOWED_HOSTS = ['*']
-```
-
-Update ```DATABASES={ }``` section in the ```settings.py``` file. The code snippet below is reading the database host , username and password from the Kubernetes manifest file.
-
-```python
-DATABASES={
- 'default':{
- 'ENGINE':'django.db.backends.postgresql_psycopg2',
- 'NAME':os.getenv('DATABASE_NAME'),
- 'USER':os.getenv('DATABASE_USER'),
- 'PASSWORD':os.getenv('DATABASE_PASSWORD'),
- 'HOST':os.getenv('DATABASE_HOST'),
- 'PORT':'5432',
- 'OPTIONS': {'sslmode': 'require'}
- }
-}
-```
-
-### Generate a requirements.txt file
-Create a ```requirements.txt``` file to list out the dependencies for the Django Application. Here is an example ```requirements.txt``` file. You can use [``` pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
-
-``` text
-Django==2.2.17
-postgres==3.0.0
-psycopg2-binary==2.8.6
-psycopg2-pool==1.1
-pytz==2020.4
-```
-
-### Create a Dockerfile
-Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile in setting up Python 3.8 and installing all the requirements listed in requirements.txt file.
-
-```docker
-# Use the official Python image from the Docker Hub
-FROM python:3.8.2
-
-# Make a new directory to put our code in.
-RUN mkdir /code
-
-# Change the working directory.
-WORKDIR /code
-
-# Copy to code folder
-COPY . /code/
-
-# Install the requirements.
-RUN pip install -r requirements.txt
-
-# Run the application:
-CMD python manage.py runserver 0.0.0.0:8000
-```
-
-### Build your image
-Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your bulletin board image:
-
-``` bash
-
-docker build --tag myblog:latest .
-
-```
-
-Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md).
-
-> [!IMPORTANT]
->If you are using Azure container regdistry (ACR), then run the ```az aks update``` command to attach ACR account with the AKS cluster.
->
->```azurecli-interactive
->az aks update -n myAKSCluster -g django-project --attach-acr <your-acr-name>
-> ```
->
-
-## Create Kubernetes manifest file
-
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
-
->[!IMPORTANT]
-> - Replace ```[DOCKER-HUB-USER/ACR ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]``` with your actual Django docker image name and tag, for example ```docker-hub-user/myblog:latest```.
-> - Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, ```YOUR-DATABASE-PASSWORD``` of your postgres flexible server.
--
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: django-app
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: django-app
- template:
- metadata:
- labels:
- app: django-app
- spec:
- containers:
- - name: django-app
- image: [DOCKER-HUB-USER-OR-ACR-ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]
- ports:
- - containerPort: 80
- env:
- - name: DATABASE_HOST
- value: "SERVERNAME.postgres.database.azure.com"
- - name: DATABASE_USERNAME
- value: "YOUR-DATABASE-USERNAME"
- - name: DATABASE_PASSWORD
- value: "YOUR-DATABASE-PASSWORD"
- - name: DATABASE_NAME
- value: "postgres"
- affinity:
- podAntiAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - labelSelector:
- matchExpressions:
- - key: "app"
- operator: In
- values:
- - django-app
- topologyKey: "kubernetes.io/hostname"
-
-apiVersion: v1
-kind: Service
-metadata:
- name: python-svc
-spec:
- type: LoadBalancer
- ports:
- - port: 8000
- selector:
- app: django-app
-```
-
-## Deploy Django to AKS cluster
-Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest:
-
-```console
-kubectl apply -f djangoapp.yaml
-```
-
-The following example output shows the Deployments and Services created successfully:
-
-```output
-deployment "django-app" created
-service "python-svc" created
-```
-
-A deployment ```django-app``` describes the details of your deployment, such as which image to use for the app, the number of pods, and the pod configuration. A service ```python-svc``` is created to expose the application through an external IP.
-
-## Test the application
-
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-
-To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
-
-```azurecli-interactive
-kubectl get service django-app --watch
-```
-
-Initially the *EXTERNAL-IP* for the *django-app* service is shown as *pending*.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-django-app LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
-
-```output
-django-app LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
-
-Now open a web browser to the external IP address of your service to view the Django application.
-
->[!NOTE]
-> - Currently the Django site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
-> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
-
-## Run database migrations
-
-For any Django application, you need to run database migrations or collect static files. You can run these Django management commands using ```$ kubectl exec <pod-name> -- [COMMAND]```. Before running a command, find the pod name using ```kubectl get pods```.
-
-```bash
-$ kubectl get pods
-```
-
-You will see output like this:
-```output
-NAME READY STATUS RESTARTS AGE
-django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m
-```
-
-Once you have the pod name, you can run Django database migrations with the command ```$ kubectl exec <pod-name> -- [COMMAND]```. Note that ```/code/``` is the working directory for the project, defined in the ```Dockerfile``` above.
-
-```bash
-$ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py migrate
-```
-
-The output will look like this:
-```output
-Operations to perform:
- Apply all migrations: admin, auth, contenttypes, sessions
-Running migrations:
- Applying contenttypes.0001_initial... OK
- Applying auth.0001_initial... OK
- Applying admin.0001_initial... OK
- Applying admin.0002_logentry_remove_auto_add... OK
- Applying admin.0003_logentry_add_action_flag_choices... OK
- . . . . . .
-```
-
-If you run into issues, run ```kubectl logs <pod-name>``` to see what exception is thrown by your application. If the application is working successfully, you will see output like this when running ```kubectl logs```.
-
-```output
-Watching for file changes with StatReloader
-Performing system checks...
-
-System check identified no issues (0 silenced).
-
-You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
-Run 'python manage.py migrate' to apply them.
-December 08, 2020 - 23:24:14
-Django version 2.2.17, using settings 'django_postgres_app.settings'
-Starting development server at http://0.0.0.0:8000/
-Quit the server with CONTROL-C.
-```
-
-## Clean up the resources
-
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, and all related resources.
-
-```azurecli-interactive
-az group delete --name django-project --yes --no-wait
-```
-
-> [!NOTE]
-> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
-
-## Next steps
-- Learn how to [access the Kubernetes web dashboard](../../aks/kubernetes-dashboard.md) for your AKS cluster
-- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
-- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)
-- Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)
-- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
+
+ Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
+description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server.
++++ Last updated : 12/10/2020+++
+# Tutorial: Deploy Django app on AKS with Azure Database for PostgreSQL - Flexible Server
+
+In this tutorial, you deploy a Django application on an Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server (Preview) by using the Azure CLI.
+
+**[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server (Preview)](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
+
+> [!NOTE]
+> - Azure Database for PostgreSQL Flexible Server is currently in public preview
+> - This tutorial assumes a basic understanding of Kubernetes concepts, Django, and PostgreSQL.
+
+## Prerequisites
+
+- Launch [Azure Cloud Shell](https://shell.azure.com) in a new browser window, or [install Azure CLI](/cli/azure/install-azure-cli#install) on your local machine. If you're using a local install, sign in with Azure CLI by using the [az login](/cli/azure/reference-index#az_login) command. To finish the authentication process, follow the steps displayed in your terminal.
+- Run [az version](/cli/azure/reference-index?#az_version) to find the version and dependent libraries that are installed, and run [az upgrade](/cli/azure/reference-index?#az_upgrade) to upgrade to the latest version, as shown below. This article requires the latest version of Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
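+
+For example, you can check the installed version and upgrade to the latest release with:
+
+```azurecli-interactive
+az version
+az upgrade
+```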
+
+## Create a resource group
+
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project*, in the *eastus* location using the [az group create](/cli/azure/group#az_group_create) command.
+
+```azurecli-interactive
+az group create --name django-project --location eastus
+```
+
+> [!NOTE]
+> The location for the resource group is where resource group metadata is stored. It is also where your resources run in Azure if you don't specify another region during resource creation.
+
+The following example output shows the resource group created successfully:
+
+```json
+{
+ "id": "/subscriptions/<guid>/resourceGroups/django-project",
+ "location": "eastus",
+ "managedBy": null,
+
+ "name": "django-project",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+}
+```
+
+## Create AKS cluster
+
+Use the [az aks create](/cli/azure/aks#az_aks_create) command to create an AKS cluster. The following example creates a cluster named *djangoappcluster* with one node. This will take several minutes to complete.
+
+```azurecli-interactive
+az aks create --resource-group django-project --name djangoappcluster --node-count 1 --generate-ssh-keys
+```
+
+After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+> [!NOTE]
+> When creating an AKS cluster a second resource group is automatically created to store the AKS resources. See [Why are two resource groups created with AKS?](../../aks/faq.md#why-are-two-resource-groups-created-with-aks)
+
+## Connect to the cluster
+
+To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed.
+
+> [!NOTE]
+> If you're running Azure CLI locally, run the [az aks install-cli](/cli/azure/aks#az_aks_install_cli) command to install `kubectl`.
+
+To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az_aks_get_credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
+
+```azurecli-interactive
+az aks get-credentials --resource-group django-project --name djangoappcluster
+```
+
+To verify the connection to your cluster, use the [kubectl get]( https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command to return a list of the cluster nodes.
+
+```azurecli-interactive
+kubectl get nodes
+```
+
+The following example output shows the single node created in the previous steps. Make sure that the status of the node is *Ready*:
+
+```output
+NAME STATUS ROLES AGE VERSION
+aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
+```
+
+## Create an Azure Database for PostgreSQL - Flexible Server
+Create a flexible server with the [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az_postgres_flexible_server_create) command. The following command creates a server using service defaults and values from your Azure CLI's local context:
+
+```azurecli-interactive
+az postgres flexible-server create --public-access all
+```
+
+The server created has the following attributes:
+- A new empty database, ```postgres```, is created when the server is first provisioned. In this tutorial we will use this database.
+- An autogenerated server name, admin username, admin password, and resource group name (if not already specified in the local context), created in the same location as your resource group.
+- The ```--public-access all``` argument creates a server that is reachable by any client that presents the correct username and password (an explicitly parameterized example follows this list).
+- Because the command uses the local context, it creates the server in the resource group ```django-project``` and in the region ```eastus```.
++
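+If you prefer to choose the server name and credentials yourself instead of relying on autogenerated values, you can pass them explicitly. The following is only a sketch; the server name, admin username, and password are placeholders that you should replace with your own values:
+
+```azurecli-interactive
+az postgres flexible-server create \
+  --resource-group django-project \
+  --location eastus \
+  --name <your-server-name> \
+  --admin-user <your-admin-username> \
+  --admin-password <your-admin-password> \
+  --public-access all
+```
+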
+## Build your Django docker image
+
+Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/) or use your existing Django project. Make sure your code uses the following folder structure:
+
+> [!NOTE]
+> If you don't have an application you can go directly to [**Create Kubernetes manifest file**](./tutorial-django-aks-database.md#create-kubernetes-manifest-file) to use our sample image, [mksuni/django-aks-app:latest](https://hub.docker.com/r/mksuni/django-aks-app).
+
+```
+└───my-djangoapp
+      └───views.py
+      └───models.py
+      └───forms.py
+      ├───templates
+            . . . . . . .
+      ├───static
+            . . . . . . .
+└───my-django-project
+      └───settings.py
+      └───urls.py
+      └───wsgi.py
+      . . . . . . .
+      └─── Dockerfile
+      └─── requirements.txt
+      └─── manage.py
+
+```
+Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application accepts requests on the external IP that gets assigned to the Kubernetes app.
+
+```python
+ALLOWED_HOSTS = ['*']
+```
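+
+Allowing all hosts with ```'*'``` keeps this tutorial simple. Once you know the external IP assigned to your service, you can tighten this; one option is to read the allowed hosts from an environment variable. The variable name ```DJANGO_ALLOWED_HOSTS``` below is only an illustration and is not set by the manifest in this article:
+
+```python
+import os
+
+# Comma-separated list of hosts, for example "52.179.23.131,myblog.example.com".
+# Falls back to '*' when the variable is not set.
+ALLOWED_HOSTS = os.getenv('DJANGO_ALLOWED_HOSTS', '*').split(',')
+```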
+
+Update the ```DATABASES={ }``` section in the ```settings.py``` file. The code snippet below reads the database host, username, and password from the environment variables that the Kubernetes manifest file sets on the container.
+
+```python
+DATABASES={
+    'default':{
+        'ENGINE':'django.db.backends.postgresql_psycopg2',
+        'NAME':os.getenv('DATABASE_NAME'),
+        'USER':os.getenv('DATABASE_USERNAME'),
+        'PASSWORD':os.getenv('DATABASE_PASSWORD'),
+        'HOST':os.getenv('DATABASE_HOST'),
+        'PORT':'5432',
+        'OPTIONS': {'sslmode': 'require'}
+    }
+}
+```
+
+### Generate a requirements.txt file
+Create a ```requirements.txt``` file to list the dependencies for the Django application. Here is an example ```requirements.txt``` file. You can use [```pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
+
+``` text
+Django==2.2.17
+postgres==3.0.0
+psycopg2-binary==2.8.6
+psycopg2-pool==1.1
+pytz==2020.4
+```
+
+### Create a Dockerfile
+Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile sets up Python 3.8 and installs all the requirements listed in the requirements.txt file.
+
+```docker
+# Use the official Python image from the Docker Hub
+FROM python:3.8.2
+
+# Make a new directory to put our code in.
+RUN mkdir /code
+
+# Change the working directory.
+WORKDIR /code
+
+# Copy to code folder
+COPY . /code/
+
+# Install the requirements.
+RUN pip install -r requirements.txt
+
+# Run the application:
+CMD python manage.py runserver 0.0.0.0:8000
+```
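+
+The ```runserver``` command above starts Django's development server, which keeps this tutorial simple. For a production image you would typically serve the app with a WSGI server such as Gunicorn instead. The following is only a sketch: it installs Gunicorn explicitly (you could also add it to ```requirements.txt```), and it assumes the ```django_postgres_app``` settings package shown in the sample logs later in this article, so adjust the module path to your own project:
+
+```docker
+# Install Gunicorn in addition to the packages from requirements.txt.
+RUN pip install gunicorn
+
+# Serve the app with Gunicorn on port 8000 instead of the development server.
+CMD gunicorn --bind 0.0.0.0:8000 django_postgres_app.wsgi
+```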
+
+### Build your image
+In a terminal, use the ```cd``` command to change to the directory that contains your ```Dockerfile```. Run the following command to build your Django image:
+
+``` bash
+
+docker build --tag myblog:latest .
+
+```
+
+Push your image to [Docker Hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container Registry](../../container-registry/container-registry-get-started-azure-cli.md).
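+
+If you're using Azure Container Registry, the push typically looks like the following sketch, where ```<your-acr-name>``` is a placeholder for your own registry name:
+
+```bash
+# Sign in to your registry.
+az acr login --name <your-acr-name>
+
+# Tag the local image with the registry login server, then push it.
+docker tag myblog:latest <your-acr-name>.azurecr.io/myblog:latest
+docker push <your-acr-name>.azurecr.io/myblog:latest
+```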
+
+> [!IMPORTANT]
+>If you are using Azure Container Registry (ACR), run the ```az aks update``` command to attach the ACR account to the AKS cluster.
+>
+>```azurecli-interactive
+>az aks update -n djangoappcluster -g django-project --attach-acr <your-acr-name>
+> ```
+>
+
+## Create Kubernetes manifest file
+
+A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
+
+>[!IMPORTANT]
+> - Replace ```[DOCKER-HUB-USER/ACR ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]``` with your actual Django docker image name and tag, for example ```docker-hub-user/myblog:latest```. You can use the demo sample app ```mksuni/django-aks-app:latest``` in the manifest file.
+> - Update the ```env``` section below with the ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, and ```YOUR-DATABASE-PASSWORD``` values for your PostgreSQL flexible server.
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: django-app
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: django-app
+ template:
+ metadata:
+ labels:
+ app: django-app
+ spec:
+ containers:
+ - name: django-app
+ image: [DOCKER-HUB-USER-OR-ACR-ACCOUNT]/[YOUR-IMAGE-NAME]:[TAG]
+ ports:
+ - containerPort: 80
+ env:
+ - name: DATABASE_HOST
+ value: "SERVERNAME.postgres.database.azure.com"
+ - name: DATABASE_USERNAME
+ value: "YOUR-DATABASE-USERNAME"
+ - name: DATABASE_PASSWORD
+ value: "YOUR-DATABASE-PASSWORD"
+ - name: DATABASE_NAME
+ value: "postgres"
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: "app"
+ operator: In
+ values:
+ - django-app
+ topologyKey: "kubernetes.io/hostname"
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: python-svc
+spec:
+ type: LoadBalancer
+ ports:
+ - port: 8000
+ selector:
+ app: django-app
+```
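+
+The manifest above passes the database password as a plain environment variable ```value```, which keeps the tutorial simple. For anything beyond a demo you would typically store it in a Kubernetes secret instead. The snippet below is a sketch that assumes a secret named ```django-db-secret``` that you create yourself; nothing else in this article creates it:
+
+```yaml
+# Create the secret first, for example:
+#   kubectl create secret generic django-db-secret --from-literal=password='YOUR-DATABASE-PASSWORD'
+# Then replace the literal DATABASE_PASSWORD entry in the env section with a reference to the secret:
+      - name: DATABASE_PASSWORD
+        valueFrom:
+          secretKeyRef:
+            name: django-db-secret
+            key: password
+```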
+
+## Deploy Django to AKS cluster
+Deploy the application using the [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest:
+
+```console
+kubectl apply -f djangoapp.yaml
+```
+
+The following example output shows the Deployments and Services created successfully:
+
+```output
+deployment "django-app" created
+service "python-svc" created
+```
+
+A deployment ```django-app``` describes the details of your deployment, such as which image to use for the app, the number of pods, and the pod configuration. A service ```python-svc``` is created to expose the application through an external IP.
+
+## Test the application
+
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+
+To monitor progress, use the [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument.
+
+```azurecli-interactive
+kubectl get service django-app --watch
+```
+
+Initially the *EXTERNAL-IP* for the *django-app* service is shown as *pending*.
+
+```output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+django-app LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+```
+
+When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+
+```output
+django-app LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+```
+
+Now open a web browser to the external IP address of your service to view the Django application.
+
+>[!NOTE]
+> - Currently the Django site is not using HTTPS. It is recommended to [enable TLS with your own certificates](../../aks/ingress-own-tls.md).
+> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When HTTP routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints (see the example after this note).
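+
+If you want to try the HTTP application routing add-on mentioned in the note above, you can enable it on the existing cluster with a single command. This is a sketch that reuses the resource group and cluster names from this article; the add-on is intended for dev/test scenarios rather than production workloads:
+
+```azurecli-interactive
+az aks enable-addons --resource-group django-project --name djangoappcluster --addons http_application_routing
+```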
+
+## Run database migrations
+
+For any Django application, you need to run database migrations or collect static files. You can run these Django management commands using ```$ kubectl exec <pod-name> -- [COMMAND]```. Before running a command, find the pod name using ```kubectl get pods```.
+
+```bash
+$ kubectl get pods
+```
+
+You will see output like this:
+```output
+NAME READY STATUS RESTARTS AGE
+django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m
+```
+
+Once you have the pod name, you can run Django database migrations with the command ```$ kubectl exec <pod-name> -- [COMMAND]```. Note that ```/code/``` is the working directory for the project, defined in the ```Dockerfile``` above.
+
+```bash
+$ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py migrate
+```
+
+The output will look like this:
+```output
+Operations to perform:
+ Apply all migrations: admin, auth, contenttypes, sessions
+Running migrations:
+ Applying contenttypes.0001_initial... OK
+ Applying auth.0001_initial... OK
+ Applying admin.0001_initial... OK
+ Applying admin.0002_logentry_remove_auto_add... OK
+ Applying admin.0003_logentry_add_action_flag_choices... OK
+ . . . . . .
+```
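+
+You can run other Django management commands the same way. For example, the following sketch reuses the pod name from above to collect static files:
+
+```bash
+$ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py collectstatic --noinput
+```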
+
+If you run into issues, run ```kubectl logs <pod-name>``` to see what exception is thrown by your application. If the application is working successfully, you will see output like this when running ```kubectl logs```.
+
+```output
+Watching for file changes with StatReloader
+Performing system checks...
+
+System check identified no issues (0 silenced).
+
+You have 17 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
+Run 'python manage.py migrate' to apply them.
+December 08, 2020 - 23:24:14
+Django version 2.2.17, using settings 'django_postgres_app.settings'
+Starting development server at http://0.0.0.0:8000/
+Quit the server with CONTROL-C.
+```
+
+## Clean up the resources
+
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, and all related resources.
+
+```azurecli-interactive
+az group delete --name django-project --yes --no-wait
+```
+
+> [!NOTE]
+> When you delete the cluster, the Azure Active Directory service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#additional-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+
+## Next steps
+
+- Learn how to [access the Kubernetes web dashboard](../../aks/kubernetes-dashboard.md) for your AKS cluster
+- Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md)
+- Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)
+- Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)
+- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
# What is Azure Private Endpoint?
-Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. The service could be an Azure service such as Azure Storage, Azure Cosmos DB, SQL, etc. or your own [Private Link Service](private-link-service-overview.md).
+Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. The service could be an Azure service such as Azure Storage, Azure Cosmos DB, Azure SQL Database, or your own [Private Link Service](private-link-service-overview.md).
## Private Endpoint properties

A Private Endpoint specifies the following properties:
remote-rendering Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/how-tos/authentication.md
Azure AD authentication is described in the [Azure Spatial Anchors documentation
Follow the steps to configure Azure Active Directory user authentication in the Azure portal.
-1. Register your application in Azure AD. As part of registering, you will need to determine whether your application should be multitenant. You will also need to provide the redirect URLs allowed for your application in the Authentication blade.
-![Authentication setup](./media/aad-app-setup.png)
+1. Register your application in Azure Active Directory. As part of registering, you will need to determine whether your application should be multitenant. You will also need to provide the redirect URLs allowed for your application in the Authentication blade.
+ 1. In the API permissions tab, request **Delegated Permissions** for **mixedreality.signin** scope under **mixedreality**.
-![Api permissions](./media/aad-app-api-permissions.png)
+ 1. Grant admin consent in the Security -> Permissions tab.
-![Admin consent](./media/aad-app-grant-admin-consent.png)
-1. In the Access Control panel grant desired [roles](#azure-role-based-access-control) for your applications and user, on behalf of which you want to use delegated access permissions to your Azure Remote Rendering resource.
-![Add permissions](./media/arr-add-role-assignment.png)
-![Role assignments](./media/arr-role-assignments.png)
+
+1. Then, navigate to your Azure Remote Rendering Resource. In the Access Control panel grant desired [roles](#azure-role-based-access-control) for your applications and user, on behalf of which you want to use delegated access permissions to your Azure Remote Rendering resource.
For information on using Azure AD user authentication in your application code, see the [Tutorial: Securing Azure Remote Rendering and model storage - Azure Active Directory authentication](../tutorials/unity/security/security.md#azure-active-directory-azure-ad-authentication)
remote-rendering Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/tutorials/unity/security/security.md
The **RemoteRenderingCoordinator** script has a delegate named **ARRCredentialGe
1. After configuring the new AAD application, check your AAD application looks like the following images: **AAD Application -> Authentication**
- ![App authentication](./../../../how-tos/media/aad-app-setup.png)
+ :::image type="content" source="./../../../how-tos/media/azure-active-directory-app-setup.png" alt-text="App authentication":::
**AAD Application -> API Permissions**
- ![App APIs](./media/request-api-permissions-step-five.png)
+ :::image type="content" source="./media/azure-active-directory-api-permissions-granted.png" alt-text="App APIs":::
1. After configuring your Remote Rendering account, check your configuration looks like the following image: **AAR -> AccessControl (IAM)**
- ![ARR Role](./../../../how-tos/media/arr-role-assignments.png)
+ :::image type="content" source="./../../../how-tos/media/azure-remote-rendering-role-assignments.png" alt-text="ARR Role":::
>[!NOTE] > An *Owner* role is not sufficient to manage sessions via the client application. For every user you want to grant the ability to manage sessions you must provide the role **Remote Rendering Client**. For every user you want to manage sessions and convert models, you must provide the role **Remote Rendering Administrator**.
In the Unity Editor, when AAD Auth is active, you will need to authenticate ever
* **Azure Remote Rendering Account ID** is the same **Account ID** you've been using for **RemoteRenderingCoordinator**. * **Azure Remote Rendering Account Domain** is the same **Account Domain** you've been using in the **RemoteRenderingCoordinator**.
- ![Screenshot that highlights the Application (client) ID and Directory (tenant) ID.](./media/app-overview-data.png)
+ :::image type="content" source="./media/azure-active-directory-app-overview.png" alt-text="Screenshot that highlights the Application (client) ID and Directory (tenant) ID.":::
1. Press Play in the Unity Editor and consent to running a session. Since the **AAD Authentication** component has a view controller, its automatically hooked up to display a prompt after the session authorization modal panel.
role-based-access-control Role Assignments Portal Subscription Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal-subscription-admin.md
Previously updated : 01/11/2021 Last updated : 06/25/2021
To make a user an administrator of an Azure subscription, assign them the [Owner
1. In the Search box at the top, search for subscriptions.
- ![Azure portal search for resource group](./media/shared/sub-portal-search.png)
- 1. Click the subscription you want to use. The following shows an example subscription.
- ![Resource group overview](./media/shared/sub-overview.png)
+ ![Screenshot of Subscriptions overview](./media/shared/sub-overview.png)
-## Step 2: Open the Add role assignment pane
+## Step 2: Open the Add role assignment page
**Access control (IAM)** is the page that you typically use to assign roles to grant access to Azure resources. It's also known as identity and access management (IAM) and appears in several locations in the Azure portal.
To make a user an administrator of an Azure subscription, assign them the [Owner
The following shows an example of the Access control (IAM) page for a subscription.
- ![Access control (IAM) page for a resource group](./media/shared/sub-access-control.png)
+ ![Screenshot of Access control (IAM) page for a subscription.](./media/shared/sub-access-control.png)
1. Click the **Role assignments** tab to view the role assignments at this scope.
-1. Click **Add** > **Add role assignment**.
- If you don't have permissions to assign roles, the Add role assignment option will be disabled.
+1. Click **Add** > **Add role assignment (Preview)**.
- ![Add role assignment menu](./media/shared/add-role-assignment-menu.png)
+ If you don't have permissions to assign roles, the Add role assignment option will be disabled.
- The Add role assignment pane opens.
+ ![Screenshot of Add > Add role assignment menu for preview experience.](./media/shared/add-role-assignment-menu-preview.png)
- ![Add role assignment pane](./media/shared/add-role-assignment.png)
+ The Assign a role page opens.
## Step 3: Select the Owner role

The [Owner](built-in-roles.md#owner) role grants full access to manage all resources, including the ability to assign roles in Azure RBAC. You should have a maximum of 3 subscription owners to reduce the potential for breach by a compromised owner.

-- In the **Role** list, select the **Owner** role.
+1. On the **Roles** tab, select the **Owner** role.
- ![Select Owner role in Add role assignment pane](./media/role-assignments-portal-subscription-admin/add-role-assignment-role-owner.png)
+ You can search for a role by name or by description. You can also filter roles by type and category.
+
+ ![Screenshot of Add role assignment page with Roles tab for preview experience.](./media/shared/roles.png)
+
+1. Click **Next**.
## Step 4: Select who needs access
-1. In the **Assign access to** list, select **User, group, or service principal**.
+1. On the **Members** tab, select **User, group, or service principal**.
+
+ ![Screenshot of Add role assignment page with Add members tab for preview experience.](./media/shared/members.png)
+
+1. Click **Select members**.
+
+1. Find and select the user.
+
+ You can type in the **Select** box to search the directory for display name or email address.
+
+ ![Screenshot of Select members pane for preview experience.](./media/shared/select-members.png)
+
+1. Click **Save** to add the user to the Members list.
-1. In the **Select** section, search for the user by entering a string or scrolling through the list.
+1. In the **Description** box enter an optional description for this role assignment.
- ![Select user in Add role assignment](./media/role-assignments-portal-subscription-admin/add-role-assignment-user-admin.png)
+ Later you can show this description in the role assignments list.
-1. Once you have found the user, click to select it.
+1. Click **Next**.
## Step 5: Assign role
-1. To assign the role, click **Save**.
+1. On the **Review + assign** tab, review the role assignment settings.
- After a few moments, the user is assigned the role at the selected scope.
+1. Click **Review + assign** to assign the role.
-1. On the **Role assignments** tab, verify that you see the role assignment in the list.
+ After a few moments, the user is assigned the Owner role for the subscription.
- ![Add role assignment saved](./media/role-assignments-portal-subscription-admin/sub-role-assignments-owner.png)
+ ![Screenshot of role assignment list after assigning role for preview experience.](./media/role-assignments-portal-subscription-admin/sub-role-assignments-owner.png)
## Next steps
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 05/07/2021 Last updated : 06/25/2021
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
- ![Screenshot of Azure portal search for resource group.](./media/shared/rg-portal-search.png)
- 1. Click the specific resource for that scope. The following shows an example resource group.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
- ![Screenshot of Azure portal search for resource group for preview experience.](./media/shared/rg-portal-search.png)
- 1. Click the specific resource for that scope. The following shows an example resource group.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
1. Click the **Role assignments** tab to view the role assignments at this scope.
-1. Click **Add** > **Add role assignment (preview)**.
+1. Click **Add** > **Add role assignment (Preview)**.
If you don't have permissions to assign roles, the Add role assignment option will be disabled.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
You can search for a role by name or by description. You can also filter roles by type and category.
- ![Screenshot of Add role assignment page with Select role tab for preview experience.](./media/role-assignments-portal/roles.png)
+ ![Screenshot of Add role assignment page with Roles tab for preview experience.](./media/shared/roles.png)
1. In the **Details** column, click **View** to get more details about a role.
Azure RBAC has a new experience for assigning Azure roles in the Azure portal th
1. On the **Members** tab, select **User, group, or service principal** to assign the selected role to one or more Azure AD users, groups, or service principals (applications).
- ![Screenshot of Add role assignment page with Add members tab for preview experience.](./media/role-assignments-portal/members.png)
+ ![Screenshot of Add role assignment page with Members tab for preview experience.](./media/shared/members.png)
-1. Click **Add members**.
+1. Click **Select members**.
1. Find and select the users, groups, or service principals.
- You can type in the **Select** box to search the directory for display names, email addresses, and object identifiers.
+ You can type in the **Select** box to search the directory for display name or email address.
- ![Screenshot of Add members using Select members pane for preview experience.](./media/role-assignments-portal/select-principal.png)
+ ![Screenshot of Select members pane for preview experience.](./media/shared/select-members.png)
1. Click **Save** to add the users, groups, or service principals to the Members list. 1. To assign the selected role to one or more managed identities, select **Managed identity**.
-1. Click **Add members**.
+1. Click **Select members**.
1. In the **Select managed identities** pane, select whether the type is [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) or [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-attach-cognitive-services.md
Title: Attach Cognitive Services to a skillset
description: Learn how to attach a Cognitive Services all-in-one subscription to an AI enrichment pipeline in Azure Cognitive Search. --++ Last updated 02/16/2021
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-annotations-syntax.md
Title: Reference inputs and outputs in skillsets
description: Explains the annotation syntax and how to reference an annotation in the inputs and outputs of a skillset in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-image-scenarios.md
Title: Extract text from images
description: Process and extract text and other information from images in Azure Cognitive Search pipelines. ---++ Last updated 11/04/2019
search Cognitive Search Concept Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-troubleshooting.md
Title: Tips for AI enrichment design
description: Tips and troubleshooting for setting up AI enrichment pipelines in Azure Cognitive Search. ---++ Last updated 06/08/2020
search Cognitive Search Create Custom Skill Example https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-create-custom-skill-example.md
Title: 'Custom skill example using Bing Entity Search API'
description: Demonstrates using the Bing Entity Search service in a custom skill mapped to an AI-enriched indexing pipeline in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Custom Skill Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-interface.md
Title: Interface definition for custom skills
description: Custom data extraction interface for web-api custom skill in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 05/06/2020
search Cognitive Search Custom Skill Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-python.md
Title: 'Custom skill example (Python)'
description: For Python developers, learn the tools and techniques for building a custom skill using Azure Functions and Visual Studio. Custom skills contain user-defined models or logic that you can add to an AI-enriched indexing pipeline in Azure Cognitive Search. ---++ Last updated 01/15/2020
search Cognitive Search Custom Skill Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-web-api.md
Title: Custom Web API skill in skillsets
description: Extend capabilities of Azure Cognitive Search skillsets by calling out to Web APIs. Use the Custom Web API skill to integrate your custom code. ---++ Last updated 06/17/2020
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
Title: Create a skillset
description: Define data extraction, natural language processing, or image analysis steps to enrich and extract structured information from your data for use in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-output-field-mapping.md
Title: Map input to output fields
description: Extract and enrich source data fields, and map to output fields in an Azure Cognitive Search index. ---++ Last updated 11/04/2019
search Cognitive Search Predefined Skills https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-predefined-skills.md
Title: Built-in text and image processing during indexing
description: Data extraction, natural language, image processing cognitive skills add semantics and structure to raw content in an Azure Cognitive Search pipeline. ---++ Last updated 11/04/2019
search Cognitive Search Skill Conditional https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-conditional.md
Title: Conditional cognitive skill
description: The conditional skill in Azure Cognitive Search enables filtering, creating defaults, and merging values in a skillset definition. ---++ Last updated 11/04/2019
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-custom-entity-lookup.md
Title: Custom Entity Lookup cognitive search skill
description: Extract different custom entities from text in an Azure Cognitive Search cognitive search pipeline. ---++ Last updated 06/17/2020
search Cognitive Search Skill Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-deprecated.md
Title: Deprecated cognitive skills
description: This page contains a list of cognitive skills that are considered deprecated and will not be supported in the near future in Azure Cognitive Search skillsets. ---++ Last updated 11/04/2019
search Cognitive Search Skill Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-entity-recognition.md
Title: Entity Recognition cognitive skill
description: Extract different types of entities from text in an enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-image-analysis.md
Title: Image Analysis cognitive skill
description: Extract semantic text through image analysis using the Image Analysis cognitive skill in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
Last updated 06/17/2020
The **Image Analysis** skill extracts a rich set of visual features based on the image content. For example, you can generate a caption from an image, generate tags, or identify celebrities and landmarks. This skill uses the machine learning models provided by [Computer Vision](../cognitive-services/computer-vision/overview.md) in Cognitive Services.
+**Image Analysis** works on images that meet the following requirements:
+
++ The image must be presented in JPEG, PNG, GIF, or BMP format
++ The file size of the image must be less than 4 megabytes (MB)
++ The dimensions of the image must be greater than 50 x 50 pixels
+
> [!NOTE]
> Small volumes (under 20 transactions) can be executed for free in Azure Cognitive Search, but larger workloads require [attaching a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
>
> Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/). Image extraction pricing is described on the [Azure Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/).
+## @odata.type
-## @odata.type
Microsoft.Skills.Vision.ImageAnalysisSkill ## Skill parameters
Parameters are case-sensitive.
||| | `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. See the [sample](#sample-output) for more information.| -- ## Sample skill definition ```json
Parameters are case-sensitive.
] } ```+ ### Sample index (for only the categories, description, faces and tags fields)+ ```json { "fields": [
Parameters are case-sensitive.
} ```+ ### Sample output field mapping (for the above index)+ ```json "outputFieldMappings": [ {
Parameters are case-sensitive.
"targetFieldName": "brands" } ```+ ### Variation on output field mappings (nested properties) You can define output field mappings to lower-level properties, such as just landmarks or celebrities. In this case, make sure your index schema has a field to contain landmarks specifically.
You can define output field mappings to lower-level properties, such as just lan
"targetFieldName": "celebrities" } ```+ ## Sample input ```json
If you get the error similar to `"One or more skills are invalid. Details: Error
## See also ++ [What is Image Analysis?](../cognitive-services/computer-vision/overview-image-analysis.md) + [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [Create Indexer (REST)](/rest/api/searchservice/create-indexer)
search Cognitive Search Skill Keyphrases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-keyphrases.md
Title: Key Phrase Extraction cognitive skill
description: Evaluates unstructured text, and for each record, returns a list of key phrases in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Skill Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-language-detection.md
Title: Language detection cognitive skill
description: Evaluates unstructured text, and for each record, returns a language identifier with a score indicating the strength of the analysis in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
search Cognitive Search Skill Named Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-named-entity-recognition.md
Title: Named Entity Recognition cognitive skill
description: Extract named entities for person, location and organization from text in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-ocr.md
Title: OCR cognitive skill
description: Extract text from image files using optical character recognition (OCR) in an enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
The above skillset example assumes that a normalized-images field exists. To gen
``` ## See also+++ [What is optical character recognition](../cognitive-services/computer-vision/overview-ocr.md) + [Built-in skills](cognitive-search-predefined-skills.md) + [TextMerger skill](cognitive-search-skill-textmerger.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-sentiment.md
Title: Sentiment cognitive skill
description: Extract a positive-negative sentiment score from text in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
search Cognitive Search Skill Shaper https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-shaper.md
Title: Shaper cognitive skill
description: Extract metadata and structured information from unstructured data and shape it as a complex type in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 11/04/2019
search Cognitive Search Skill Textmerger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-textmerger.md
Title: Text Merge cognitive skill
description: Merge text from a collection of fields into one consolidated field. Use this cognitive skill in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
search Cognitive Search Skill Textsplit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-skill-textsplit.md
Title: Text split cognitive skill
description: Break text into chunks or pages of text based on length in an AI enrichment pipeline in Azure Cognitive Search. ---++ Last updated 06/17/2020
search Cognitive Search Tutorial Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-tutorial-blob.md
Title: 'Tutorial: REST and AI over Azure blobs'
description: Step through an example of text extraction and natural language processing over content in Blob Storage using Postman and the Azure Cognitive Search REST APIs. ---++ Last updated 11/17/2020
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-ranking-similarity.md
Title: Configure the similarity algorithm description: Learn how to enable BM25 on older search services, and how BM25 parameters can be modified to better accommodate the content of your indexes.---+++ Last updated 03/12/2021
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-similarity-and-scoring.md
Title: Similarity and scoring overview description: Explains the concepts of similarity and scoring, and what a developer can do to customize the scoring result.---+++ Last updated 03/02/2021
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/resource-partners-knowledge-mining.md
Get expert help from Microsoft partners who build comprehensive solutions that i
| ![Neudesic](media/resource-partners/neudesic-logo.png "Neudesic company logo") | [**Neudesic**](https://www.neudesic.com/) is the trusted technology partner in business innovation, delivering impactful business results to clients through digital modernization and evolution. Our consultants bring business and technology expertise together, offering a wide range of cloud and data-driven solutions, including custom application development, data and artificial intelligence, comprehensive managed services, and business software products. Founded in 2002, Neudesic is a privately held company headquartered in Irvine, California. | [Product page](https://www.neudesic.com/services/modern-workplace/document-intelligence-platform-schedule-demo/)| | ![OrangeNXT](media/resource-partners/orangenxt-beldmerk-boven-160px.png "OrangeNXT company logo") | [**OrangeNXT**](https://orangenxt.com/) offers expertise in data consolidation, data modeling, and building skillsets that include custom logic developed for specific use-cases.</br></br>digitalNXT Search is an OrangeNXT solution that combines AI, optical character recognition (OCR), and natural language processing in Azure Cognitive Search pipeline to help you extract search results from multiple structured and unstructured data sources. Integral to digitalNXT Search is advanced custom cognitive skills for interpreting and correlating selected data.</br></br>| [Product page](https://orangenxt.com/solutions/digitalnxt/digitalnxt-search/)| | ![Plain Concepts](media/resource-partners/plain-concepts-logo.png "Plain Concepts company logo") | [**Plain Concepts**](https://www.plainconcepts.com/contact/) is a Microsoft Partner with over 15 years of cloud, data, and AI expertise on Azure, and more than 12 Microsoft MVP awards. We specialize in the creation of new data relationships among heterogeneous information sources, which combined with our experience with Artificial Intelligence, Machine Learning, and Cognitive Services, exponentially increases the productivity of both machines and human teams. We help customers to face the digital revolution with the AI-based solutions that best suits their company requirements.| [Product page](https://www.plainconcepts.com/artificial-intelligence/) |
-| ![Raytion](media/resource-partners/raytion-logo-blue.png "Raytion company logo") | [**Raytion**](https://www.raytion.com/) is an internationally operating IT business consultancy with a strategic focus on collaboration, search and cloud. Raytion offers intelligent and fully featured search solutions based on Microsoft Azure Cognitive Search and the Raytion product suite. Raytion's solutions enable an easy indexation of a broad range of enterprise content systems and provide a sophisticated search experience, which can be tailored to individual requirements. They are the foundation of enterprise search, site searches, product finders and many more applications. | [Product page](https://www.raytion.com/connectors) |
+| ![Raytion](media/resource-partners/raytion-logo-blue.png "Raytion company logo") | [**Raytion**](https://www.raytion.com/) is an internationally operating IT business consultancy with a strategic focus on collaboration, search and cloud. Raytion offers intelligent and fully featured search solutions based on Microsoft Azure Cognitive Search and the Raytion product suite. Raytion's solutions enable an easy indexation of a broad range of enterprise content systems and provide a sophisticated search experience, which can be tailored to individual requirements. They are the foundation of enterprise search, knowledge searches, service desk agent support and many more applications. | [Product page](https://www.raytion.com/connectors) |
search Search Howto Powerapps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-powerapps.md
Title: 'Tutorial: Query from Power Apps' description: Step-by-step guidance on how to build a Power App that connects to an Azure Cognitive Search index, sends queries, and renders results.---+++ Last updated 11/17/2020
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
A search service is hosted on Azure and typically accessed over public network c
Cognitive Search has three basic network traffic patterns:
-+ Inbound requests made to the search service (the predominant pattern)
++ Inbound requests made by a client to the search service (the predominant pattern) + Outbound requests issued by the search service to other services on Azure and elsewhere + Internal service-to-service requests over the secure Microsoft backbone network
-Inbound requests range from creating objects, loading data, and queries. For inbound access, there is a progression of security measures protecting the search service endpoint: from API keys on the request, to inbound rules in the firewall, to private endpoints that fully shield your service from the public internet.
+Inbound requests range from creating objects, loading data, and querying. For inbound access to data and operations, you can implement a progression of security measures, starting with API keys on the request (required). You can then supplement with either inbound rules in an IP firewall, or create private endpoints that fully shield your service from the public internet.
-Outbound requests are mostly made by indexers, and include both read and write operations. Read operations include data ingestion or document cracking when loading content from external sources. Write operations to external services are few: a search service writes to log files, and it will write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions. Finally, a skillset can also include custom skills that run external code, for example in Azure Functions or in a web app.
+Outbound requests can include both read and write operations. The primary agent of an outbound call is an indexer, but the service itself writes to log files if you enable diagnostic logging through Azure Monitor. For indexers, read operations include document cracking and data ingestion. An indexer can also write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions. Finally, a skillset can also include custom skills that run external code, for example in Azure Functions or in a web app.
-Internal requests include service-to-service calls, such as calls made to Cognitive Services if you are using built-in skills, or to Azure Private Link if you set up a private endpoint.
+Internal requests include service-to-service calls, such as calls made to Cognitive Services that provides the built-in skills, or to Azure Private Link if you set up a private endpoint.
## Network security
service-bus-messaging Configure Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/configure-customer-managed-key.md
In this step, you will update the Service Bus namespace with key vault informati
"properties":{ "encryption":{ "keySource":"Microsoft.KeyVault",
+ "requireInfrastructureEncryption":"true",
"keyVaultProperties":[ { "keyName":"[parameters('keyName')]",
In this step, you will update the Service Bus namespace with key vault informati
## Next steps See the following articles: - [Service Bus overview](service-bus-messaging-overview.md)-- [Key Vault overview](../key-vault/general/overview.md)
+- [Key Vault overview](../key-vault/general/overview.md)
storage Table Storage Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/tables/table-storage-design-patterns.md
Previously updated : 04/08/2019 Last updated : 06/24/2021 # Table design patterns+ This article describes some patterns appropriate for use with Table service solutions. Also, you will see how you can practically address some of the issues and trade-offs discussed in other Table storage design articles. The following diagram summarizes the relationships between the different patterns: ![to look up related data](media/storage-table-design-guide/storage-table-design-IMAGE05.png) - The pattern map above highlights some relationships between patterns (blue) and anti-patterns (orange) that are documented in this guide. There are of many other patterns that are worth considering. For example, one of the key scenarios for Table Service is to use the [Materialized View Pattern](/previous-versions/msp-n-p/dn589782(v=pandp.10)) from the [Command Query Responsibility Segregation (CQRS)](/previous-versions/msp-n-p/jj554200(v=pandp.10)) pattern. ## Intra-partition secondary index pattern+ Store multiple copies of each entity using different **RowKey** values (in the same partition) to enable fast and efficient lookups and alternate sort orders by using different **RowKey** values. Updates between copies can be kept consistent using entity group transactions (EGTs). ### Context and problem+ The Table service automatically indexes entities using the **PartitionKey** and **RowKey** values. This enables a client application to retrieve an entity efficiently using these values. For example, using the table structure shown below, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the **PartitionKey** and **RowKey** values). A client can also retrieve entities sorted by employee ID within each department.
-![Image06](media/storage-table-design-guide/storage-table-design-IMAGE06.png)
+![Graphic of employee entity where a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the PartitionKey and RowKey values).](media/storage-table-design-guide/storage-table-design-IMAGE06.png)
If you also want to be able to find an employee entity based on the value of another property, such as email address, you must use a less efficient partition scan to find a match. This is because the table service does not provide secondary indexes. In addition, there is no option to request a list of employees sorted in a different order than **RowKey** order. ### Solution+ To work around the lack of secondary indexes, you can store multiple copies of each entity with each copy using a different **RowKey** value. If you store an entity with the structures shown below, you can efficiently retrieve employee entities based on email address or employee ID. The prefix values for the **RowKey**, "empid_" and "email_" enable you to query for a single employee or a range of employees by using a range of email addresses or employee IDs.
-![Employee entities](media/storage-table-design-guide/storage-table-design-IMAGE07.png)
+![Graphic showing employee entity with varying RowKey values](media/storage-table-design-guide/storage-table-design-IMAGE07.png)
The following two filter criteria (one looking up by employee ID and one looking up by email address) both specify point queries:
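For example, using illustrative key values, the two point-query filters might look like this:

`$filter=(PartitionKey eq 'Sales') and (RowKey eq 'empid_000223')`

`$filter=(PartitionKey eq 'Sales') and (RowKey eq 'email_jonesj@contoso.com')`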
If you query for a range of employee entities, you can specify a range sorted in
The filter syntax used in the examples above is from the Table service REST API. For more information, see [Query Entities](/rest/api/storageservices/Query-Entities). ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * Table storage is relatively cheap to use so the cost overhead of storing duplicate data should not be a major concern. However, you should always evaluate the cost of your design based on your anticipated storage requirements and only add duplicate entities to support the queries your client application will execute.
Consider the following points when deciding how to implement this pattern:
* Padding numeric values in the **RowKey** (for example, the employee ID 000223) enables correct sorting and filtering based on upper and lower bounds. * You do not necessarily need to duplicate all the properties of your entity. For example, if the queries that look up the entities using the email address in the **RowKey** never need the employee's age, these entities could have the following structure:
- ![Employee entity structure](media/storage-table-design-guide/storage-table-design-IMAGE08.png)
-
+ ![Graphic of employee entity](media/storage-table-design-guide/storage-table-design-IMAGE08.png)
* It is typically better to store duplicate data and ensure that you can retrieve all the data you need with a single query, than to use one query to locate an entity and another to look up the required data. ### When to use this pattern+ Use this pattern when your client application needs to retrieve entities using a variety of different keys, when your client needs to retrieve entities in different sort orders, and where you can identify each entity using a variety of unique values. However, you should be sure that you do not exceed the partition scalability limits when you are performing entity lookups using the different **RowKey** values. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Inter-partition secondary index pattern](#inter-partition-secondary-index-pattern)
The following patterns and guidance may also be relevant when implementing this
* [Working with heterogeneous entity types](#working-with-heterogeneous-entity-types) ## Inter-partition secondary index pattern+ Store multiple copies of each entity using different **RowKey** values in separate partitions or in separate tables to enable fast and efficient lookups and alternate sort orders by using different **RowKey** values. ### Context and problem+ The Table service automatically indexes entities using the **PartitionKey** and **RowKey** values. This enables a client application to retrieve an entity efficiently using these values. For example, using the table structure shown below, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the **PartitionKey** and **RowKey** values). A client can also retrieve entities sorted by employee ID within each department.
-![Employee ID](media/storage-table-design-guide/storage-table-design-IMAGE09.png)
+![Graphic of employee entity structure that, when used, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the PartitionKey and RowKey values).](media/storage-table-design-guide/storage-table-design-IMAGE09.png)
If you also want to be able to find an employee entity based on the value of another property, such as email address, you must use a less efficient partition scan to find a match. This is because the table service does not provide secondary indexes. In addition, there is no option to request a list of employees sorted in a different order than **RowKey** order. You are anticipating a high volume of transactions against these entities and want to minimize the risk of the Table service throttling your client. ### Solution
-To work around the lack of secondary indexes, you can store multiple copies of each entity with each copy using different **PartitionKey** and **RowKey** values. If you store an entity with the structures shown below, you can efficiently retrieve employee entities based on email address or employee ID. The prefix values for the **PartitionKey**, "empid_" and "email_" enable you to identify which index you want to use for a query.
-![Primary index and secondary index](media/storage-table-design-guide/storage-table-design-IMAGE10.png)
+To work around the lack of secondary indexes, you can store multiple copies of each entity with each copy using different **PartitionKey** and **RowKey** values. If you store an entity with the structures shown below, you can efficiently retrieve employee entities based on email address or employee ID. The prefix values for the **PartitionKey**, "empid_" and "email_" enable you to identify which index you want to use for a query.
+![Graphic showing employee entity with primary index and employee entity with secondary index](media/storage-table-design-guide/storage-table-design-IMAGE10.png)
The following two filter criteria (one looking up by employee ID and one looking up by email address) both specify point queries:
If you query for a range of employee entities, you can specify a range sorted in
The filter syntax used in the examples above is from the Table service REST API. For more information, see [Query Entities](/rest/api/storageservices/Query-Entities). ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * You can keep your duplicate entities eventually consistent with each other by using the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) to maintain the primary and secondary index entities.
Consider the following points when deciding how to implement this pattern:
* Padding numeric values in the **RowKey** (for example, the employee ID 000223) enables correct sorting and filtering based on upper and lower bounds. * You do not necessarily need to duplicate all the properties of your entity. For example, if the queries that look up the entities using the email address in the **RowKey** never need the employee's age, these entities could have the following structure:
- ![Employee entity (secondary index)](media/storage-table-design-guide/storage-table-design-IMAGE11.png)
+ ![Graphic showing employee entity with secondary index](media/storage-table-design-guide/storage-table-design-IMAGE11.png)
* It is typically better to store duplicate data and ensure that you can retrieve all the data you need with a single query than to use one query to locate an entity using the secondary index and another to look up the required data in the primary index. ### When to use this pattern+ Use this pattern when your client application needs to retrieve entities using a variety of different keys, when your client needs to retrieve entities in different sort orders, and where you can identify each entity using a variety of unique values. Use this pattern when you want to avoid exceeding the partition scalability limits when you are performing entity lookups using the different **RowKey** values. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
The following patterns and guidance may also be relevant when implementing this
* [Working with heterogeneous entity types](#working-with-heterogeneous-entity-types) ## Eventually consistent transactions pattern+ Enable eventually consistent behavior across partition boundaries or storage system boundaries by using Azure queues. ### Context and problem+ EGTs enable atomic transactions across multiple entities that share the same partition key. For performance and scalability reasons, you might decide to store entities that have consistency requirements in separate partitions or in a separate storage system: in such a scenario, you cannot use EGTs to maintain consistency. For example, you might have a requirement to maintain eventual consistency between: * Entities stored in two different partitions in the same table, in different tables, or in different storage accounts.
EGTs enable atomic transactions across multiple entities that share the same par
By using Azure queues, you can implement a solution that delivers eventual consistency across two or more partitions or storage systems. To illustrate this approach, assume you have a requirement to be able to archive old employee entities. Old employee entities are rarely queried and should be excluded from any activities that deal with current employees. To implement this requirement, you store active employees in the **Current** table and old employees in the **Archive** table. Archiving an employee requires you to delete the entity from the **Current** table and add the entity to the **Archive** table, but you cannot use an EGT to perform these two operations. To avoid the risk that a failure causes an entity to appear in both or neither tables, the archive operation must be eventually consistent. The following sequence diagram outlines the steps in this operation. More detail is provided for exception paths in the text following.
-![Azure queues solution](media/storage-table-design-guide/storage-table-design-IMAGE12.png)
+![Solution diagram for eventual consistency](media/storage-table-design-guide/storage-table-design-IMAGE12.png)
A client initiates the archive operation by placing a message on an Azure queue, in this example to archive employee #456. A worker role polls the queue for new messages; when it finds one, it reads the message and leaves a hidden copy on the queue. The worker role next fetches a copy of the entity from the **Current** table, inserts a copy in the **Archive** table, and then deletes the original from the **Current** table. Finally, if there were no errors from the previous steps, the worker role deletes the hidden message from the queue. In this example, step 4 inserts the employee into the **Archive** table. It could add the employee to a blob in the Blob service or a file in a file system. ### Recovering from failures+ It is important that the operations in steps **4** and **5** are *idempotent* in case the worker role needs to restart the archive operation. If you are using the Table service, for step **4** you should use an "insert or replace" operation; for step **5** you should use a "delete if exists" operation in the client library you are using. If you are using another storage system, you must use an appropriate idempotent operation. If the worker role never completes step **6**, then after a timeout the message reappears on the queue ready for the worker role to try to reprocess it. The worker role can check how many times a message on the queue has been read and, if necessary, flag it as a "poison" message for investigation by sending it to a separate queue. For more information about reading queue messages and checking the dequeue count, see [Get Messages](/rest/api/storageservices/Get-Messages).
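A minimal C# sketch of the idempotent part of this flow (steps **4** and **5**) is shown below. It assumes `Microsoft.Azure.Cosmos.Table` objects named `currentTable` and `archiveTable`, an `EmployeeEntity` type, and illustrative key values; the real worker role would also handle the queue message as described above.

```csharp
// Sketch only: copy the entity to the Archive table, then delete it from the Current table,
// using operations that are safe to repeat if the worker role has to retry.
var retrieved = currentTable.Execute(TableOperation.Retrieve<EmployeeEntity>("Sales", "empid_000456"));
var employee = (EmployeeEntity)retrieved.Result;

if (employee != null)
{
    // Step 4: "insert or replace" is idempotent - repeating it just overwrites the same copy.
    archiveTable.Execute(TableOperation.InsertOrReplace(employee));

    // Step 5: "delete if exists" - the "*" ETag skips the concurrency check, and a 404 simply
    // means an earlier attempt already deleted the entity.
    employee.ETag = "*";
    try
    {
        currentTable.Execute(TableOperation.Delete(employee));
    }
    catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 404)
    {
        // Already deleted by a previous attempt; safe to ignore.
    }
}
```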
If the worker role never completes step **6**, then after a timeout the message
Some errors from the Table and Queue services are transient errors, and your client application should include suitable retry logic to handle them. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * This solution does not provide for transaction isolation. For example, a client could read the **Current** and **Archive** tables when the worker role was between steps **4** and **5**, and see an inconsistent view of the data. The data will be consistent eventually.
Consider the following points when deciding how to implement this pattern:
* You can scale the solution by using multiple queues and worker role instances. ### When to use this pattern+ Use this pattern when you want to guarantee eventual consistency between entities that exist in different partitions or tables. You can extend this pattern to ensure eventual consistency for operations across the Table service and the Blob service and other non-Azure Storage data sources such as a database or the file system. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * Entity Group Transactions
The following patterns and guidance may also be relevant when implementing this
> [!NOTE] > If transaction isolation is important to your solution, you should consider redesigning your tables to enable you to use EGTs.
->
->
+>
+>
## Index entities pattern+ Maintain index entities to enable efficient searches that return lists of entities. ### Context and problem+ The Table service automatically indexes entities using the **PartitionKey** and **RowKey** values. This enables a client application to retrieve an entity efficiently using a point query. For example, using the table structure shown below, a client application can efficiently retrieve an individual employee entity by using the department name and the employee ID (the **PartitionKey** and **RowKey**).
-![Employee entity](media/storage-table-design-guide/storage-table-design-IMAGE13.png)
+![Graphic of employee entity structure where a client application can efficiently retrieve an individual employee entity by using the department name and the employee ID (the PartitionKey and RowKey).](media/storage-table-design-guide/storage-table-design-IMAGE13.png)
If you also want to be able to retrieve a list of employee entities based on the value of another non-unique property, such as their last name, you must use a less efficient partition scan to find matches rather than using an index to look them up directly. This is because the table service does not provide secondary indexes. ### Solution+ To enable lookup by last name with the entity structure shown above, you must maintain lists of employee IDs. If you want to retrieve the employee entities with a particular last name, such as Jones, you must first locate the list of employee IDs for employees with Jones as their last name, and then retrieve those employee entities. There are three main options for storing the lists of employee IDs: * Use blob storage.
For the first option, you create a blob for every unique last name, and in each
For the second option, use index entities that store the following data:
-![Employee index entity](media/storage-table-design-guide/storage-table-design-IMAGE14.png)
+![Graphic showing employee entity, with string containing a list of employee IDs with same last name](media/storage-table-design-guide/storage-table-design-IMAGE14.png)
The **EmployeeIDs** property contains a list of employee IDs for employees with the last name stored in the **RowKey**.
The following steps outline the process you should follow when you need to look
For the third option, use index entities that store the following data:
-![Employee index entity in a separate partition](media/storage-table-design-guide/storage-table-design-IMAGE15.png)
-
-The **EmployeeIDs** property contains a list of employee IDs for employees with the last name stored in the **RowKey**.
+The **EmployeeDetails** property contains a list of employee IDs and department name pairs for employees with the last name stored in the `RowKey`.
With the third option, you cannot use EGTs to maintain consistency because the index entities are in a separate partition from the employee entities. Ensure that the index entities are eventually consistent with the employee entities. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * This solution requires at least two queries to retrieve matching entities: one to query the index entities to obtain the list of **RowKey** values, and then queries to retrieve each entity in the list.
Consider the following points when deciding how to implement this pattern:
* You can implement a queue-based solution that delivers eventual consistency (see the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) for more details). ### When to use this pattern+ Use this pattern when you want to look up a set of entities that all share a common property value, such as all employees with the last name Jones. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Compound key pattern](#compound-key-pattern)
The following patterns and guidance may also be relevant when implementing this
* [Working with heterogeneous entity types](#working-with-heterogeneous-entity-types) ## Denormalization pattern+ Combine related data together in a single entity to enable you to retrieve all the data you need with a single point query. ### Context and problem+ In a relational database, you typically normalize data to remove duplication resulting in queries that retrieve data from multiple tables. If you normalize your data in Azure tables, you must make multiple round trips from the client to the server to retrieve your related data. For example, with the table structure shown below you need two round trips to retrieve the details for a department: one to fetch the department entity that includes the manager's ID, and then another request to fetch the manager's details in an employee entity.
-![Department entity and Employee entity](media/storage-table-design-guide/storage-table-design-IMAGE16.png)
+![Graphic of department entity and employee entity](media/storage-table-design-guide/storage-table-design-IMAGE16.png)
### Solution+ Instead of storing the data in two separate entities, denormalize the data and keep a copy of the manager's details in the department entity. For example:
-![Department entity](media/storage-table-design-guide/storage-table-design-IMAGE17.png)
+![Graphic of denormalized and combined department entity](media/storage-table-design-guide/storage-table-design-IMAGE17.png)
With department entities stored with these properties, you can now retrieve all the details you need about a department using a point query. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * There is some cost overhead associated with storing some data twice. The performance benefit (resulting from fewer requests to the storage service) typically outweighs the marginal increase in storage costs (and this cost is partially offset by a reduction in the number of transactions you require to fetch the details of a department). * You must maintain the consistency of the two entities that store information about managers. You can handle the consistency issue by using EGTs to update multiple entities in a single atomic transaction: in this case, the department entity, and the employee entity for the department manager are stored in the same partition. ### When to use this pattern+ Use this pattern when you frequently need to look up related information. This pattern reduces the number of queries your client must make to retrieve the data it requires. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Compound key pattern](#compound-key-pattern)
The following patterns and guidance may also be relevant when implementing this
* [Working with heterogeneous entity types](#working-with-heterogeneous-entity-types) ## Compound key pattern+ Use compound **RowKey** values to enable a client to look up related data with a single point query. ### Context and problem+ In a relational database, it is natural to use joins in queries to return related pieces of data to the client in a single query. For example, you might use the employee ID to look up a list of related entities that contain performance and review data for that employee. Assume you are storing employee entities in the Table service using the following structure:
-![Screenshot that shows how you can store employee entities in the Table service.](media/storage-table-design-guide/storage-table-design-IMAGE18.png)
+![Graphic of employee entity structure you should use to store employee entities in Table storage.](media/storage-table-design-guide/storage-table-design-IMAGE18.png)
You also need to store historical data relating to reviews and performance for each year the employee has worked for your organization and you need to be able to access this information by year. One option is to create another table that stores entities with the following structure:
-![Alternative employee entity structure](media/storage-table-design-guide/storage-table-design-IMAGE19.png)
+![Graphic of employee review entity](media/storage-table-design-guide/storage-table-design-IMAGE19.png)
Notice that with this approach you may decide to duplicate some information (such as first name and last name) in the new entity to enable you to retrieve your data with a single request. However, you cannot maintain strong consistency because you cannot use an EGT to update the two entities atomically. ### Solution+ Store a new entity type in your original table using entities with the following structure:
-![Solution for employee entity structure](media/storage-table-design-guide/storage-table-design-IMAGE20.png)
+![Graphic of employee entity with compound key](media/storage-table-design-guide/storage-table-design-IMAGE20.png)
Notice how the **RowKey** is now a compound key made up of the employee ID and the year of the review data that enables you to retrieve the employee's performance and review data with a single request for a single entity.
The following example outlines how you can retrieve all the review data for a pa
$filter=(PartitionKey eq 'Sales') and (RowKey ge '000123') and (RowKey lt '000124')&$select=RowKey,Manager Rating,Peer Rating,Comments ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * You should use a suitable separator character that makes it easy to parse the **RowKey** value: for example, **000123_2012**.
Consider the following points when deciding how to implement this pattern:
* You should consider how frequently you will query the data to determine whether this pattern is appropriate. For example, if you will access the review data infrequently and the main employee data often you should keep them as separate entities. ### When to use this pattern+ Use this pattern when you need to store one or more related entities that you query frequently. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * Entity Group Transactions
The following patterns and guidance may also be relevant when implementing this
* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) ## Log tail pattern+ Retrieve the *n* entities most recently added to a partition by using a **RowKey** value that sorts in reverse date and time order. ### Context and problem+ A common requirement is to be able to retrieve the most recently created entities, for example the 10 most recent expense claims submitted by an employee. Table queries support a **$top** query operation to return the first *n* entities from a set: there is no equivalent query operation to return the last n entities in a set. ### Solution+ Store the entities using a **RowKey** that naturally sorts in reverse date/time order so that the most recent entry is always the first one in the table. For example, to be able to retrieve the 10 most recent expense claims submitted by an employee, you can use a reverse tick value derived from the current date/time. The following C# code sample shows one way to create a suitable "inverted ticks" value for a **RowKey** that sorts from the most recent to the oldest:
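A minimal sketch of one such calculation (the padding width is an assumption chosen to hold the full tick range):

```csharp
// Subtract the current ticks from the maximum tick value so newer timestamps produce smaller
// values, then left-pad to a fixed width so string ordering matches numeric ordering.
string invertedTicks = string.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);
```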
The table query looks like this:
`https://myaccount.table.core.windows.net/EmployeeExpense(PartitionKey='empid')?$top=10` ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * You must pad the reverse tick value with leading zeroes to ensure the string value sorts as expected. * You must be aware of the scalability targets at the level of a partition. Be careful not to create hot spot partitions. ### When to use this pattern+ Use this pattern when you need to access entities in reverse date/time order or when you need to access the most recently added entities. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Prepend / append anti-pattern](#prepend-append-anti-pattern) * [Retrieving entities](#retrieving-entities) ## High volume delete pattern+ Enable the deletion of a high volume of entities by storing all the entities for simultaneous deletion in their own separate table; you delete the entities by deleting the table. ### Context and problem+ Many applications delete old data that no longer needs to be available to a client application, or that the application has archived to another storage medium. You typically identify such data by a date: for example, you have a requirement to delete records of all login requests that are more than 60 days old. One possible design is to use the date and time of the login request in the **RowKey**:
-![Date and time of login attempt](media/storage-table-design-guide/storage-table-design-IMAGE21.png)
+![Graphic of login attempt entity](media/storage-table-design-guide/storage-table-design-IMAGE21.png)
This approach avoids partition hotspots because the application can insert and delete login entities for each user in a separate partition. However, this approach may be costly and time consuming if you have a large number of entities because first you need to perform a table scan in order to identify all the entities to delete, and then you must delete each old entity. You can reduce the number of round trips to the server required to delete the old entities by batching multiple delete requests into EGTs. ### Solution+ Use a separate table for each day of login attempts. You can use the entity design above to avoid hotspots when you are inserting entities, and deleting old entities is now simply a question of deleting one table every day (a single storage operation) instead of finding and deleting hundreds and thousands of individual login entities every day. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * Does your design support other ways your application will use the data such as looking up specific entities, linking with other data, or generating aggregate information?
Consider the following points when deciding how to implement this pattern:
* Expect some throttling when you first use a new table while the Table service learns the access patterns and distributes the partitions across nodes. You should consider how frequently you need to create new tables. ### When to use this pattern+ Use this pattern when you have a high volume of entities that you must delete at the same time. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * Entity Group Transactions * [Modifying entities](#modifying-entities) ## Data series pattern+ Store complete data series in a single entity to minimize the number of requests you make. ### Context and problem+ A common scenario is for an application to store a series of data that it typically needs to retrieve all at once. For example, your application might record how many IM messages each employee sends every hour, and then use this information to plot how many messages each user sent over the preceding 24 hours. One design might be to store 24 entities for each employee:
-![Store 24 entities for each employee](media/storage-table-design-guide/storage-table-design-IMAGE22.png)
+![Graphic of message stats entity](media/storage-table-design-guide/storage-table-design-IMAGE22.png)
With this design, you can easily locate and update the entity to update for each employee whenever the application needs to update the message count value. However, to retrieve the information to plot a chart of the activity for the preceding 24 hours, you must retrieve 24 entities. ### Solution+ Use the following design with a separate property to store the message count for each hour:
-![Message stats entity](media/storage-table-design-guide/storage-table-design-IMAGE23.png)
+![Graphic showing message stats entity with separated properties](media/storage-table-design-guide/storage-table-design-IMAGE23.png)
With this design, you can use a merge operation to update the message count for an employee for a specific hour. Now, you can retrieve all the information you need to plot the chart using a request for a single entity. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * If your complete data series does not fit into a single entity (an entity can have up to 252 properties), use an alternative data store such as a blob. * If you have multiple clients updating an entity simultaneously, you will need to use the **ETag** to implement optimistic concurrency. If you have many clients, you may experience high contention. ### When to use this pattern+ Use this pattern when you need to update and retrieve a data series associated with an individual entity. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Large entities pattern](#large-entities-pattern)
The following patterns and guidance may also be relevant when implementing this
Use multiple physical entities to store logical entities with more than 252 properties. ### Context and problem+ An individual entity can have no more than 252 properties (excluding the mandatory system properties) and cannot store more than 1 MB of data in total. In a relational database, you would typically get round any limits on the size of a row by adding a new table and enforcing a 1-to-1 relationship between them. ### Solution+ Using the Table service, you can store multiple entities to represent a single large business object with more than 252 properties. For example, if you want to store a count of the number of IM messages sent by each employee for the last 365 days, you could use the following design that uses two entities with different schemas:
-![Multiple entities](media/storage-table-design-guide/storage-table-design-IMAGE24.png)
+![Graphic showing message stats entity with Rowkey 01 and message stats entity with Rowkey 02](media/storage-table-design-guide/storage-table-design-IMAGE24.png)
If you need to make a change that requires updating both entities to keep them synchronized with each other, you can use an EGT. Otherwise, you can use a single merge operation to update the message count for a specific day. To retrieve all the data for an individual employee you must retrieve both entities, which you can do with two efficient requests that use both a **PartitionKey** and a **RowKey** value. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * Retrieving a complete logical entity involves at least two storage transactions: one to retrieve each physical entity. ### When to use this pattern+ Use this pattern when you need to store entities whose size or number of properties exceeds the limits for an individual entity in the Table service. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * Entity Group Transactions * [Merge or replace](#merge-or-replace) ## Large entities pattern+ Use blob storage to store large property values. ### Context and problem+ An individual entity cannot store more than 1 MB of data in total. If one or several of your properties store values that cause the total size of your entity to exceed this value, you cannot store the entire entity in the Table service. ### Solution+ If your entity exceeds 1 MB in size because one or more properties contain a large amount of data, you can store data in the Blob service and then store the address of the blob in a property in the entity. For example, you can store the photo of an employee in blob storage and store a link to the photo in the **Photo** property of your employee entity:
-![Photo property](media/storage-table-design-guide/storage-table-design-IMAGE25.png)
+![Graphic showing employee entity with string for Photo pointing to Blob storage](media/storage-table-design-guide/storage-table-design-IMAGE25.png)
### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * To maintain eventual consistency between the entity in the Table service and the data in the Blob service, use the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) to maintain your entities. * Retrieving a complete entity involves at least two storage transactions: one to retrieve the entity and one to retrieve the blob data. ### When to use this pattern+ Use this pattern when you need to store entities whose size exceeds the limits for an individual entity in the Table service. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
The following patterns and guidance may also be relevant when implementing this
<a name="prepend-append-anti-pattern"></a> ## Prepend/append anti-pattern+ Increase scalability when you have a high volume of inserts by spreading the inserts across multiple partitions. ### Context and problem+ Prepending or appending entities to your stored entities typically results in the application adding new entities to the first or last partition of a sequence of partitions. In this case, all of the inserts at any given time are taking place in the same partition, creating a hotspot that prevents the table service from load-balancing inserts across multiple nodes, and possibly causing your application to hit the scalability targets for partition. For example, if you have an application that logs network and resource access by employees, then an entity structure as shown below could result in the current hour's partition becoming a hotspot if the volume of transactions reaches the scalability target for an individual partition: ![Entity structure](media/storage-table-design-guide/storage-table-design-IMAGE26.png) ### Solution+ The following alternative entity structure avoids a hotspot on any particular partition as the application logs events: ![Alternative entity structure](media/storage-table-design-guide/storage-table-design-IMAGE27.png)
The following alternative entity structure avoids a hotspot on any particular pa
Notice with this example how both the **PartitionKey** and **RowKey** are compound keys. The **PartitionKey** uses both the department and employee ID to distribute the logging across multiple partitions. ### Issues and considerations+ Consider the following points when deciding how to implement this pattern: * Does the alternative key structure that avoids creating hot partitions on inserts efficiently support the queries your client application makes? * Does your anticipated volume of transactions mean that you are likely to reach the scalability targets for an individual partition and be throttled by the storage service? ### When to use this pattern+ Avoid the prepend/append anti-pattern when your volume of transactions is likely to result in throttling by the storage service when you access a hot partition. ### Related patterns and guidance+ The following patterns and guidance may also be relevant when implementing this pattern: * [Compound key pattern](#compound-key-pattern)
The following patterns and guidance may also be relevant when implementing this
* [Modifying entities](#modifying-entities) ## Log data anti-pattern+ Typically, you should use the Blob service instead of the Table service to store log data. ### Context and problem+ A common use case for log data is to retrieve a selection of log entries for a specific date/time range: for example, you want to find all the error and critical messages that your application logged between 15:04 and 15:06 on a specific date. You do not want to use the date and time of the log message to determine the partition you save log entities to: that results in a hot partition because at any given time, all the log entities will share the same **PartitionKey** value (see the section [Prepend/append anti-pattern](#prepend-append-anti-pattern)). For example, the following entity schema for a log message results in a hot partition because the application writes all log messages to the partition for the current date and hour: ![Log message entity](media/storage-table-design-guide/storage-table-design-IMAGE28.png)
Another approach is to use a **PartitionKey** that ensures that the application
However, the problem with this schema is that to retrieve all the log messages for a specific time span you must search every partition in the table. ### Solution+ The previous section highlighted the problem of trying to use the Table service to store log entries and suggested two unsatisfactory designs. One solution led to a hot partition with the risk of poor performance writing log messages; the other solution resulted in poor query performance because of the requirement to scan every partition in the table to retrieve log messages for a specific time span. Blob storage offers a better solution for this type of scenario, and this is how Azure Storage Analytics stores the log data it collects. This section outlines how Storage Analytics stores log data in blob storage as an illustration of this approach to storing data that you typically query by range.
Storage Analytics buffers log messages internally and then periodically updates
If you are implementing a similar solution in your own application, you must consider how to manage the trade-off between reliability (writing every log entry to blob storage as it happens) and cost and scalability (buffering updates in your application and writing them to blob storage in batches). ### Issues and considerations+ Consider the following points when deciding how to store log data: * If you create a table design that avoids potential hot partitions, you may find that you cannot access your log data efficiently.
Consider the following points when deciding how to store log data:
* Although log data is often structured, blob storage may be a better solution. ## Implementation considerations+ This section discusses some of the considerations to bear in mind when you implement the patterns described in the previous sections. Most of this section uses examples written in C# that use the Storage Client Library (version 4.3.0 at the time of writing). ## Retrieving entities+ As discussed in the section Design for querying, the most efficient query is a point query. However, in some scenarios you may need to retrieve multiple entities. This section describes some common approaches to retrieving entities using the Storage Client Library. ### Executing a point query using the Storage Client Library+ The easiest way to execute a point query is to use the **Retrieve** table operation as shown in the following C# code snippet that retrieves an entity with a **PartitionKey** of value "Sales" and a **RowKey** of value "212": ```csharp
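// The retrieval lines below are a sketch of the elided part of this sample; the entity type,
// table object, and key values ("Sales", "212") are the ones named in the surrounding text.
TableOperation retrieveOperation = TableOperation.Retrieve<EmployeeEntity>("Sales", "212");
TableResult retrieveResult = employeeTable.Execute(retrieveOperation);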
if (retrieveResult.Result != null)
Notice how this example expects the entity it retrieves to be of type **EmployeeEntity**. ### Retrieving multiple entities using LINQ
-You can use LINQ to retrieve multiple entities from the Table service when working with Microsoft Azure Cosmos Table Standard Library.
+
+You can use LINQ to retrieve multiple entities from the Table service when working with the Microsoft Azure Cosmos Table Standard Library.
```dotnetcli dotnet add package Microsoft.Azure.Cosmos.Table
using Microsoft.Azure.Cosmos.Table.Queryable;
The employeeTable is a CloudTable object that implements a CreateQuery\<ITableEntity>() method, which returns a TableQuery\<ITableEntity>. Objects of this type implement an IQueryable and allow using both LINQ Query Expressions and dot notation syntax.
-Retrieving multiple entities and be achieved by specifying a query with a **where** clause. To avoid a table scan, you should always include the **PartitionKey** value in the where clause, and if possible the **RowKey** value to avoid table and partition scans. The table service supports a limited set of comparison operators (greater than, greater than or equal, less than, less than or equal, equal, and not equal) to use in the where clause.
+Retrieving multiple entities can be achieved by specifying a query with a **where** clause. To avoid a table scan, you should always include the **PartitionKey** value in the where clause, and if possible the **RowKey** value to avoid table and partition scans. The table service supports a limited set of comparison operators (greater than, greater than or equal, less than, less than or equal, equal, and not equal) to use in the where clause.
The following C# code snippet finds all the employees whose last name starts with "B" (assuming that the **RowKey** stores the last name) in the sales department (assuming the **PartitionKey** stores the department name):
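// Sketch of one way to build the query with the fluent TableQuery API, nesting CombineFilters
// for the three conditions as the note below describes; names and values are illustrative.
TableQuery<EmployeeEntity> employeeQuery = new TableQuery<EmployeeEntity>().Where(
    TableQuery.CombineFilters(
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Sales"),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "B")),
        TableOperators.And,
        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "C")));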
var employees = employeeTable.ExecuteQuery(employeeQuery);
> [!NOTE] > The sample nests multiple **CombineFilters** methods to include the three filter conditions.
->
->
+>
+>
### Retrieving large numbers of entities from a query+ An optimal query returns an individual entity based on a **PartitionKey** value and a **RowKey** value. However, in some scenarios you may have a requirement to return many entities from the same partition or even from many partitions. You should always fully test the performance of your application in such scenarios.
By using continuation tokens explicitly, you can control when your application r
> [!NOTE] > A continuation token typically returns a segment containing 1,000 entities, although it may be fewer. This is also the case if you limit the number of entries a query returns by using **Take** to return the first n entities that match your lookup criteria: the table service may return a segment containing fewer than n entities along with a continuation token to enable you to retrieve the remaining entities.
->
->
+>
+>
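As a rough sketch (assuming a `CloudTable` named `employeeTable` and a `TableQuery<EmployeeEntity>` named `employeeQuery`), explicit segmented retrieval looks something like this:

```csharp
// Retrieve results one segment at a time; each loop iteration is one round trip to the service.
TableContinuationToken continuationToken = null;
do
{
    TableQuerySegment<EmployeeEntity> segment =
        employeeTable.ExecuteQuerySegmented(employeeQuery, continuationToken);

    foreach (EmployeeEntity employee in segment.Results)
    {
        Console.WriteLine("{0} {1}", employee.PartitionKey, employee.RowKey);
    }

    // A null continuation token means there are no more segments to fetch.
    continuationToken = segment.ContinuationToken;
} while (continuationToken != null);
```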
The following C# code shows how to modify the number of entities returned inside a segment:
employeeQuery.TakeCount = 50;
``` ### Server-side projection+ A single entity can have up to 255 properties and be up to 1 MB in size. When you query the table and retrieve entities, you may not need all the properties and can avoid transferring data unnecessarily (to help reduce latency and cost). You can use server-side projection to transfer just the properties you need. The following example retrieves just the **Email** property (along with **PartitionKey**, **RowKey**, **Timestamp**, and **ETag**) from the entities selected by the query. ```csharp
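// Sketch of the elided query set-up: project just the Email property server side.
// The filter value and variable names are illustrative assumptions.
List<string> columns = new List<string>() { "Email" };
TableQuery<EmployeeEntity> employeeQuery = new TableQuery<EmployeeEntity>()
    .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Sales"))
    .Select(columns);
var entities = employeeTable.ExecuteQuery(employeeQuery);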
foreach (var e in entities)
Notice how the **RowKey** value is available even though it was not included in the list of properties to retrieve. ## Modifying entities+ The Storage Client Library enables you to modify your entities stored in the table service by inserting, deleting, and updating entities. You can use EGTs to batch multiple insert, update, and delete operations together to reduce the number of round trips required and improve the performance of your solution. Exceptions thrown when the Storage Client Library executes an EGT typically include the index of the entity that caused the batch to fail. This is helpful when you are debugging code that uses EGTs.
Exceptions thrown when the Storage Client Library executes an EGT typically incl
You should also consider how your design affects how your client application handles concurrency and update operations. ### Managing concurrency+ By default, the table service implements optimistic concurrency checks at the level of individual entities for **Insert**, **Merge**, and **Delete** operations, although it is possible for a client to force the table service to bypass these checks. For more information about how the table service manages concurrency, see [Managing Concurrency in Microsoft Azure Storage](../blobs/concurrency-manage.md). ### Merge or replace+ The **Replace** method of the **TableOperation** class always replaces the complete entity in the Table service. If you do not include a property in the request when that property exists in the stored entity, the request removes that property from the stored entity. Unless you want to remove a property explicitly from a stored entity, you must include every property in the request. You can use the **Merge** method of the **TableOperation** class to reduce the amount of data that you send to the Table service when you want to update an entity. The **Merge** method replaces any properties in the stored entity with property values from the entity included in the request, but leaves intact any properties in the stored entity that are not included in the request. This is useful if you have large entities and only need to update a small number of properties in a request. > [!NOTE] > The **Replace** and **Merge** methods fail if the entity does not exist. As an alternative, you can use the **InsertOrReplace** and **InsertOrMerge** methods that create a new entity if it doesn't exist.
->
->
+>
+>
## Working with heterogeneous entity types+ The Table service is a *schema-less* table store, which means that a single table can store entities of multiple types, providing great flexibility in your design. The following example illustrates a table storing both employee and department entities: <table>
The techniques discussed in this section are especially relevant to the discussi
The remainder of this section describes some of the features in the Storage Client Library that facilitate working with multiple entity types in the same table. ### Retrieving heterogeneous entity types+ If you are using the Storage Client Library, you have three options for working with multiple entity types. If you know the type of the entity stored with a specific **RowKey** and **PartitionKey** values, then you can specify the entity type when you retrieve the entity as shown in the previous two examples that retrieve entities of type **EmployeeEntity**: [Executing a point query using the Storage Client Library](#executing-a-point-query-using-the-storage-client-library) and [Retrieving multiple entities using LINQ](#retrieving-multiple-entities-using-linq).
foreach (var e in entities)
``` ### Modifying heterogeneous entity types+ You do not need to know the type of an entity to delete it, and you always know the type of an entity when you insert it. However, you can use the **DynamicTableEntity** type to update an entity without knowing its type and without using a POCO entity class. The following code sample retrieves a single entity, and checks that the **EmployeeCount** property exists before updating it. ```csharp
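// Sketch of the elided retrieval: fetch the entity as a DynamicTableEntity and confirm that the
// EmployeeCount property exists before changing it. Key values here are illustrative assumptions.
TableResult result = employeeTable.Execute(TableOperation.Retrieve("Departments", "Sales"));
DynamicTableEntity department = (DynamicTableEntity)result.Result;

if (!department.Properties.ContainsKey("EmployeeCount"))
{
    throw new InvalidOperationException("Invalid entity, EmployeeCount property not found.");
}
department.Properties["EmployeeCount"].Int32Value += 1;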
employeeTable.Execute(TableOperation.Merge(department));
``` ## Controlling access with Shared Access Signatures+ You can use Shared Access Signature (SAS) tokens to enable client applications to modify (and query) table entities without the need to include your storage account key in your code. Typically, there are three main benefits to using SAS in your application: * You do not need to distribute your storage account key to an insecure platform (such as a mobile device) in order to allow that device to access and modify entities in the Table service.
However, you must still generate the SAS tokens that grant a client application
It is possible to generate a SAS token that grants access to a subset of the entities in a table. By default, you create a SAS token for an entire table, but it is also possible to specify that the SAS token grant access to either a range of **PartitionKey** values, or a range of **PartitionKey** and **RowKey** values. You might choose to generate SAS tokens for individual users of your system such that each user's SAS token only allows them access to their own entities in the table service. ## Asynchronous and parallel operations+ Provided you are spreading your requests across multiple partitions, you can improve throughput and client responsiveness by using asynchronous or parallel queries. For example, you might have two or more worker role instances accessing your tables in parallel. You could have individual worker roles responsible for particular sets of partitions, or simply have multiple worker role instances, each able to access all the partitions in a table.
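As a rough illustration (the entity type and variable names are assumptions, not part of the original samples), the asynchronous segmented APIs let each worker process a partition without blocking a thread while waiting on the Table service:

```csharp
// Sketch: enumerate one partition asynchronously, one segment per round trip.
public static async Task DumpPartitionAsync(CloudTable employeeTable, string department)
{
    var query = new TableQuery<EmployeeEntity>().Where(
        TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, department));

    TableContinuationToken token = null;
    do
    {
        TableQuerySegment<EmployeeEntity> segment =
            await employeeTable.ExecuteQuerySegmentedAsync(query, token);

        foreach (EmployeeEntity employee in segment.Results)
        {
            Console.WriteLine("{0} {1}", employee.PartitionKey, employee.RowKey);
        }

        token = segment.ContinuationToken;
    } while (token != null);
}
```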
stream-analytics Stream Analytics Troubleshoot Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-troubleshoot-output.md
When you configure an Azure SQL database as output to a Stream Analytics job, it
When you set up unique key constraints on the SQL table, Azure Stream Analytics removes duplicate records. It splits the data into batches and recursively inserts the batches until a single duplicate record is found. The split and insert process ignores the duplicates one at a time. For a streaming job that has many duplicate rows, the process is inefficient and time-consuming. If you see multiple key violation warning messages in your Activity log for the previous hour, it's likely that your SQL output is slowing down the entire job.
-To resolve this issue, [configure the index]( https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql) that's causing the key violation by enabling the IGNORE_DUP_KEY option. This option allows SQL to ignore duplicate values during bulk inserts. Azure SQL Database simply produces a warning message instead of an error. As a result, Azure Stream Analytics no longer produces primary key violation errors.
+To resolve this issue, [configure the index](/sql/t-sql/statements/create-index-transact-sql) that's causing the key violation by enabling the IGNORE_DUP_KEY option. This option allows SQL to ignore duplicate values during bulk inserts. Azure SQL Database simply produces a warning message instead of an error. As a result, Azure Stream Analytics no longer produces primary key violation errors.
Note the following observations when configuring IGNORE_DUP_KEY for several types of indexes:
stream-analytics Stream Analytics Use Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-use-reference-data.md
Previously updated : 12/18/2020 Last updated : 06/25/2021 # Using reference data for lookups in Stream Analytics
It is recommended to use reference datasets which are less than 300 MB for best
|3 |150 MB or lower | |6 and beyond |5 GB or lower. |
-Support for compression is not available for reference data.
+Support for compression is not available for reference data. For reference datasets larger than 300 MB, it is recommended to use Azure SQL Database as the source with the [delta query](sql-reference-data.md#delta-query) option for optimal performance. If delta query is not used in such scenarios, you will see spikes in the watermark delay metric every time the reference dataset is refreshed.
## Joining multiple reference datasets in a job You can join only one stream input with one reference data input in a single step of your query. However, you can join multiple reference datasets by breaking down your query into multiple steps. An example is shown below.
stream-analytics Vs Code Intellisense https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/vs-code-intellisense.md
# IntelliSense in Azure Stream Analytics tools for Visual Studio Code
-IntelliSense is available for [Stream Analytics Query Language](/stream-analytics-query/stream-analytics-query-language-reference?bc=https%253a%2f%2fdocs.microsoft.com%2fazure%2fbread%2ftoc.json&toc=https%253a%2f%2fdocs.microsoft.com%2fazure%2fstream-analytics%2ftoc.json) in [Azure Stream Analytics tools for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-bigdatatools.vscode-asa&ssr=false#overview). IntelliSense is a code-completion aid that includes a number of features: List Members, Parameter Info, Quick Info, and Complete Word. IntelliSense features are sometimes called by other names such as "code completion", "content assist", and "code hinting".
+IntelliSense is available for [Stream Analytics Query Language](/stream-analytics-query/stream-analytics-query-language-reference?toc=/azure/stream-analytics/toc.json) in [Azure Stream Analytics tools for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-bigdatatools.vscode-asa&ssr=false#overview). IntelliSense is a code-completion aid that includes a number of features: List Members, Parameter Info, Quick Info, and Complete Word. IntelliSense features are sometimes called by other names such as "code completion", "content assist", and "code hinting".
![IntelliSense demo](./media/vs-code-intellisense/intellisense.gif)
synapse-analytics Sql Pool Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/data-integration/sql-pool-stored-procedure-activity.md
Title: Transform data by using the SQL Pool Stored Procedure activity
-description: Explains how to use SQL Pool Stored Procedure Activity to invoke a stored procedure in Azure Synapse Analytics.
+ Title: Transform data by using the SQL pool stored procedure activity
+description: Explains how to use SQL pool stored procedure activity to invoke a stored procedure in Azure Synapse Analytics.
Last updated 05/13/2021
-# Transform data by using SQL Pool Stored Procedure activity in Azure Synapse Analytics
+# Transform data by using SQL pool stored procedure activity in Azure Synapse Analytics
<Token>**APPLIES TO:** ![not supported](../media/applies-to/no.png)Azure Data Factory ![supported](../media/applies-to/yes.png)Azure Synapse Analytics </Token> You use data transformation activities in a [pipeline](../../data-factory/concepts-pipelines-activities.md) to transform and process raw data into predictions and insights. This article builds on the [transform data](../../data-factory/transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
-In Azure Synapse Analytics, you can use the SQL Pool Stored Procedure Activity to invoke a stored procedure in a dedicated SQL pool.
+In Azure Synapse Analytics, you can use the SQL pool stored procedure activity to invoke a stored procedure in a dedicated SQL pool.
## Syntax details
-The following settings are supported in SQL Pool Stored Procedure activity:
+The following settings are supported in SQL pool stored procedure activity:
| Property | Description | Required | | - | - | -- | | name | Name of the activity | Yes | | description | Text describing what the activity is used for | No |
-| type | For SQL Pool Stored Procedure Activity, the activity type is **SqlPoolStoredProcedure** | Yes |
-| sqlPool | Reference to a [dedicated SQL pool](../sql-data-warehouse/sql-data-warehouse-overview-what-is.md) in the current Azure Synapse workspace. | Yes |
+| type | For SQL pool stored procedure activity, the activity type is **SqlPoolStoredProcedure** | Yes |
+| sqlPool | Reference to a [dedicated SQL pool](../sql/overview-architecture.md) in the current Azure Synapse workspace. | Yes |
| storedProcedureName | Specify the name of the stored procedure to invoke. | Yes | | storedProcedureParameters | Specify the values for stored procedure parameters. Use `"param1": { "value": "param1Value","type":"param1Type" }` to pass parameter values and their type supported by the data source. If you need to pass null for a parameter, use `"param1": { "value": null }` (all lower case). | No |
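Assembling those properties into a pipeline activity, a hypothetical definition might look like the sketch below. The activity name, pool name, procedure name, and parameter values are placeholders; treat the JSON that Synapse Studio generates for your own activity as the authoritative shape.

```json
{
    "name": "RunStoredProcedure",
    "description": "Invokes a stored procedure in a dedicated SQL pool",
    "type": "SqlPoolStoredProcedure",
    "sqlPool": {
        "referenceName": "mydedicatedsqlpool",
        "type": "SqlPoolReference"
    },
    "typeProperties": {
        "storedProcedureName": "usp_refresh_aggregates",
        "storedProcedureParameters": {
            "region": { "value": "West US", "type": "String" },
            "batchId": { "value": null }
        }
    }
}
```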
synapse-analytics Apache Spark Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/monitoring/apache-spark-applications.md
Last updated 04/15/2020 + # Use Synapse Studio to monitor your Apache Spark applications
With Azure Synapse Analytics, you can use Apache Spark to run notebooks, jobs, a
This article explains how to monitor your Apache Spark applications, allowing you to keep an eye on the latest status, issues, and progress.
-This tutorial covers the following tasks:
-
-* Monitor running Apache Spark Application
-* View completed Apache Spark Application
-* View canceled Apache Spark Application
-* Debug failed Apache Spark Application
-* View input data and output data for Apache Spark Application
-* Compare Apache Spark Applications
-
-## Prerequisites
-
-Before you start with this tutorial, make sure to meet the following requirements:
--- A Synapse Studio workspace. For instructions, see [Create a Synapse Studio workspace](../../machine-learning/how-to-manage-workspace.md#create-a-workspace).--- An Apache Spark pool.- ## View Apache Spark applications You can view all Apache Spark applications from **Monitor** -> **Apache Spark applications**. ![apache spark applications](./media/how-to-monitor-spark-applications/apache-spark-applications.png)
Select an Apache Spark application, and click on **Input data/Output data tab**
There are two ways to compare applications. You can compare by choosing **Compare applications**, or select the **Compare in notebook** button to view the comparison in the notebook.
-### Compare by choose an application
+### Compare by choosing an application
Click the **Compare applications** button and choose an application to compare performance. You can intuitively see the difference between the two applications.
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
In this quickstart, we use the workspace named "sampletest" as an example. It wi
A pipeline contains the logical flow for an execution of a set of activities. In this section, you'll create a pipeline that contains an Apache Spark job definition activity.
-1. Go to the **Integrate** tab. Select on the plus icon next to the pipelines header and select Pipeline.
+1. Go to the **Integrate** tab. Select the plus icon next to the pipelines header and select **Pipeline**.
![Create a new pipeline](media/doc-common-process/new-pipeline.png) 2. In the **Properties** settings page of the pipeline, enter **demo** for **Name**.
-3. Under *Synapse* in the *Activities* pane, drag **Spark job definition** onto the pipeline canvas.
+3. Under **Synapse** in the **Activities** pane, drag **Spark job definition** onto the pipeline canvas.
![drag spark job definition](media/quickstart-transform-data-using-spark-job-definition/drag-spark-job-definition.png)
Once you create your Apache Spark job definition, you'll be automatically sent t
### General settings
-1. Select the spark job definition module in the canvas.
+1. Select the Spark job definition module on the canvas.
-2. In General tab, enter **sample** for **Name**.
+2. In the **General** tab, enter **sample** for **Name**.
3. (Optional) You can also enter a description.
Once you create your Apache Spark job definition, you'll be automatically sent t
6. Retry interval: The number of seconds between each retry attempt.
-7. Secure output: When checked, output from the activity will not be captured in logging.
+7. Secure output: When checked, output from the activity won't be captured in logging.
-8. Secure input: When checked, input from the activity will not be captured in logging.
+8. Secure input: When checked, input from the activity won't be captured in logging.
![spark job definition general](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-general.png)
Once you create your Apache Spark job definition, you'll be automatically sent t
On this panel, you can reference the Spark job definition to run.
-* Expand the Spark job definition list, you can choose an existing Apache Spark job definition. You can also create a new Apache Spark job definition by clicking the new button to reference the Spark job definition to be run.
+* Expand the Spark job definition list to choose an existing Apache Spark job definition. You can also create a new Apache Spark job definition by selecting the **New** button to reference the Spark job definition to be run.
-* You can add command-line arguments by clicking the **New** button. It should be noted that this will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br>
+* You can add command-line arguments by selecting the **New** button. Note that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br>
![spark job definition pipline settings](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-pipline-settings.png)
Advance to the following articles to learn about Azure Synapse Analytics support
> [!div class="nextstepaction"] > [Pipeline and activities](../data-factory/concepts-pipelines-activities.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json) > [Mapping data flow overview](../data-factory/concepts-data-flow-overview.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
-> [Data flow expression language](../data-factory/data-flow-expression-functions.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
+> [Data flow expression language](../data-factory/data-flow-expression-functions.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json)
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-openrowset.md
ROWTERMINATOR = 'row_terminator'`
Specifies the row terminator to be used. If a row terminator is not specified, one of the default terminators will be used. Default terminators for PARSER_VERSION = '1.0' are \r\n, \n, and \r. Default terminators for PARSER_VERSION = '2.0' are \r\n and \n.
-ESCAPECHAR = 'char'
+> [!NOTE]
+> When you use PARSER_VERSION='1.0' and specify \n (newline) as the row terminator, it will be automatically prefixed with a \r (carriage return) character, which results in a row terminator of \r\n.
+
+ESCAPE_CHAR = 'char'
Specifies the character in the file that is used to escape itself and all delimiter values in the file. If the escape character is followed by a value other than itself, or any of the delimiter values, the escape character is dropped when reading the value.
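To show how these terminator and parser options fit into a full query, here's a minimal OPENROWSET sketch; the storage path and the column list in the WITH clause are placeholders.

```sql
SELECT *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '1.0',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'  -- with parser version 1.0 this is treated as \r\n (see the note above)
) WITH (
    id INT,
    name VARCHAR(100)
) AS rows;
```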
CSV parser version 1.0 is default and feature rich. Version 2.0 is built for per
CSV parser version 1.0 specifics: - Following options aren't supported: HEADER_ROW.
+- Default terminators are \r\n, \n and \r.
+- If you specify \n (newline) as the row terminator, it will be automatically prefixed with a \r (carriage return) character, which results in a row terminator of \r\n.
CSV parser version 2.0 specifics:
CSV parser version 2.0 specifics:
- Supported format for DATE data type: YYYY-MM-DD - Supported format for TIME data type: HH:MM:SS[.fractional seconds] - Supported format for DATETIME2 data type: YYYY-MM-DD HH:MM:SS[.fractional seconds]
+- Default terminators are \r\n and \n.
HEADER_ROW = { TRUE | FALSE }
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are some general system constraints that may affect your workload:
| Max number of Synapse workspaces per subscription | 20 | | Max number of databases per serverless pool | 20 (not including databases synchronized from Apache Spark pool) | | Max number of databases synchronized from Apache Spark pool | Not limited |
-| Max number of databases objects per database | The sum of the number of all objects in a database cannot exceed 2,147,483,647 (see [limitations in SQL Server database engine](https://docs.microsoft.com/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects) ) |
-| Max identifier length (in characters) | 128 (see [limitations in SQL Server database engine](https://docs.microsoft.com/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects) )|
+| Max number of database objects per database | The sum of the number of all objects in a database cannot exceed 2,147,483,647 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
+| Max identifier length (in characters) | 128 (see [limitations in SQL Server database engine](/sql/sql-server/maximum-capacity-specifications-for-sql-server#objects)) |
| Max query duration | 30 min | | Max size of the result set | 80 GB (shared between all currently executing concurrent queries) | | Max concurrency | Not limited and depends on the query complexity and amount of data scanned. One serverless SQL pool can concurrently handle 1000 active sessions that are executing lightweight queries, but the numbers will drop if the queries are more complex or scan a larger amount of data. |
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/synapse-notebook-activity.md
Title: Transform data by running a Synapse notebook
-description: In this article, you learn how to create and develop Synapse notebook activity a Synapse pipeline.
+description: In this article, you learn how to create and develop a Synapse notebook activity and a Synapse pipeline.
You can create a Synapse notebook activity directly from the Synapse pipeline ca
### Add a Synapse notebook activity from pipeline canvas
-Drag and drop **Synapse notebook** under **Activities** to the Synapse pipeline canvas. Select on the Synapse notebook activity box and config the notebook content for current activity in the **settings**. You can select an existing notebook from current workspace or add a new one.
+Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipeline canvas. Select the Synapse notebook activity box and configure the notebook content for the current activity in the **settings**. You can select an existing notebook from the current workspace or add a new one.
![screenshot-showing-create-notebook-activity](./media/synapse-notebook-activity/create-synapse-notebook-activity.png)
Select the **Add to pipeline** button on the upper right corner to add a noteboo
### Designate a parameters cell
-# [Classical notebook](#tab/classical)
+# [Classic notebook](#tab/classical)
To parameterize your notebook, select the ellipses (...) to access the other cell actions menu at the far right. Then select **Toggle parameter cell** to designate the cell as the parameters cell.
To parameterize your notebook, select the ellipses (...) to access the **more co
-Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values. When a parameters cell isn't designated, the injected cell will be inserted at the top of the notebook.
+Azure Data Factory looks for the parameters cell and uses the values as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters to overwrite the default values. When a parameters cell isn't designated, the injected cell will be inserted at the top of the notebook.
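For illustration, a parameters cell is an ordinary code cell that assigns default values; the injected cell then reassigns those variables with the values passed from the pipeline. The variable names and paths below are placeholders.

```python
# Parameters cell: defaults used when the pipeline passes no values
input_path = "abfss://data@contosolake.dfs.core.windows.net/raw/"
output_path = "abfss://data@contosolake.dfs.core.windows.net/curated/"
run_date = "2021-06-01"
```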
### Assign parameters values from a pipeline
-Once you've created a notebook with parameters, you can execute it from a pipeline with the Synapse notebook activity. After you add the activity to your pipeline canvas, you will be able to set the parameters values under **Base parameters** section on the **Settings** tab.
+Once you've created a notebook with parameters, you can execute it from a pipeline with the Synapse notebook activity. After you add the activity to your pipeline canvas, you'll be able to set the parameter values under the **Base parameters** section on the **Settings** tab.
[![screenshot-showing-assign-a-parameter](./media/synapse-notebook-activity/assign-parameter.png)](./media/synapse-notebook-activity/assign-parameter.png#lightbox)
When assigning parameter values, you can use the [pipeline expression language](
## Read Synapse notebook cell output value
-You can read notebook cell output value in subsequent activities follow steps below:
+You can read the notebook cell output value in downstream activities by following the steps below:
1. Call [mssparkutils.notebook.exit](./spark/microsoft-spark-utilities.md#exit-a-notebook) API in your Synapse notebook activity to return the value that you want to show in activity output, for example: ```python mssparkutils.notebook.exit("hello world") ```
- Saving the notebook content and retrigger the pipeline, the notebook activity output will contain the exitValue that can be consumed for subsequent activities in step 2.
+ Save the notebook content and retrigger the pipeline. The notebook activity output will contain the exitValue that can be consumed by the following activities in step 2.
2. Read exitValue property from notebook activity output.
-Here is a sample expression that is used to check whether the exitValue fetched from the notebook activity output equals to ΓÇ£hello worldΓÇ¥ or not:
+Here's a sample expression that checks whether the exitValue fetched from the notebook activity output equals "hello world":
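As a sketch of the expression's shape, assuming the notebook activity is named `Notebook1` (the exact property path can vary, so confirm it against the activity's output JSON):

```
@equals(activity('Notebook1').output.status.Output.result.exitValue, 'hello world')
```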
[![screenshot-showing-read-exit-value](./media/synapse-notebook-activity/synapse-read-exit-value.png)](./media/synapse-notebook-activity/synapse-read-exit-value.png#lightbox) ## Run another Synapse notebook
-You can reference other notebook in a Synapse notebook activity via calling [%run magic](./spark/apache-spark-development-using-notebooks.md#notebook-reference) or [mssparkutils notebook utilities](./spark/microsoft-spark-utilities.md#notebook-utilities). Both support nesting function calls. Here are key differences of these two methods. You can decide which one to use based on your scenario.
+You can reference other notebooks in a Synapse notebook activity via calling [%run magic](./spark/apache-spark-development-using-notebooks.md#notebook-reference) or [mssparkutils notebook utilities](./spark/microsoft-spark-utilities.md#notebook-utilities). Both support nesting function calls. The key differences between these two methods, which you should consider based on your scenario, are:
-- [%run magic](./spark/apache-spark-development-using-notebooks.md#notebook-reference) copies all cells from the referenced notebook to the %run cell and share the variable context. When notebook1 reference notebook2 via `%run notebook2` and notebook2 calls a [mssparkutils.notebook.exit](./spark/microsoft-spark-utilities.md#exit-a-notebook) function. The cell execution in notebook1 will be stopped. We recommend you to use %run magic when you want to "include" a notebook file.-- [mssparkutils notebook utilities](./spark/microsoft-spark-utilities.md#notebook-utilities) calls the referenced notebook as a method or a function. The variable context is not shared. When notebook1 reference notebook2 via `mssparkutils.notebook.run("notebook2")` and notebook2 calls a [mssparkutils.notebook.exit](./spark/microsoft-spark-utilities.md#exit-a-notebook) function. The cell execution in notebook1 will continue. We recommend you to use mssparkutils notebook utilities when you want to "import" a notebook.
+- [%run magic](./spark/apache-spark-development-using-notebooks.md#notebook-reference) copies all cells from the referenced notebook to the %run cell and shares the variable context. When notebook1 references notebook2 via `%run notebook2` and notebook2 calls a [mssparkutils.notebook.exit](./spark/microsoft-spark-utilities.md#exit-a-notebook) function, the cell execution in notebook1 will be stopped. We recommend you use %run magic when you want to "include" a notebook file.
+- [mssparkutils notebook utilities](./spark/microsoft-spark-utilities.md#notebook-utilities) calls the referenced notebook as a method or a function. The variable context isn't shared. When notebook1 references notebook2 via `mssparkutils.notebook.run("notebook2")` and notebook2 calls a [mssparkutils.notebook.exit](./spark/microsoft-spark-utilities.md#exit-a-notebook) function, the cell execution in notebook1 will continue. We recommend you use mssparkutils notebook utilities when you want to "import" a notebook.
>[!Note]
-> Run another Synapse notebook from a Synapse pipeline only work for notebook with Preview enabled.
+> Running another Synapse notebook from a Synapse pipeline will only work for a notebook with Preview enabled.
## See notebook activity run history
-Go to **Pipeline runs** under **Monitor** tab, you can see the pipeline you have triggered. Open the pipeline that contains notebook activity to see the run history.
-You can see the latest notebook run snapshot including both cells input and output via clicking the **open notebook** button.
+Go to **Pipeline runs** under the **Monitor** tab to see the pipeline you triggered. Open the pipeline that contains the notebook activity to see the run history.
+
+You can see the latest notebook run snapshot, including both cell input and output, by selecting the **open notebook** button.
![see-notebook-activity-history](./media/synapse-notebook-activity/input-output-open-notebook.png)
-You can see the notebook activity input or output via clicking the **input** or **Output** button. If your pipeline failed with user error, you can click the **output** to check the **result** field to see the detailed user error traceback.
+You can see the notebook activity input or output by selecting the **Input** or **Output** button. If your pipeline failed with a user error, select **Output** and check the **result** field to see the detailed user error traceback.
![screenshot-showing-see-output-user-error](./media/synapse-notebook-activity/notebook-output-user-error.png) - ## Synapse notebook activity definition -
-Here is the sample JSON definition of a Synapse notebook activity:
+Here's the sample JSON definition of a Synapse notebook activity:
```json {
Here is the sample JSON definition of a Synapse notebook activity:
## Synapse notebook activity output
-Here is the sample JSON of a Synapse notebook activity output:
+Here's the sample JSON of a Synapse notebook activity output:
```json
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/whats-new.md
The Azure Virtual Desktop agent updates at least once per month.
Here's what's changed in the Azure Virtual Desktop Agent: -- Version 1.0.2944.1400 for production and version 1.0.2990.800 for all validation host pools: This update was released April 27, 2021.-- Version 1.0.2990.800: This update was released April 13, 2021 and has the following changes:
+- Version 1.0.3130.1200: This update was released May 2021 for validation pools and has the following changes:
+ - General improvements and bug fixes.
+ - Fixes an issue with getting the host pool path for Intune registration.
+ - Added logging to better diagnose agent issues.
+- Version 1.0.3050.1200: This update was released May 2021 for validation pools and has the following changes:
+ - Updated internal monitors for agent health.
+ - Updated retry logic for stack health.
+- Version 1.0.2990.1500: This update was released April 2021 and has the following changes:
- Updated agent error messages.
- - Adds an exception that prevents you from installing non-Windows 7 agents on Windows 7 VMs.
+ - Added an exception that prevents you from installing non-Windows 7 agents on Windows 7 VMs.
- Has updated heartbeat service logic.-- Version 1.0.2944.1400: This update was released April 7, 2021 and has the following changes:
- - Placed links to the Azure Virtual Desktop Agent troubleshooting guide in the event viewer logs for agent errors.
+- Version 1.0.2944.1400: This update was released April 2021 and has the following changes:
+ - Placed links to the Windows Virtual Desktop Agent troubleshooting guide in the event viewer logs for agent errors.
- Added an additional exception for better error handling. - Added the WVDAgentUrlTool.exe that allows customers to check which required URLs they can access.-- Version 1.0.2866.1500: This update was released March 26, 2021 and it fixes an issue with the stack health check.-- Version 1.0.2800.2802: This update was released March 10, 2021 and it has general improvements and bug fixes.-- Version 1.0.2800.2800: This update was released March 2, 2021 and it fixes a reverse connection issue.-- Version 1.0.2800.2700: This update was released February 10, 2021 and it has general improvements and bug fixes.-- Version 1.0.2800.2700: This update was released February 4, 2021 and it fixes an access denied orchestration issue.
+- Version 1.0.2866.1500: This update was released March 2021 and it fixes an issue with the stack health check.
+- Version 1.0.2800.2802: This update was released March 2021 and it has general improvements and bug fixes.
+- Version 1.0.2800.2800: This update was released March 2021 and it fixes a reverse connection issue.
+- Version 1.0.2800.2700: This update was released February 2021 and it fixes an access denied orchestration issue.
## FSLogix updates
virtual-machines Disk Bursting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disk-bursting.md
Title: Managed disk bursting
description: Learn about disk bursting for Azure disks and Azure virtual machines. Previously updated : 06/16/2021 Last updated : 06/24/2021
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 05/12/2021 Last updated : 06/24/2021
To learn more about individual VM types and sizes in Azure for Windows or Linux,
### Disk size [!INCLUDE [disk-storage-premium-ssd-sizes](../../includes/disk-storage-premium-ssd-sizes.md)]
-When you provision a premium storage disk, unlike standard storage, you are guaranteed the capacity, IOPS, and throughput of that disk. For example, if you create a P50 disk, Azure provisions 4,095-GB storage capacity, 7,500 IOPS, and 250-MB/s throughput for that disk. Your application can use all or part of the capacity and performance. Premium SSD disks are designed to provide low single-digit millisecond latencies and target IOPS and throughput described in the preceding table 99.9% of the time.
+When you provision a premium storage disk, unlike standard storage, you are guaranteed the capacity, IOPS, and throughput of that disk. For example, if you create a P50 disk, Azure provisions 4,095-GB storage capacity, 7,500 IOPS, and 250-MB/s throughput for that disk. Your application can use all or part of the capacity and performance. Premium SSDs are designed to provide low single-digit millisecond latencies and target IOPS and throughput described in the preceding table 99.9% of the time.
## Bursting
-Premium SSD sizes smaller than P30 now offer disk bursting and can burst their IOPS per disk up to 3,500 and their bandwidth up to 170 MB/s. Bursting is automated and operates based on a credit system. Credits are automatically accumulated in a burst bucket when disk traffic is below the provisioned performance target and credits are automatically consumed when traffic bursts beyond the target, up to the max burst limit. The max burst limit defines the ceiling of disk IOPS & Bandwidth even if you have burst credits to consume from. Disk bursting provides better tolerance on unpredictable changes of IO patterns. You can best leverage it for OS disk boot and applications with spiky traffic.
-
-Disks bursting support will be enabled on new deployments of applicable disk sizes by default, with no user action required. For existing disks of the applicable sizes, you can enable bursting with either of two the options: detach and reattach the disk or stop and restart the attached VM. All burst applicable disk sizes will start with a full burst credit bucket when the disk is attached to a Virtual Machine that supports a max duration at peak burst limit of 30 mins. To learn more about how bursting work on Azure Disks, see [Premium SSD bursting](./disk-bursting.md).
+Premium SSDs offer disk bursting. Disk bursting provides better tolerance on unpredictable changes of IO patterns. You can best leverage it for OS disk boot and applications with spiky traffic. To learn more about how bursting for Azure disks works, see [Disk-level bursting](disk-bursting.md#disk-level-bursting).
### Transactions
Standard SSDs are designed to provide single-digit millisecond latencies and the
For standard SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB. These transactions have a billing impact.
+### Bursting
+
+Standard SSDs offer disk bursting. Disk bursting provides better tolerance on unpredictable changes of IO patterns. You can best leverage it for OS disk boot and applications with spiky traffic. To learn more about how bursting for Azure disks works, see [Disk-level bursting](disk-bursting.md#disk-level-bursting).
+ ## Standard HDD Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-insensitive workloads. With standard storage, the data is stored on hard disk drives (HDDs). Latency, IOPS, and Throughput of Standard HDD disks may vary more widely as compared to SSD-based disks. Standard HDD Disks are designed to deliver write latencies under 10ms and read latencies under 20ms for most IO operations, however the actual performance may vary depending on the IO size and workload pattern. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs.
virtual-machines Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/guest-configuration.md
To deploy the extension for Linux:
"properties": { "publisher": "Microsoft.GuestConfiguration", "type": "ConfigurationforLinux",
- "typeHandlerVersion": "1.0"
+ "typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true, "settings": {}, "protectedSettings": {}
To deploy the extension for Windows:
"properties": { "publisher": "Microsoft.GuestConfiguration", "type": "ConfigurationforWindows",
- "typeHandlerVersion": "1.0"
+ "typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true, "settings": {}, "protectedSettings": {}
virtual-machines Os Upgrade Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/os-upgrade-hana-large-instance.md
Title: Operating system upgrade for the SAP HANA on Azure (Large Instances)| Microsoft Docs
-description: Learn to perform an operating system upgrade for SAP HANA on Azure (Large Instances).
+description: Learn to do an operating system upgrade for SAP HANA on Azure (Large Instances).
documentationcenter:
vm-linux Previously updated : 06/23/2021 Last updated : 06/24/2021 # Operating System Upgrade
-This article describes the details of operating system (OS) upgrades on HANA Large Instances, otherwise known as BareMetal Infrastructure.
+This article describes the details of operating system (OS) upgrades on HANA Large Instances (HLI), otherwise known as BareMetal Infrastructure.
>[!NOTE]
->The OS upgrade is customer's responsibility, Microsoft operations support can guide you to the key areas to watch out during the upgrade. You should consult your operating system vendor as well before you plan for an upgrade.
+>Upgrading the OS is your responsibility. Microsoft operations support can guide you in key areas of the upgrade, but consult your operating system vendor as well when planning an upgrade.
-During HLI unit provisioning, the Microsoft operations team installs the operating system.
-Over the time, you are required to maintain the operating system (Example: Patching, tuning, upgrading etc.) on the HLI unit.
-
-Before you do major changes to the operating system (for example, Upgrade SP1 to SP2), you shall contact Microsoft Operations team by opening a support ticket to consult.
+During HLI provisioning, the Microsoft operations team installs the operating system.
+You're required to maintain the operating system. For example, you need to do the patching, tuning, upgrading, and so on, on the HLI. Before you make major changes to the operating system, for example, upgrade SP1 to SP2, contact the Microsoft Operations team by opening a support ticket. Then they can consult with you. We recommend opening this ticket at least one week before the upgrade.
Include in your ticket: * Your HLI subscription ID. * Your server name.
-* The patch level you are planning to apply.
-* The date you are planning this change.
-
-We would recommend you open this ticket at least one week prior to the desirable upgrade, which will let opration team know about the desired firmware version.
+* The patch level you're planning to apply.
+* The date you're planning this change.
For the support matrix of the different SAP HANA versions with the different Linux versions, see [SAP Note #2235581](https://launchpad.support.sap.com/#/notes/2235581). ## Known issues
-The following are the few common known issues during the upgrade:
-- On SKU Type II class SKU, the software foundation software (SFS) is removed after the OS upgrade. You need to reinstall the compatible SFS after the OS upgrade.-- Ethernet card drivers (ENIC and FNIC) rolled back to older version. You need to reinstall the compatible version of the drivers after the upgrade.
+There are a couple of known issues with the upgrade:
+- On SKU Type II class SKU, the software foundation software (SFS) is removed during the OS upgrade. You'll need to reinstall the compatible SFS after the OS upgrade is complete.
+- Ethernet card drivers (ENIC and FNIC) are rolled back to an older version. You'll need to reinstall the compatible version of the drivers after the upgrade.
## SAP HANA Large Instance (Type I) recommended configuration
-Operating system configuration can drift from the recommended settings over time due to patching, system upgrades, and changes made by customers. Additionally, Microsoft identifies updates needed for existing systems to ensure they are optimally configured for the best performance and resiliency. Following instructions outline recommendations that address network performance, system stability, and optimal HANA performance.
+The OS configuration can drift from the recommended settings over time. This drift can occur because of patching, system upgrades, and other changes you may make. Microsoft identifies updates needed to ensure HANA Large Instances are optimally configured for the best performance and resiliency. The following instructions outline recommendations that address network performance, system stability, and optimal HANA performance.
### Compatible eNIC/fNIC driver versions
- In order to have proper network performance and system stability, it is advised to ensure the OS-specific appropriate version of eNIC and fNIC drivers are installed as depicted in following compatibility table. Servers are delivered to customers with compatible versions. In some cases, during OS/Kernel patching, drivers can get rolled back to the default driver versions. Ensure appropriate driver version is running post OS/Kernel patching operations.
+ To have proper network performance and system stability, ensure the appropriate OS-specific version of eNIC and fNIC drivers are installed per the following compatibility table. Servers are delivered to customers with compatible versions. However, drivers can get rolled back to default versions during OS/kernel patching. Ensure the appropriate driver version is running post OS/kernel patching operations.
| OS Vendor | OS Package Version | Firmware Version | eNIC Driver | fNIC Driver |
rpm -qa | grep enic/fnic
``` rpm -e <old-rpm-package> ```
-#### Install the recommended eNIC/fNIC driver packages
+#### Install recommended eNIC/fNIC driver packages
``` rpm -ivh <enic/fnic.rpm> ```
-#### Commands to confirm the installation
+#### Commands to confirm installation
``` modinfo enic modinfo fnic ```
-#### Steps for eNIC/fNIC drivers installation during OS Upgrade
+#### Steps for eNIC/fNIC drivers installation during OS upgrade
* Upgrade OS version * Remove old rpm packages
modinfo fnic
### SuSE HLIs GRUB update failure
-SAP on Azure HANA Large Instances (Type I) can be in a non-bootable state after upgrade. The below procedure fixes this issue.
+SAP on Azure HANA Large Instances (Type I) can be in a non-bootable state after upgrade. The following procedure fixes this issue.
+ #### Execution Steps
+- Execute the `multipath -ll` command.
+- Get the logical unit number (LUN) ID or use the command: `fdisk -l | grep mapper`
+- Update the `/etc/default/grub_installdevice` file with line `/dev/mapper/<LUN ID>`. Example: /dev/mapper/3600a09803830372f483f495242534a56
-* Execute `multipath -ll` command.
-* Get the LUN ID whose size is approximately 50G or use the command: `fdisk -l | grep mapper`
-* Update `/etc/default/grub_installdevice` file with line `/dev/mapper/<LUN ID>`. Example: /dev/mapper/3600a09803830372f483f495242534a56
>[!NOTE]
->LUN ID varies from server to server.
+>The LUN ID varies from server to server.
-### Disable EDAC
- The Error Detection And Correction (EDAC) module helps in detecting and correcting memory errors. However, the underlying hardware for SAP HANA on Azure Large Instances (Type I) is already performing the same function. Having the same feature enabled at the hardware and operating system (OS) levels can cause conflicts and can lead to occasional, unplanned shutdowns of the server. Therefore, it is recommended to disable the module from the OS.
+### Disable Error Detection And Correction
+ Error Detection And Correction (EDAC) modules help detect and correct memory errors. However, the underlying HLI Type I hardware already detects and corrects memory errors. Enabling the same feature at the hardware and OS levels can cause conflicts and lead to unplanned shutdowns of the server. We recommend disabling the EDAC modules from the OS.
#### Execution Steps
-* Check if EDAC module is enabled. If an output is returned in below command, that means the module is enabled.
+- Check whether the EDAC modules are enabled. If an output is returned from the following command, the modules are enabled.
+ ``` lsmod | grep -i edac ```
-* Disable the modules by appending the following lines to the file `/etc/modprobe.d/blacklist.conf`
+- Disable the modules by appending the following lines to the file `/etc/modprobe.d/blacklist.conf`
``` blacklist sb_edac blacklist edac_core ```
-A reboot is required to take changes in place. Execute `lsmod` command and verify the module is not present there in output.
+A reboot is required for the changes to take place. After reboot, execute the `lsmod` command again and verify the modules aren't enabled.
### Kernel parameters
- Make sure the correct setting for `transparent_hugepage`, `numa_balancing`, `processor.max_cstate`, `ignore_ce` and `intel_idle.max_cstate` are applied.
+Make sure the correct settings for `transparent_hugepage`, `numa_balancing`, `processor.max_cstate`, `ignore_ce`, and `intel_idle.max_cstate` are applied.
* intel_idle.max_cstate=1 * processor.max_cstate=1
A reboot is required to take changes in place. Execute `lsmod` command and verif
#### Execution Steps
-* Add these parameters to the `GRB_CMDLINE_LINUX` line in the file `/etc/default/grub`
+- Add these parameters to the `GRUB_CMDLINE_LINUX` line in the file `/etc/default/grub`:
+ ``` intel_idle.max_cstate=1 processor.max_cstate=1 transparent_hugepage=never numa_balancing=disable mce=ignore_ce ```
-* Create a new grub file.
+- Create a new grub file.
``` grub2-mkconfig -o /boot/grub2/grub.cfg ```
-* Reboot system.
-
+- Reboot your system.
## Next steps-- Refer [Backup and restore](hana-overview-high-availability-disaster-recovery.md) for OS backup Type I SKU class.-- Refer [OS Backup](./large-instance-os-backup.md) for HLI.+
+Learn to set up an SMT server for SUSE Linux.
+
+> [!div class="nextstepaction"]
+> [Set up SMT server for SUSE Linux](hana-setup-smt.md)