Updates from: 05/09/2022 01:05:42
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Multi Service Web App Access Microsoft Graph As App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-app.md
+
+ Title: Tutorial - Web app accesses Microsoft Graph as the app | Azure
+description: In this tutorial, you learn how to access data in Microsoft Graph by using managed identities.
+++++++ Last updated : 04/25/2022++
+ms.devlang: csharp, javascript
+
+#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities.
++
+# Tutorial: Access Microsoft Graph from a secured app as the app
+
+Learn how to access Microsoft Graph from a web app running on Azure App Service.
++
+You want to call Microsoft Graph for the web app. A safe way to give your web app access to data is to use a [system-assigned managed identity](../managed-identities-azure-resources/overview.md). A managed identity from Azure Active Directory allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. You don't have to worry about managing secrets or app credentials.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a system-assigned managed identity on a web app.
+> * Add Microsoft Graph API permissions to a managed identity.
+> * Call Microsoft Graph from a web app by using managed identities.
++
+## Prerequisites
+
+* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](multi-service-web-app-authentication-app-service.md).
+
+## Enable managed identity on app
+
+If you created and published your web app through Visual Studio, managed identity was enabled on your app for you. In your app service, select **Identity** in the left pane and then select **System assigned**. Verify that **Status** is set to **On**. If not, select **Save** and then select **Yes** to enable the system-assigned managed identity. When the managed identity is enabled, the status is set to **On** and the object ID is available.
+
+Take note of the **Object ID** value, which you'll need in the next step.
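+
+If you prefer the Azure CLI, a command along these lines enables the system-assigned managed identity and prints its object (principal) ID; adjust the resource group and app name to match your own.
+
+```azurecli
+# Enable the system-assigned managed identity on the web app and print its object (principal) ID.
+az webapp identity assign \
+    --resource-group securewebappresourcegroup \
+    --name SecureWebApp-20201102125811 \
+    --query principalId --output tsv
+```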
++
+## Grant access to Microsoft Graph
+
+To access Microsoft Graph, the managed identity needs the proper permissions for the operation it wants to perform. Currently, there's no option to assign such permissions through the Azure portal. The following script adds the requested Microsoft Graph API permissions to the managed identity service principal object.
+
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+# Install the module. (You need admin on the machine.)
+# Install-Module AzureAD
+
+# Your tenant ID (in the Azure portal, under Azure Active Directory > Overview).
+$TenantID="<tenant-id>"
+$resourceGroup = "securewebappresourcegroup"
+$webAppName="SecureWebApp-20201102125811"
+
+# Get the ID of the managed identity for the web app.
+$spID = (Get-AzWebApp -ResourceGroupName $resourceGroup -Name $webAppName).identity.principalid
+
+# Check the Microsoft Graph documentation for the permission you need for the operation.
+$PermissionName = "User.Read.All"
+
+Connect-AzureAD -TenantId $TenantID
+
+# Get the service principal for Microsoft Graph.
+# First result should be AppId 00000003-0000-0000-c000-000000000000
+$GraphServicePrincipal = Get-AzureADServicePrincipal -SearchString "Microsoft Graph" | Select-Object -first 1
+
+# Assign permissions to the managed identity service principal.
+$AppRole = $GraphServicePrincipal.AppRoles | `
+Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}
+
+New-AzureAdServiceAppRoleAssignment -ObjectId $spID -PrincipalId $spID `
+-ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az login
+
+webAppName="SecureWebApp-20201106120003"
+
+spId=$(az resource list -n $webAppName --query [*].identity.principalId --out tsv)
+
+graphResourceId=$(az ad sp list --display-name "Microsoft Graph" --query [0].objectId --out tsv)
+
+appRoleId=$(az ad sp list --display-name "Microsoft Graph" --query "[0].appRoles[?value=='User.Read.All' && contains(allowedMemberTypes, 'Application')].id" --output tsv)
+
+uri=https://graph.microsoft.com/v1.0/servicePrincipals/$spId/appRoleAssignments
+
+body="{'principalId':'$spId','resourceId':'$graphResourceId','appRoleId':'$appRoleId'}"
+
+az rest --method post --uri $uri --body $body --headers "Content-Type=application/json"
+```
+++
+After executing the script, you can verify in the [Azure portal](https://portal.azure.com) that the requested API permissions are assigned to the managed identity.
+
+Go to **Azure Active Directory**, and then select **Enterprise applications**. This pane displays all the service principals in your tenant. In **Managed Identities**, select the service principal for the managed identity.
+
+If you're following this tutorial, there are two service principals with the same display name (SecureWebApp2020094113531, for example). The service principal that has a **Homepage URL** represents the web app in your tenant. The service principal that appears in **Managed Identities** should *not* have a **Homepage URL** listed and the **Object ID** should match the object ID value of the managed identity in the [previous step](#enable-managed-identity-on-app).
+
+Select the service principal for the managed identity.
++
+In **Overview**, select **Permissions**, and you'll see the added permissions for Microsoft Graph.
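+
+You can also list the assigned app roles from the command line by calling Microsoft Graph through `az rest`, using the managed identity's object ID from the earlier step:
+
+```azurecli
+# List the app roles (application permissions) assigned to the managed identity's service principal.
+az rest --method get \
+    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<managed-identity-object-id>/appRoleAssignments"
+```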
++
+## Call Microsoft Graph
+
+# [C#](#tab/programming-language-csharp)
+
+The [ChainedTokenCredential](/dotnet/api/azure.identity.chainedtokencredential), [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential), and [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) classes are used to get a token credential for your code to authorize requests to Microsoft Graph. Create an instance of the [ChainedTokenCredential](/dotnet/api/azure.identity.chainedtokencredential) class, which uses the managed identity in the App Service environment or the development environment variables to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which gets the users in the group.
+
+To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
+
+### Install the Microsoft.Identity.Web.MicrosoftGraph client library package
+
+Install the [Microsoft.Identity.Web.MicrosoftGraph NuGet package](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+
+#### .NET Core command-line
+
+Open a command line, and switch to the directory that contains your project file.
+
+Run the install commands.
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+```
+
+#### Package Manager Console
+
+Open the project/solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
+
+Run the install commands.
+```powershell
+Install-Package Microsoft.Identity.Web.MicrosoftGraph
+```
+
+### Example
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc.RazorPages;
+using Azure.Identity;
+using Microsoft.Graph;
+using System.Net.Http.Headers;
+
+...
+
+public IList<MSGraphUser> Users { get; set; }
+
+public async Task OnGetAsync()
+{
+ // Create the Graph service client with a ChainedTokenCredential which gets an access
+ // token using the available Managed Identity or environment variables if running
+ // in development.
+ var credential = new ChainedTokenCredential(
+ new ManagedIdentityCredential(),
+ new EnvironmentCredential());
+ var token = credential.GetToken(
+ new Azure.Core.TokenRequestContext(
+ new[] { "https://graph.microsoft.com/.default" }));
+
+ var accessToken = token.Token;
+ var graphServiceClient = new GraphServiceClient(
+ new DelegateAuthenticationProvider((requestMessage) =>
+ {
+ requestMessage
+ .Headers
+ .Authorization = new AuthenticationHeaderValue("bearer", accessToken);
+
+ return Task.CompletedTask;
+ }));
+
+ // MSGraphUser is a DTO class being used to hold User information from the graph service client call
+ List<MSGraphUser> msGraphUsers = new List<MSGraphUser>();
+ try
+ {
+ var users =await graphServiceClient.Users.Request().GetAsync();
+ foreach(var u in users)
+ {
+ MSGraphUser user = new MSGraphUser();
+ user.userPrincipalName = u.UserPrincipalName;
+ user.displayName = u.DisplayName;
+ user.mail = u.Mail;
+ user.jobTitle = u.JobTitle;
+
+ msGraphUsers.Add(user);
+ }
+ }
+ catch(Exception ex)
+ {
+ string msg = ex.Message;
+ }
+
+ Users = msGraphUsers;
+}
+```
+
+# [Node.js](#tab/programming-language-nodejs)
+
+The `DefaultAzureCredential` class from the [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Microsoft Graph. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which gets the users in the group.
+
+To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/3-WebApp-graphapi-managed-identity).
+
+### Example
+
+```nodejs
+const graphHelper = require('../utils/graphHelper');
+const { DefaultAzureCredential } = require("@azure/identity");
+
+exports.getUsersPage = async(req, res, next) => {
+
+ const defaultAzureCredential = new DefaultAzureCredential();
+
+ try {
+ const tokenResponse = await defaultAzureCredential.getToken("https://graph.microsoft.com/.default");
+
+ const graphClient = graphHelper.getAuthenticatedClient(tokenResponse.token);
+
+ const users = await graphClient
+ .api('/users')
+ .get();
+
+ res.render('users', { user: req.session.user, users: users });
+ } catch (error) {
+ next(error);
+ }
+}
+```
+
+To query Microsoft Graph, the sample uses the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/3-WebApp-graphapi-managed-identity/controllers/graphController.js) of the full sample:
+
+```nodejs
+getAuthenticatedClient = (accessToken) => {
+ // Initialize Graph client
+ const client = graph.Client.init({
+ // Use the provided access token to authenticate requests
+ authProvider: (done) => {
+ done(null, accessToken);
+ }
+ });
+
+ return client;
+}
+```
++
+## Clean up resources
+
+If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Clean up resources](multi-service-web-app-clean-up-resources.md)
active-directory Multi Service Web App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-microsoft-graph-as-user.md
+
+ Title: Tutorial - Web app accesses Microsoft Graph as the user | Azure
+description: In this tutorial, you learn how to access data in Microsoft Graph from a web app for a signed-in user.
+++++++ Last updated : 04/25/2022++
+ms.devlang: csharp, javascript
+
+#Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph from a web app for a signed-in user.
++
+# Tutorial: Access Microsoft Graph from a secured app as the user
+
+Learn how to access Microsoft Graph from a web app running on Azure App Service.
++
+You want to add access to Microsoft Graph from your web app and perform some action as the signed-in user. This section describes how to grant delegated permissions to the web app and get the signed-in user's profile information from Azure Active Directory (Azure AD).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Grant delegated permissions to a web app.
+> * Call Microsoft Graph from a web app for a signed-in user.
++
+## Prerequisites
+
+* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](multi-service-web-app-authentication-app-service.md).
+
+## Grant front-end access to call Microsoft Graph
+
+Now that you've enabled authentication and authorization on your web app, the web app is registered with the Microsoft identity platform and is backed by an Azure AD application. In this step, you give the web app permissions to access Microsoft Graph for the user. (Technically, you give the web app's Azure AD application the permissions to access the Microsoft Graph Azure AD application for the user.)
+
+In the [Azure portal](https://portal.azure.com) menu, select **Azure Active Directory** or search for and select **Azure Active Directory** from any page.
+
+Select **App registrations** > **Owned applications** > **View all applications in this directory**. Select your web app name, and then select **API permissions**.
+
+Select **Add a permission**, and then select **Microsoft APIs** and **Microsoft Graph**.
+
+Select **Delegated permissions**, and then select **User.Read** from the list. Select **Add permissions**.
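+
+If you prefer to script this step, commands along the following lines add and grant the delegated permission. They assume `<app-registration-client-id>` is the application (client) ID of your web app's registration, and that the GUID shown is the commonly documented ID of the Microsoft Graph `User.Read` delegated permission; confirm both values in your tenant before running the commands.
+
+```azurecli
+# Add the Microsoft Graph User.Read delegated permission to the app registration.
+az ad app permission add \
+    --id <app-registration-client-id> \
+    --api 00000003-0000-0000-c000-000000000000 \
+    --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+
+# Grant the delegated permission (creates an OAuth2 permission grant).
+az ad app permission grant \
+    --id <app-registration-client-id> \
+    --api 00000003-0000-0000-c000-000000000000 \
+    --scope User.Read
+```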
+
+## Configure App Service to return a usable access token
+
+The web app now has the required permissions to access Microsoft Graph as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing Microsoft Graph. For this step, you need to add the User.Read scope for the downstream service (Microsoft Graph): `https://graph.microsoft.com/User.Read`.
+
+> [!IMPORTANT]
+> If you don't configure App Service to return a usable access token, you receive a ```CompactToken parsing failed with error code: 80049217``` error when you call Microsoft Graph APIs in your code.
+
+# [Azure Resource Explorer](#tab/azure-resource-explorer)
+Go to [Azure Resource Explorer](https://resources.azure.com/) and, using the resource tree, locate your web app. The resource URL should be similar to `https://resources.azure.com/subscriptions/subscriptionId/resourceGroups/SecureWebApp/providers/Microsoft.Web/sites/SecureWebApp20200915115914`.
+
+The Azure Resource Explorer is now opened with your web app selected in the resource tree. At the top of the page, select **Read/Write** to enable editing of your Azure resources.
+
+In the left browser, drill down to **config** > **authsettingsV2**.
+
+In the **authsettingsV2** view, select **Edit**. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]` .
+
+```json
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
+ ]
+ }
+ }
+ }
+},
+```
+
+Save your settings by selecting **PUT**. This setting can take several minutes to take effect. Your web app is now configured to access Microsoft Graph with a proper access token. If you don't configure this setting, Microsoft Graph returns an error saying that the format of the compact token is incorrect.
+
+# [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI to call the App Service Web App REST APIs to [get](/rest/api/appservice/web-apps/get-auth-settings) and [update](/rest/api/appservice/web-apps/update-auth-settings) the auth configuration settings so your web app can call Microsoft Graph. Open a command window and sign in to the Azure CLI:
+
+```azurecli
+az login
+```
+
+Get your existing `config/authsettingsv2` settings and save them to a local *authsettings.json* file.
+
+```azurecli
+az rest --method GET --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2/list?api-version=2020-06-01' > authsettings.json
+```
+
+Open the authsettings.json file using your preferred text editor. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","scope=openid offline_access profile https://graph.microsoft.com/User.Read" ]` .
+
+```json
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "scope=openid offline_access profile https://graph.microsoft.com/User.Read"
+ ]
+ }
+ }
+ }
+},
+```
+
+Save your changes to the *authsettings.json* file and upload the local settings to your web app:
+
+```azurecli
+az rest --method PUT --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2?api-version=2020-06-01' --body @./authsettings.json
+```
++
+## Call Microsoft Graph
+
+Your web app now has the required permissions, and the Microsoft Graph `User.Read` scope is included in the login parameters.
+
+# [C#](#tab/programming-language-csharp)
+Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-identity-web/), the web app gets an access token for authentication with Microsoft Graph. In version 1.2.0 and later, the Microsoft.Identity.Web library integrates with and can run alongside the App Service authentication/authorization module. Microsoft.Identity.Web detects that the web app is hosted in App Service and gets the access token from the App Service authentication/authorization module. The access token is then attached to authenticated requests to the Microsoft Graph API.
+
+To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+
+> [!NOTE]
+> The Microsoft.Identity.Web library isn't required in your web app for basic authentication/authorization or to authenticate requests with Microsoft Graph. It's possible to [securely call downstream APIs](/azure/app-service/tutorial-auth-aad#call-api-securely-from-server-code) with only the App Service authentication/authorization module enabled.
+>
+> However, the App Service authentication/authorization is designed for more basic authentication scenarios. For more complex scenarios (handling custom claims, for example), you need the Microsoft.Identity.Web library or [Microsoft Authentication Library](msal-overview.md). There's a little more setup and configuration work in the beginning, but the Microsoft.Identity.Web library can run alongside the App Service authentication/authorization module. Later, when your web app needs to handle more complex scenarios, you can disable the App Service authentication/authorization module and Microsoft.Identity.Web will already be a part of your app.
+
+### Install client library packages
+
+Install the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) and [Microsoft.Identity.Web.MicrosoftGraph](https://www.nuget.org/packages/Microsoft.Identity.Web.MicrosoftGraph) NuGet packages in your project by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+
+#### .NET Core command line
+
+Open a command line, and switch to the directory that contains your project file.
+
+Run the install commands.
+
+```dotnetcli
+dotnet add package Microsoft.Identity.Web.MicrosoftGraph
+
+dotnet add package Microsoft.Identity.Web
+```
+
+#### Package Manager Console
+
+Open the project/solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
+
+Run the install commands.
+```powershell
+Install-Package Microsoft.Identity.Web.MicrosoftGraph
+
+Install-Package Microsoft.Identity.Web
+```
+
+### Startup.cs
+
+In the *Startup.cs* file, the ```AddMicrosoftIdentityWebApp``` method adds Microsoft.Identity.Web to your web app. The ```AddMicrosoftGraph``` method adds Microsoft Graph support.
+
+```csharp
+using Microsoft.AspNetCore.Builder;
+using Microsoft.AspNetCore.Hosting;
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Identity.Web;
+using Microsoft.AspNetCore.Authentication.OpenIdConnect;
+
+// Some code omitted for brevity.
+public class Startup
+{
+ // This method gets called by the runtime. Use this method to add services to the container.
+ public void ConfigureServices(IServiceCollection services)
+ {
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
+ .EnableTokenAcquisitionToCallDownstreamApi()
+ .AddMicrosoftGraph(Configuration.GetSection("Graph"))
+ .AddInMemoryTokenCaches();
+
+ services.AddRazorPages();
+ }
+}
+
+```
+
+### appsettings.json
+
+*AzureAd* specifies the configuration for the Microsoft.Identity.Web library. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** from the portal menu and then select **App registrations**. Select the app registration created when you enabled the App Service authentication/authorization module. (The app registration should have the same name as your web app.) You can find the tenant ID and client ID in the app registration overview page. The domain name can be found in the Azure AD overview page for your tenant.
+
+*Graph* specifies the Microsoft Graph endpoint and the initial scopes needed by the app.
+
+```json
+{
+ "AzureAd": {
+ "Instance": "https://login.microsoftonline.com/",
+ "Domain": "fourthcoffeetest.onmicrosoft.com",
+ "TenantId": "[tenant-id]",
+ "ClientId": "[client-id]",
+ // To call an API
+ "ClientSecret": "[secret-from-portal]", // Not required by this scenario
+ "CallbackPath": "/signin-oidc"
+ },
+
+ "Graph": {
+ "BaseUrl": "https://graph.microsoft.com/v1.0",
+ "Scopes": "user.read"
+ },
+ "Logging": {
+ "LogLevel": {
+ "Default": "Information",
+ "Microsoft": "Warning",
+ "Microsoft.Hosting.Lifetime": "Information"
+ }
+ },
+ "AllowedHosts": "*"
+}
+```
+
+### Index.cshtml.cs
+
+The following example shows how to call Microsoft Graph as the signed-in user and get some user information. The ```GraphServiceClient``` object is injected into the page model, and authentication has been configured for you by the Microsoft.Identity.Web library.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Mvc.RazorPages;
+using Microsoft.Graph;
+using System.IO;
+using Microsoft.Identity.Web;
+using Microsoft.Extensions.Logging;
+
+// Some code omitted for brevity.
+
+[AuthorizeForScopes(Scopes = new[] { "user.read" })]
+public class IndexModel : PageModel
+{
+ private readonly ILogger<IndexModel> _logger;
+ private readonly GraphServiceClient _graphServiceClient;
+
+ public IndexModel(ILogger<IndexModel> logger, GraphServiceClient graphServiceClient)
+ {
+ _logger = logger;
+ _graphServiceClient = graphServiceClient;
+ }
+
+ public async Task OnGetAsync()
+ {
+ try
+ {
+ var user = await _graphServiceClient.Me.Request().GetAsync();
+ ViewData["Me"] = user;
+ ViewData["name"] = user.DisplayName;
+
+ using (var photoStream = await _graphServiceClient.Me.Photo.Content.Request().GetAsync())
+ {
+ byte[] photoByte = ((MemoryStream)photoStream).ToArray();
+ ViewData["photo"] = Convert.ToBase64String(photoByte);
+ }
+ }
+ catch (Exception ex)
+ {
+ ViewData["photo"] = null;
+ }
+ }
+}
+```
+
+# [Node.js](#tab/programming-language-nodejs)
+
+The web app gets the user's access token from the incoming request headers and passes it to the Microsoft Graph client to make an authenticated request to the `/me` endpoint.
+
+To see this code as part of a sample application, see *graphController.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/2-WebApp-graphapi-on-behalf).
+
+```nodejs
+const graphHelper = require('../utils/graphHelper');
+
+// Some code omitted for brevity.
+
+exports.getProfilePage = async(req, res, next) => {
+
+ try {
+ const graphClient = graphHelper.getAuthenticatedClient(req.session.protectedResources["graphAPI"].accessToken);
+
+ const profile = await graphClient
+ .api('/me')
+ .get();
+
+ res.render('profile', { isAuthenticated: req.session.isAuthenticated, profile: profile, appServiceName: appServiceName });
+ } catch (error) {
+ next(error);
+ }
+}
+```
+
+To query Microsoft Graph, use the [Microsoft Graph JavaScript SDK](https://github.com/microsoftgraph/msgraph-sdk-javascript). The code for this is located in [utils/graphHelper.js](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/blob/main/2-WebApp-graphapi-on-behalf/utils/graphHelper.js):
+
+```nodejs
+const graph = require('@microsoft/microsoft-graph-client');
+
+// Some code omitted for brevity.
+
+getAuthenticatedClient = (accessToken) => {
+ // Initialize Graph client
+ const client = graph.Client.init({
+ // Use the provided access token to authenticate requests
+ authProvider: (done) => {
+ done(null, accessToken);
+ }
+ });
+
+ return client;
+}
+```
++
+## Clean up resources
+
+If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App service accesses Microsoft Graph as the app](multi-service-web-app-access-microsoft-graph-as-app.md)
active-directory Multi Service Web App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-access-storage.md
+
+ Title: Tutorial - Web app accesses storage by using managed identities | Azure
+description: In this tutorial, you learn how to access Azure Storage for an app by using managed identities.
++++++ Last updated : 04/25/2021++
+ms.devlang: csharp, javascript
+
+#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
++
+# Tutorial: Access Azure Storage from a web app
+
+Learn how to access Azure Storage for a web app (not a signed-in user) running on Azure App Service by using managed identities.
++
+You want to add access to the Azure data plane (Azure Storage, Azure SQL Database, Azure Key Vault, or other services) from your web app. You could use a shared key, but then you have to worry about operational security of who can create, deploy, and manage the secret. It's also possible that the key could be checked into GitHub, which hackers know how to scan for. A safer way to give your web app access to data is to use [managed identities](../managed-identities-azure-resources/overview.md).
+
+A managed identity from Azure Active Directory (Azure AD) allows App Service to access resources through role-based access control (RBAC), without requiring app credentials. After assigning a managed identity to your web app, Azure takes care of the creation and distribution of a certificate. You don't have to worry about managing secrets or app credentials.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a system-assigned managed identity on a web app.
+> * Create a storage account and an Azure Blob Storage container.
+> * Access storage from a web app by using managed identities.
++
+## Prerequisites
+
+* A web application running on Azure App Service that has the [App Service authentication/authorization module enabled](multi-service-web-app-authentication-app-service.md).
+
+## Enable managed identity on an app
+
+If you created and published your web app through Visual Studio, managed identity was enabled on your app for you. In your app service, select **Identity** in the left pane, and then select **System assigned**. Verify that **Status** is set to **On**. If not, select **Save** and then select **Yes** to enable the system-assigned managed identity. When the managed identity is enabled, the status is set to **On** and the object ID is available.
++
+This step creates a new object ID, different than the app ID created in the **Authentication/Authorization** pane. Copy the object ID of the system-assigned managed identity. You'll need it later.
+
+## Create a storage account and Blob Storage container
+
+Now you're ready to create a storage account and Blob Storage container.
+
+Every storage account must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you can either create a new resource group or use an existing one. This tutorial uses the existing resource group that contains your web app.
+
+A general-purpose v2 storage account provides access to all of the Azure Storage services: blobs, files, queues, and tables.
+
+Blobs in Azure Storage are organized into containers. Before you can upload a blob later in this tutorial, you must first create a container.
+
+# [Portal](#tab/azure-portal)
+
+To create a general-purpose v2 storage account in the Azure portal, follow these steps.
+
+1. On the Azure portal menu, select **All services**. In the list of resources, enter **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
+
+1. In the **Storage Accounts** window that appears, select **Add**.
+
+1. Select the subscription in which to create the storage account.
+
+1. Under the **Resource group** field, select the resource group that contains your web app from the drop-down menu.
+
+1. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
+
+1. Select a location for your storage account, or use the default location.
+
+1. Leave these fields set to their default values:
+
+ |Field|Value|
+ |--|--|
+ |Deployment model|Resource Manager|
+ |Performance|Standard|
+ |Account kind|StorageV2 (general-purpose v2)|
+ |Replication|Read-access geo-redundant storage (RA-GRS)|
+ |Access tier|Hot|
+
+1. Select **Review + Create** to review your storage account settings and create the account.
+
+1. Select **Create**.
+
+To create a Blob Storage container in Azure Storage, follow these steps.
+
+1. Go to your new storage account in the Azure portal.
+
+1. In the left menu for the storage account, scroll to the **Blob service** section, and then select **Containers**.
+
+1. Select the **+ Container** button.
+
+1. Type a name for your new container. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
+
+1. Set the level of public access to the container. The default level is **Private (no anonymous access)**.
+
+1. Select **OK** to create the container.
+
+# [PowerShell](#tab/azure-powershell)
+
+To create a general-purpose v2 storage account and Blob Storage container, run the following script. Specify the name of the resource group that contains your web app. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
+
+Specify the location for your storage account. To see a list of locations valid for your subscription, run ```Get-AzLocation | select Location```. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
+
+Remember to replace placeholder values in angle brackets with your own values.
+
+```powershell
+Connect-AzAccount
+
+$resourceGroup = "securewebappresourcegroup"
+$location = "<location>"
+$storageName="securewebappstorage"
+$containerName = "securewebappblobcontainer"
+
+$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup `
+ -Name $storageName `
+ -Location $location `
+ -SkuName Standard_RAGRS `
+ -Kind StorageV2
+
+$ctx = $storageAccount.Context
+
+New-AzStorageContainer -Name $containerName -Context $ctx -Permission blob
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To create a general-purpose v2 storage account and Blob Storage container, run the following script. Specify the name of the resource group that contains your web app. Enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length and can include numbers and lowercase letters only.
+
+Specify the location for your storage account. The container name must be lowercase, must start with a letter or number, and can include only letters, numbers, and the dash (-) character.
+
+The following example uses your Azure AD account to authorize the operation to create the container. Before you create the container, assign the Storage Blob Data Contributor role to yourself. Even if you're the account owner, you need explicit permissions to perform data operations against the storage account.
+
+Remember to replace placeholder values in angle brackets with your own values.
+
+```azurecli-interactive
+az login
+
+az storage account create \
+ --name securewebappstorage \
+ --resource-group securewebappresourcegroup \
+ --location <location> \
+ --sku Standard_ZRS \
+ --encryption-services blob
+
+storageId=$(az storage account show -n securewebappstorage -g securewebappresourcegroup --query id --out tsv)
+
+az ad signed-in-user show --query objectId -o tsv | az role assignment create \
+ --role "Storage Blob Data Contributor" \
+ --assignee @- \
+ --scope $storageId
+
+az storage container create \
+ --account-name securewebappstorage \
+ --name securewebappblobcontainer \
+ --auth-mode login
+```
+++
+## Grant access to the storage account
+
+You need to grant your web app access to the storage account before you can create, read, or delete blobs. In a previous step, you configured the web app running on App Service with a managed identity. Using Azure RBAC, you can give the managed identity access to another resource, just like any security principal. The Storage Blob Data Contributor role gives the web app (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data.
+
+# [Portal](#tab/azure-portal)
+
+In the [Azure portal](https://portal.azure.com), go into your storage account to grant your web app access. Select **Access control (IAM)** in the left pane, and then select **Role assignments**. You'll see a list of who has access to the storage account. Now you want to add a role assignment to a robot, the app service that needs access to the storage account. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
+
+Assign the **Storage Blob Data Contributor** role to the **App Service** at subscription scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+Your web app now has access to your storage account.
+
+# [PowerShell](#tab/azure-powershell)
+
+Run the following script to assign your web app (represented by a system-assigned managed identity) the Storage Blob Data Contributor role on your storage account.
+
+```powershell
+$resourceGroup = "securewebappresourcegroup"
+$webAppName="SecureWebApp20201102125811"
+$storageName="securewebappstorage"
+
+$spID = (Get-AzWebApp -ResourceGroupName $resourceGroup -Name $webAppName).identity.principalid
+$storageId= (Get-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageName).Id
+New-AzRoleAssignment -ObjectId $spID -RoleDefinitionName "Storage Blob Data Contributor" -Scope $storageId
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the following script to assign your web app (represented by a system-assigned managed identity) the Storage Blob Data Contributor role on your storage account.
+
+```azurecli-interactive
+spID=$(az resource list -n SecureWebApp20201102125811 --query [*].identity.principalId --out tsv)
+
+storageId=$(az storage account show -n securewebappstorage -g securewebappresourcegroup --query id --out tsv)
+
+az role assignment create --assignee $spID --role 'Storage Blob Data Contributor' --scope $storageId
+```
+++
+## Access Blob Storage
+# [C#](#tab/programming-language-csharp)
+The [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class is used to get a token credential for your code to authorize requests to Azure Storage. Create an instance of the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class, which uses the managed identity to fetch tokens and attach them to the service client. The following code example gets the authenticated token credential and uses it to create a service client object, which uploads a new blob.
+
+To see this code as part of a sample application, see the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-dotnet-storage-graphapi/tree/main/1-WebApp-storage-managed-identity).
+
+### Install client library packages
+
+Install the [Blob Storage NuGet package](https://www.nuget.org/packages/Azure.Storage.Blobs/) to work with Blob Storage and the [Azure Identity client library for .NET NuGet package](https://www.nuget.org/packages/Azure.Identity/) to authenticate with Azure AD credentials. Install the client libraries by using the .NET Core command-line interface or the Package Manager Console in Visual Studio.
+
+#### .NET Core command-line
+
+Open a command line, and switch to the directory that contains your project file.
+
+Run the install commands.
+
+```dotnetcli
+dotnet add package Azure.Storage.Blobs
+
+dotnet add package Azure.Identity
+```
+
+#### Package Manager Console
+Open the project or solution in Visual Studio, and open the console by using the **Tools** > **NuGet Package Manager** > **Package Manager Console** command.
+
+Run the install commands.
+```powershell
+Install-Package Azure.Storage.Blobs
+
+Install-Package Azure.Identity
+```
+
+### Example
+
+```csharp
+using System;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+using System.Text;
+using System.IO;
+using Azure.Identity;
+
+// Some code omitted for brevity.
+
+static public async Task UploadBlob(string accountName, string containerName, string blobName, string blobContents)
+{
+ // Construct the blob container endpoint from the arguments.
+ string containerEndpoint = string.Format("https://{0}.blob.core.windows.net/{1}",
+ accountName,
+ containerName);
+
+ // Get a credential and create a client object for the blob container.
+ BlobContainerClient containerClient = new BlobContainerClient(new Uri(containerEndpoint),
+ new DefaultAzureCredential());
+
+ try
+ {
+ // Create the container if it does not exist.
+ await containerClient.CreateIfNotExistsAsync();
+
+ // Upload text to a new block blob.
+ byte[] byteArray = Encoding.ASCII.GetBytes(blobContents);
+
+ using (MemoryStream stream = new MemoryStream(byteArray))
+ {
+ await containerClient.UploadBlobAsync(blobName, stream);
+ }
+ }
+    catch (Exception)
+    {
+        // Rethrow with "throw;" to preserve the original stack trace.
+        throw;
+    }
+}
+```
+
+# [Node.js](#tab/programming-language-nodejs)
+The `DefaultAzureCredential` class from [@azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) package is used to get a token credential for your code to authorize requests to Azure Storage. The `BlobServiceClient` class from [@azure/storage-blob](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/storage/storage-blob) package is used to upload a new blob to storage. Create an instance of the `DefaultAzureCredential` class, which uses the managed identity to fetch tokens and attach them to the blob service client. The following code example gets the authenticated token credential and uses it to create a service client object, which uploads a new blob.
+
+To see this code as part of a sample application, see *StorageHelper.js* in the [sample on GitHub](https://github.com/Azure-Samples/ms-identity-easyauth-nodejs-storage-graphapi/tree/main/1-WebApp-storage-managed-identity).
+
+### Example
+
+```nodejs
+const { DefaultAzureCredential } = require("@azure/identity");
+const { BlobServiceClient } = require("@azure/storage-blob");
+const defaultAzureCredential = new DefaultAzureCredential();
+
+// Some code omitted for brevity.
+
+async function uploadBlob(accountName, containerName, blobName, blobContents) {
+ const blobServiceClient = new BlobServiceClient(
+ `https://${accountName}.blob.core.windows.net`,
+ defaultAzureCredential
+ );
+
+ const containerClient = blobServiceClient.getContainerClient(containerName);
+
+ try {
+ await containerClient.createIfNotExists();
+ const blockBlobClient = containerClient.getBlockBlobClient(blobName);
+ const uploadBlobResponse = await blockBlobClient.upload(blobContents, blobContents.length);
+ console.log(`Upload block blob ${blobName} successfully`, uploadBlobResponse.requestId);
+ } catch (error) {
+ console.log(error);
+ }
+}
+```
++
+## Clean up resources
+
+If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App Service accesses Microsoft Graph on behalf of the user](multi-service-web-app-access-microsoft-graph-as-user.md)
active-directory Multi Service Web App Authentication App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-authentication-app-service.md
+
+ Title: Tutorial - Add authentication to a web app on Azure App Service | Azure
+description: In this tutorial, you learn how to enable authentication and authorization for a web app running on Azure App Service. Limit access to the web app to users in your organization.
+++++++ Last updated : 04/25/2022+++
+#Customer intent: As an application developer, enable authentication and authorization for a web app running on Azure App Service.
++
+# Tutorial: Add authentication to your web app running on Azure App Service
+
+Learn how to enable authentication for your web app running on Azure App Service and limit access to users in your organization.
++
+App Service provides built-in authentication and authorization support, so you can sign in users and access data by writing minimal or no code in your web app. Using the App Service authentication/authorization module isn't required, but helps simplify authentication and authorization for your app. This article shows how to secure your web app with the App Service authentication/authorization module by using Azure Active Directory (Azure AD) as the identity provider.
+
+The authentication/authorization module is enabled and configured through the Azure portal and app settings. No SDKs, specific languages, or changes to application code are required. A variety of identity providers are supported, including Azure AD, Microsoft account, Facebook, Google, and Twitter. When the authentication/authorization module is enabled, every incoming HTTP request passes through it before being handled by app code. To learn more, see [Authentication and authorization in Azure App Service](/azure/app-service/overview-authentication-authorization.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Configure authentication for the web app.
+> * Limit access to the web app to users in your organization.
+
+## Prerequisites
++
+## Create and publish a web app on App Service
+
+For this tutorial, you need a web app deployed to App Service. You can use an existing web app, or you can follow one of the [ASP.NET Core](/azure/app-service/quickstart-dotnetcore), [Node.js](/azure/app-service/quickstart-nodejs), [Python](/azure/app-service/quickstart-python), or [Java](/azure/app-service/quickstart-java) quickstarts to create and publish a new web app to App Service.
+
+Whether you use an existing web app or create a new one, take note of the following:
+
+- web app name
+- name of the resource group that the web app is deployed to
+
+You need these names throughout this tutorial.
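+
+If you already have several apps deployed and want to look up these names quickly, the Azure CLI can list them (a convenience step; the tutorial itself only requires the two names):
+
+```azurecli
+# List web app names and their resource groups in the current subscription.
+az webapp list --query "[].{name:name, resourceGroup:resourceGroup}" --output table
+```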
+
+## Configure authentication and authorization
+
+You now have a web app running on App Service. Next, you enable authentication and authorization for the web app. You use Azure AD as the identity provider. For more information, see [Configure Azure AD authentication for your App Service application](/azure/app-service/configure-authentication-provider-aad.md).
+
+In the [Azure portal](https://portal.azure.com) menu, select **Resource groups**, or search for and select **Resource groups** from any page.
+
+In **Resource groups**, find and select your resource group. In **Overview**, select your app's management page.
++
+On your app's left menu, select **Authentication**, and then click **Add identity provider**.
+
+In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
+
+For **App registration** > **App registration type**, select **Create new app registration**.
+
+For **App registration** > **Supported account types**, select **Current tenant-single tenant**.
+
+In the **App Service authentication settings** section, leave **Authentication** set to **Require authentication** and **Unauthenticated requests** set to **HTTP 302 Found redirect: recommended for websites**.
+
+At the bottom of the **Add an identity provider** page, click **Add** to enable authentication for your web app.
++
+You now have an app that's secured by the App Service authentication and authorization.
+
+> [!NOTE]
+> To allow accounts from other tenants, change the **Issuer URL** to `https://login.microsoftonline.com/common/v2.0` by editing your identity provider in the **Authentication** blade.
+>
+
+## Verify limited access to the web app
+
+When you enabled the App Service authentication/authorization module, an app registration was created in your Azure AD tenant. The app registration has the same display name as your web app. To check the settings, select **Azure Active Directory** from the portal menu, and select **App registrations**. Select the app registration that was created. In the overview, verify that **Supported account types** is set to **My organization only**.
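+
+You can also check the same setting from the Azure CLI; `AzureADMyOrg` corresponds to **My organization only** (the `signInAudience` property name assumes a recent Azure CLI version):
+
+```azurecli
+# Show the sign-in audience of the app registration created for the web app.
+az ad app list --display-name <web-app-name> \
+    --query "[].{displayName:displayName, signInAudience:signInAudience}" --output table
+```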
++
+To verify that access to your app is limited to users in your organization, start a browser in incognito or private mode and go to `https://<app-name>.azurewebsites.net`. You should be directed to a secured sign-in page, verifying that unauthenticated users aren't allowed access to the site. Sign in as a user in your organization to gain access to the site. You can also start up a new browser and try to sign in by using a personal account to verify that users outside the organization don't have access.
+
+## Clean up resources
+
+If you're finished with this tutorial and no longer need the web app or associated resources, [clean up the resources you created](multi-service-web-app-clean-up-resources.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [App service accesses storage](multi-service-web-app-access-storage.md)
active-directory Multi Service Web App Clean Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-clean-up-resources.md
+
+ Title: Tutorial - Clean up resources | Azure
+description: In this tutorial, you learn how to clean up the Azure resources allocated while creating the web app.
+++++++ Last updated : 04/25/2022+++
+#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app using managed identities.
++
+# Tutorial: Clean up resources
+
+If you completed all the steps in this multipart tutorial, you created an app service, app service hosting plan, and a storage account in a resource group. You also created an app registration in Azure Active Directory. When no longer needed, delete these resources and app registration so that you don't continue to accrue charges.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Delete the Azure resources created while following the tutorial.
+
+## Delete the resource group
+
+In the [Azure portal](https://portal.azure.com), select **Resource groups** from the portal menu and select the resource group that contains your app service and app service plan.
+
+Select **Delete resource group** to delete the resource group and all the resources.
++
+This operation might take several minutes to complete.
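+
+If you prefer the Azure CLI, a single command along these lines removes the resource group and everything in it; substitute your own resource group name:
+
+```azurecli
+# Delete the resource group, including the app service, hosting plan, and storage account it contains.
+az group delete --name securewebappresourcegroup --yes
+```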
+
+## Delete the app registration
+
+From the portal menu, select **Azure Active Directory** > **App registrations**. Then select the application you created.
+
+In the app registration overview, select **Delete**.
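+
+To remove the app registration from the command line instead, you can use a command like the following, where `<application-client-id>` is the application (client) ID shown on the app registration's **Overview** page:
+
+```azurecli
+# Delete the app registration created by the App Service authentication module.
+az ad app delete --id <application-client-id>
+```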
+
active-directory Multi Service Web App Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/multi-service-web-app-overview.md
+
+ Title: Tutorial - Build a secure web app on Azure App Service | Azure
+description: In this tutorial, you learn how to build a web app by using Azure App Service, sign in users to the web app, call Azure Storage, and call Microsoft Graph.
+++++++ Last updated : 04/25/2022+++
+#Customer intent: As an application developer, I want to learn how to secure access to a web app running on Azure App Service.
++
+# Tutorial: Sign in users in App Service and access storage and Microsoft Graph
+
+This tutorial describes a common application scenario: an internal employee dashboard web application. Your web app will be hosted in Azure App Service and needs to connect to Microsoft Graph and Azure Storage in order to get data to visualize in the dashboard. In some cases, the web app needs to get data that only the signed-in user can access. In other cases, the web app needs to access data under the identity of the app itself, and not the signed-in user. Access to the web application needs to be restricted to users in your organization.
+
+The goal of this tutorial is *not* to show how to build the dashboard itself or visualize data. Rather, the tutorial focuses on the identity-related aspects of the described scenario. Learn how to:
+
+- [Configure authentication for a web app](multi-service-web-app-authentication-app-service.md) and limit access to users in your organization. See A in the diagram.
+- [Securely access Azure Storage](multi-service-web-app-access-storage.md) from the web application using managed identities. See B in the diagram.
+- Access data in Microsoft Graph from the web application (see C in the diagram):
+  - [as the signed-in user](multi-service-web-app-access-microsoft-graph-as-user.md)
+  - [as the web application](multi-service-web-app-access-microsoft-graph-as-app.md) using managed identities
+- [Clean up the resources](multi-service-web-app-clean-up-resources.md) you created for this tutorial.
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Configure authentication for a web app](multi-service-web-app-authentication-app-service.md)
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md
Previously updated : 03/28/2022 Last updated : 05/09/2022
Here are some of the limits and constraints for custom security attributes.
> | Predefined values per attribute definition | 100 | |
> | Attribute value length | 64 | Unicode characters |
> | Attribute values assigned per object | 50 | Values can be distributed across single and multi-valued attributes.<br/>Example: 5 attributes with 10 values each or 50 attributes with 1 value each |
-> | Characters not allowed for:<br/>Attribute set name<br/>Attribute name | ``<space> ` ~ ! @ # $ % ^ & * ( ) _ - + = { [ } ] \| \ : ; " ' < , > . ? /`` | Attribute set name and attribute name cannot start with a number |
-> | Characters not allowed for:<br/>Attribute values | `# % & * + \ : " / < > ?` | |
+> | Special characters **not** allowed for:<br/>Attribute set name<br/>Attribute name | ``<space> ` ~ ! @ # $ % ^ & * ( ) _ - + = { [ } ] \| \ : ; " ' < , > . ? /`` | Attribute set name and attribute name cannot start with a number |
+> | Special characters allowed for attribute values | All special characters | |
+> | Special characters allowed for attribute values when used with blob index tags | `<space> + - . : = _ /` | If you plan to use [attribute values with blob index tags](../../role-based-access-control/conditions-custom-security-attributes.md), these are the only special characters allowed for blob index tags. For more information, see [Setting blob index tags](../../storage/blobs/storage-manage-find-blobs.md#setting-blob-index-tags). |
## Custom security attribute roles
active-directory Access Reviews Application Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-application-preparation.md
+
+ Title: Preparing for an access review of users' access to an application - Azure AD
+description: Planning for a successful access reviews campaign for a particular application starts with understanding how to model access for that application in Azure AD.
+
+documentationCenter: ''
++
+editor:
++
+ na
++ Last updated : 04/25/2022+++++
+#Customer intent: As an IT admin, I want to ensure access to specific applications is governed, by setting up access reviews for those applications.
+++
+# Prepare for an access review of users' access to an application
+
+[Azure Active Directory (Azure AD) Identity Governance](identity-governance-overview.md) allows you to balance your organization's need for security and employee productivity with the right processes and visibility. It provides you with capabilities to ensure that the right people have the right access to the right resources.
+
+Organizations with compliance requirements or risk management plans will have sensitive or business-critical applications. The application sensitivity may be based on its purpose or the data it contains, such as financial information or personal information of the organization's customers. For those applications, only a subset of all the users in the organization will typically be authorized to have access, and access should only be permitted based on documented business requirements. Azure AD can be integrated with many popular SaaS applications, on-premises applications, and applications that your organization has developed, using [standard protocols](../fundamentals/auth-sync-overview.md) and API interfaces. Through these interfaces, Azure AD can be the authoritative source to control who has access to those applications. As you integrate your applications with Azure AD, you can then use Azure AD access reviews to recertify the users who have access to those applications, and remove access from users who no longer need it.
+
+## Prerequisites for reviewing access
+
+To use Azure AD for an access review of access to an application, you must have one of the following licenses in your tenant:
+
+* Azure AD Premium P2
+* Enterprise Mobility + Security (EMS) E5 license
+
+While the access reviews feature doesn't require users to have those licenses assigned to them, you'll need at least as many licenses in your tenant as the number of member (non-guest) users who will be configured as reviewers.
+
+Also, while not required for reviewing access to an application, we recommend regularly reviewing the membership of privileged directory roles that can control other users' access to all applications. Administrators in the `Global Administrator`, `Identity Governance Administrator`, `User Administrator`, `Application Administrator`, `Cloud Application Administrator` and `Privileged Role Administrator` roles can make changes to users and their application role assignments, so ensure that [access reviews of these directory roles](../privileged-identity-management/pim-create-azure-ad-roles-and-resource-roles-review.md) have been scheduled.
+
+## Determine how the application is integrated with Azure AD
+
+For Azure AD access reviews to be used for an application, the application must first be integrated with Azure AD. An application is considered integrated with Azure AD when one of two requirements is met:
+
+* The application relies upon Azure AD for federated SSO, and Azure AD controls authentication token issuance. If Azure AD is the only identity provider for the application, then only users who are assigned to one of the application's roles in Azure AD are able to sign into the application. Those users that are denied by a review lose their application role assignment and can no longer get a new token to sign in to the application.
+* The application relies upon user or group lists that are provided to the application by Azure AD. This fulfillment could be done through a provisioning protocol such as SCIM or by the application querying Azure AD via Microsoft Graph. Those users that are denied by a review lose their application role assignment or group membership, and when those changes are made available to the application, then the denied users will no longer have access.
+
+If neither of those criteria is met, because the application doesn't rely upon Azure AD, access reviews can still be used; however, there are some limitations. Users who aren't in your Azure AD, or who aren't assigned to the application roles in Azure AD, won't be included in the review. Also, the changes to remove denied users can't be automatically sent to the application if the application doesn't support a provisioning protocol. The organization must instead have a process to send the results of a completed review to the application.
+
+To permit a wide variety of applications and IT requirements to be addressed with Azure AD, there are multiple patterns for how an application can be integrated with Azure AD. The following flowchart illustrates how to select from three integration patterns, A-C, that are appropriate for applications for use with identity governance. Knowing which pattern is being used for a particular application helps you configure the appropriate resources in Azure AD to be ready for the access review.
+
+ ![Flowchart for application integration patterns](./media/access-reviews-application-preparation/app-integration-patterns-flowchart.png)
+
+|Pattern|Application integration pattern|Steps to prepare for an access review|
+|:---|:---|:---|
+|A| The application supports federated SSO, Azure AD is the only identity provider, and the application doesn't rely upon group or role claims. | In this pattern, you'll configure the application to require individual application role assignments, and assign users to the application. Then to perform the review, you'll create a single access review for the application, of the users assigned to this application role. When the review completes, if a user was denied, they'll be removed from the application role. Azure AD will then no longer issue federation tokens for that user, and the user will be unable to sign in to that application.|
+|B| The application uses group claims in addition to application role assignments. | An application may use Azure AD group membership, distinct from application roles, to express finer-grained access. Here, based on your business requirements, you can choose either to review the users who have application role assignments or to review the users who have group memberships. If the groups don't provide comprehensive access coverage, in particular if users may have access to the application even when they aren't a member of those groups, then we recommend reviewing the application role assignments, as in pattern A above.|
+|C| The application doesn't rely solely on Azure AD for federated SSO, but does support provisioning via SCIM, or via updates to a SQL table of users or an LDAP directory. | In this pattern, you'll configure Azure AD to provision the users with application role assignments to the application's database or directory, update the application role assignments in Azure AD with a list of the users who currently have access, and then create a single access review of the application role assignments.|
+
+### Other options
+
+The integration patterns listed above are applicable to third-party SaaS applications, or applications that have been developed by or for your organization.
+
+* Some Microsoft Online Services, such as Exchange Online, use licenses. While users' licenses can't be reviewed directly, if you're using group-based license assignments with groups that have assigned users, you can review the memberships of those groups instead.
+* Some applications may use delegated user consent to control access to Microsoft Graph or other resources. As consents by each user aren't controlled by an approval process, consents aren't reviewable in Azure AD. Instead, you can review who is able to connect to the application through Conditional Access policies, which can be based on application role assignments or group memberships.
+* If the application doesn't support federation or provisioning protocols, then you'll need a process for manually applying the results when a review completes. For an application that only supports password SSO integration, if an application assignment is removed when a review completes, then the application won't show up on the *myapps* page for the user, but it won't prevent a user who already knows the password from being able to continue to sign into the application. Please [ask the SaaS vendor to onboard to the app gallery](../manage-apps/v2-howto-app-gallery-listing.md) for federation or provisioning by updating their application to support a standard protocol.
+
+## Check the application and groups are ready for the review
+
+Now that you've identified the integration pattern for the application, check that the application as represented in Azure AD is ready for review.
+
+1. In the Azure portal, click **Azure Active Directory**, click **Enterprise Applications**, and check whether your application is on the [list of enterprise applications](../manage-apps/view-applications-portal.md) in your Azure AD tenant.
+1. If the application isn't already listed, then check whether it's available in the [application gallery](../manage-apps/overview-application-gallery.md) of applications that can be integrated for federated SSO or provisioning. If it's in the gallery, then use the [tutorials](../saas-apps/tutorial-list.md) to configure the application for federation, and if it supports provisioning, also [configure the application](../app-provisioning/configure-automatic-user-provisioning-portal.md) for provisioning.
+1. Once the application is in the list of enterprise applications in your tenant, select the application from the list.
+1. Change to the **Properties** tab. Verify that the **User assignment required?** option is set to **Yes**. If it's set to **No**, all users in your directory, including external identities, can access the application, and you can't review access to the application.
+
+ ![Screenshot that shows planning app assignments.](./media/deploy-access-review/6-plan-applications-assignment-required.png)
+
+1. Change to the **Roles and administrators** tab. This tab displays the administrative roles that give rights to control the representation of the application in Azure AD, not the access rights in the application. For each administrative role that has permissions to change the application integration or assignments, and that has users assigned to it, ensure that only authorized users are in that role.
+
+1. Change to the **Provisioning** tab. If automatic provisioning isn't configured, then Azure AD won't have a way to notify the application when a user's access is removed if denied during the review. Provisioning might not be necessary for some integration patterns, if the application is federated and solely relies upon Azure AD as its identity provider. However, if your application integration is pattern C, and the application doesn't support federated SSO with Azure AD as its only identity provider, then you'll need to configure provisioning from Azure AD to the application. Provisioning will be necessary so that Azure AD can automatically remove the reviewed users from the application when a review completes, and this removal step can be done through a change sent from Azure AD to the application through SCIM, LDAP or SQL.
+
+ * If this is a gallery application that supports provisioning, [configure the application for provisioning](../app-provisioning/configure-automatic-user-provisioning-portal.md).
+ * If the application is a cloud application and supports SCIM, configure [user provisioning with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md).
+ * If the application is an on-premises application and supports SCIM, configure an application with the [provisioning agent for on-premises SCIM-based apps](../app-provisioning/on-premises-scim-provisioning.md).
+ * If the application relies upon a SQL database, configure an application with the [provisioning agent for on-premises SQL-based applications](../app-provisioning/on-premises-sql-connector-configure.md).
+ * If the application relies upon another LDAP directory, configure an application with the [provisioning agent for on-premises LDAP-based applications](../app-provisioning/on-premises-ldap-connector-configure.md).
+
+1. If provisioning is configured, then click on **Edit Attribute Mappings**, expand the Mapping section and click on **Provision Azure Active Directory Users**. Check that in the list of attribute mappings, there is a mapping for `isSoftDeleted` to the attribute in the application's data store that you would like to set to false when a user loses access. If this mapping isn't present, then Azure AD will not notify the application when a user has gone out of scope, as described in [how provisioning works](../app-provisioning/how-provisioning-works.md).
+1. If the application supports federated SSO, then change to the **Conditional Access** tab. Inspect the enabled policies for this application. If there are enabled policies that block access, have users assigned to them, and have no other conditions, then those users may already be blocked from getting federated SSO to the application.
+
+1. Change to the **Users and groups** tab. This list contains all the users who are assigned to the application in Azure AD. If the list is empty, then a review of the application will complete immediately, since there isn't any task for the reviewer to perform.
+1. If your application is integrated with pattern C, then you'll need to confirm that the users in this list are the same as those in the application's internal data store, prior to starting the review. Azure AD does not automatically import the users or their access rights from an application, but you can [assign users to an application role via PowerShell](../manage-apps/assign-user-or-group-access-portal.md) (see the PowerShell sketch after this list).
+1. Check whether all users are assigned to the same application role, such as **User**. If users are assigned to multiple roles, then when you create an access review of the application, all assignments to all of the application's roles will be reviewed together.
+
+1. Check the list of directory objects assigned to the roles to confirm that there are no groups assigned to the application roles. It's possible to review this application if there is a group assigned to a role; however, a user who is a member of the group assigned to the role, and whose access was denied, won't be automatically removed from the group. We recommend first converting the application to have direct user assignments, rather than members of groups, so that a user whose access is denied during the access review can have their application role assignment removed automatically.
+
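+The following is a minimal sketch of that PowerShell assignment, assuming the AzureAD module and an existing `Connect-AzureAD` session; the user and application names are placeholders you'd replace with your own values.
+
+```powershell
+# Assign one user to an application role so that the Azure AD assignment list
+# matches the application's own data store before the review starts.
+$userUpn = "user@contoso.com"                    # placeholder user
+$appDisplayName = "<application display name>"   # placeholder application
+
+$user = Get-AzureADUser -ObjectId $userUpn
+$servicePrincipal = Get-AzureADServicePrincipal -Filter "displayName eq '$appDisplayName'"
+
+# Use the default role (empty GUID) if the application defines no app roles;
+# otherwise pick the Id of the specific app role the user should hold.
+$appRoleId = [Guid]::Empty
+if ($servicePrincipal.AppRoles.Count -gt 0) { $appRoleId = $servicePrincipal.AppRoles[0].Id }
+
+New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
+    -PrincipalId $user.ObjectId `
+    -ResourceId $servicePrincipal.ObjectId `
+    -Id $appRoleId
+```
+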
+Next, if the application integration also requires one or more groups to be reviewed, as described in pattern B, then check each group is ready for review.
+
+1. In the Azure portal experience for Azure AD, click **Groups**, and then select the group from the list.
+1. On the **Overview** tab, verify that the **Membership type** is **Assigned**, and the **Source** is **Cloud**. If the application uses a dynamic group, or a group synchronized from on-premises, then those group memberships can't be changed in Azure AD. We recommend converting the application to use groups created in Azure AD with assigned memberships, and then copying the member users to the new group.
+1. Change to the **Roles and administrators** tab. This tab displays the administrative roles that give rights to control the representation of the group in Azure AD, not the access rights in the application. For each administrative role that allows changing group membership and has users assigned to it, ensure that only authorized users are in that role.
+1. Change to the **Members** tab. Verify that the members of the group are users, and that there are no non-user members or nested groups. If there are no members of a group when the review starts, the review of that group will complete immediately.
+1. Change to the **Owners** tab. Make sure that no unauthorized users are shown as owners. If you'll be asking the group owners to perform the access review of a group, then confirm that the group has one or more owners. You can also verify these group properties programmatically, as shown in the sketch after this list.
+
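+The following sketch covers those group checks with the AzureAD module; the group name is a placeholder and an existing `Connect-AzureAD` session is assumed.
+
+```powershell
+# Confirm the group's source, membership type, members, and owners before the review.
+$groupName = "<group display name>"   # placeholder group
+
+$group = Get-AzureADGroup -Filter "displayName eq '$groupName'"
+$group | Select-Object DisplayName, DirSyncEnabled    # DirSyncEnabled not set to True => cloud source
+
+# GroupTypes containing 'DynamicMembership' means the membership type is Dynamic, not Assigned.
+(Get-AzureADMSGroup -Id $group.ObjectId).GroupTypes
+
+# Members should be users only; owners should be the people you expect to review the group.
+Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true | Select-Object ObjectType, DisplayName
+Get-AzureADGroupOwner -ObjectId $group.ObjectId -All $true | Select-Object DisplayName, UserPrincipalName
+```
+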
+## Select appropriate reviewers
+
+When you create each access review, you can choose one or more reviewers. The reviewers carry out a review by approving users for continued access to a resource or removing their access.
+
+Typically a resource owner is responsible for performing a review. If you're creating a review of a group, as part of reviewing access for an application integrated in pattern B, then you can select the group owners as reviewers. As applications in Azure AD don't necessarily have an owner, selecting the application owner as a reviewer isn't an option. Instead, when creating the review, you can supply the names of the application owners to be the reviewers.
+
+You can also choose, when creating a review of a group or application, to have a [multi-stage review](create-access-review.md#create-a-multi-stage-access-review-preview). For example, you could select to have the manager of each assigned user perform the first stage of the review, and the resource owner the second stage. That way the resource owner can focus on the users who have already been approved by their manager.
+
+Before creating the reviews, check that you have at least as many Azure AD Premium P2 licenses in your tenant as there are member users who are assigned as reviewers. Also, check that all reviewers are active users with email addresses. When the access reviews start, each reviewer receives an email from Azure AD. If the reviewer doesn't have a mailbox, they won't receive the email when the review starts, nor any email reminders. And if they're blocked from signing in to Azure AD, they won't be able to perform the review.
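+
+As a lightweight check, the sketch below (AzureAD module, existing `Connect-AzureAD` session) reports whether the planned reviewers are enabled accounts with a mail address; the reviewer UPNs are placeholders.
+
+```powershell
+# Confirm that each planned reviewer is an active account with a mailbox address.
+$reviewerUpns = @("reviewer1@contoso.com", "reviewer2@contoso.com")   # placeholder reviewers
+
+foreach ($upn in $reviewerUpns) {
+    Get-AzureADUser -ObjectId $upn |
+        Select-Object UserPrincipalName, AccountEnabled, Mail, UserType
+}
+```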
+
+## Create the reviews
+
+Once you've identified the resources, the application and optionally one or more groups, based on the integration pattern, and who the reviewers should be, then you can configure Azure AD to start the reviews.
+
+1. For this step, you'll need to be in the `Global administrator` or `Identity Governance administrator` role.
+1. In patterns A and C, you'll create one access review, selecting the application. Follow the instructions in the guide for [creating an access review of groups or applications](create-access-review.md), to create the review of the application's role assignments.
+1. If your application is integrated with pattern B, use the same [guide](create-access-review.md) to create additional access reviews for each of the groups. A sketch of creating one such group review programmatically with Microsoft Graph follows this list.
+
+ > [!NOTE]
+ > If you create an access review and enable review decision helpers, then the decision helper will vary depending upon the resource being reviewed. If the resource is an application, recommendations are based on the 30-day interval since the user last signed in to the application. If the resource is a group, then the recommendations are based on when the user last signed in to any application in the tenant, not just the application using those groups.
+
+1. When the access reviews start, ask the reviewers to give input. By default, they each receive an email from Azure AD with a link to the access panel, where they [review membership in the groups or access to the application](perform-access-review.md).
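+
+If you prefer to script review creation, the following rough sketch uses the Microsoft Graph PowerShell SDK (`Connect-MgGraph` with the `AccessReview.ReadWrite.All` delegated permission) to create a one-time, single-stage review of a group's members, similar to the pattern B case above. The group ID, reviewer ID, and dates are placeholders, and the request body is only an outline; check the Microsoft Graph reference for `accessReviewScheduleDefinition` before relying on this exact shape.
+
+```powershell
+# Create a one-time review of a group's transitive members via Microsoft Graph.
+$body = @{
+    displayName          = "Review of application group membership"
+    descriptionForAdmins = "Review the group used to grant access to the application"
+    scope = @{
+        "@odata.type" = "#microsoft.graph.accessReviewQueryScope"
+        query         = "/groups/<group-object-id>/transitiveMembers"
+        queryType     = "MicrosoftGraph"
+    }
+    reviewers = @(
+        @{ query = "/users/<reviewer-object-id>"; queryType = "MicrosoftGraph" }
+    )
+    settings = @{
+        mailNotificationsEnabled     = $true
+        reminderNotificationsEnabled = $true
+        defaultDecisionEnabled       = $false
+        defaultDecision              = "None"
+        autoApplyDecisionsEnabled    = $false
+        recommendationsEnabled       = $true
+        instanceDurationInDays       = 14
+        recurrence = @{
+            pattern = $null   # one-time review
+            range   = @{ type = "numbered"; startDate = "2022-06-01"; endDate = "2022-06-15" }
+        }
+    }
+}
+
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions" `
+    -Body ($body | ConvertTo-Json -Depth 10) -ContentType "application/json"
+```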
+
+## View the assignments that are updated when the reviews complete
+
+Once the reviews have started, you can monitor their progress, and update the approvers if needed, until the review completes. You can then confirm that the users whose access was denied by the reviewers have their access removed from the application.
+
+1. Monitor the access reviews, ensuring the reviewers are making selections to approve or deny users' need for continued access, until the [access review completes](complete-access-review.md).
+
+1. If auto-apply wasn't selected when the review was created, then you'll need to apply the review results when it completes.
+1. Wait for the status of the review to change to **Result applied**. You should expect to see denied users, if any, being removed from the group membership or application assignment in a few minutes.
+
+1. If you had previously configured provisioning of users to the application, then when the results are applied, Azure AD will begin deprovisioning denied users from the application. You can [monitor the process of deprovisioning users](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md). If provisioning indicates an error with the application, you can [download the provisioning log](../reports-monitoring/concept-provisioning-logs.md) to investigate if there was a problem with the application.
+
+1. If provisioning wasn't configured for your application, then you may need to separately copy the list of denied users to the application. For example, in access reviews for a Windows Server AD-managed group, use this [PowerShell sample script](https://github.com/microsoft/access-reviews-samples/tree/master/AzureADAccessReviewsOnPremises). The script outlines the required Microsoft Graph calls and exports the Windows Server AD PowerShell cmdlets to carry out the changes.
+
+1. If you wish, you can also download a [review history report](access-reviews-downloadable-review-history.md) of completed reviews.
+
+1. How long a user who has been denied continued access is able to continue to use a federated application will depend upon the application's own session lifetime, and on the access token lifetime. To learn more about controlling the lifetime of access tokens, see [configurable token lifetimes](../develop/active-directory-configurable-token-lifetimes.md).
+
+## Next steps
+
+* [Plan an Azure Active Directory access reviews deployment](deploy-access-reviews.md)
+* [Create an access review of a group or application](create-access-review.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
This article describes how to create one or more access reviews for group member
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
+If you are reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
+ ## Create a single-stage access review ### Scope
For more information, see [License requirements](access-reviews-overview.md#lice
> [!NOTE] > If you selected **All Microsoft 365 groups with guest users**, your only option is to review **Guest users only**.
-1. Or if you are conducting group membership review, you can create access reviews only for inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
+1. Or if you are conducting a group membership review, you can create access reviews for only the inactive users in the group (preview). In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only: those who haven't signed in to the tenant, either interactively or non-interactively. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
1. Select **Next: Reviews**.
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/deploy-access-reviews.md
To create access reviews for an application, set the **User assignment required?
![Screenshot that shows planning app assignments.](./media/deploy-access-review/6-plan-applications-assignment-required.png)
-Then [assign the users and groups](../manage-apps/assign-user-or-group-access-portal.md) that you want to have access.
+Then [assign the users and groups](../manage-apps/assign-user-or-group-access-portal.md) whose access you want to have reviewed.
+
+Read more about how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md).
### Reviewers for an application
active-directory Entitlement Management Access Package Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md
An access package enables you to do a one-time setup of resources and policies t
All access packages must be put in a container called a catalog. A catalog defines what resources you can add to your access package. If you don't specify a catalog, your access package will be put into the General catalog. Currently, you can't move an existing access package to a different catalog.
+An access package can be used to assign access to roles of multiple resources that are in the catalog. If you're an administrator or catalog owner, you can add resources to the catalog while creating an access package.
If you are an access package manager, you cannot add resources you own to a catalog. You are restricted to using the resources available in the catalog. If you need to add resources to a catalog, you can ask the catalog owner.
-All access packages must have at least one policy. Policies specify who can request the access package and also approval and lifecycle settings. When you create a new access package, you can create an initial policy for users in your directory, for users not in your directory, for administrator direct assignments only, or you can choose to create the policy later.
+All access packages must have at least one policy for users to be assigned to the access package. Policies specify who can request the access package and also approval and lifecycle settings. When you create a new access package, you can create an initial policy for users in your directory, for users not in your directory, for administrator direct assignments only, or you can choose to create the policy later.
![Create an access package](./media/entitlement-management-access-package-create/access-package-create.png)
Here are the high-level steps to create a new access package.
1. Select the catalog you want to create the access package in.
-1. Add resources from catalog to your access package.
+1. Add resource roles from resources in the catalog to your access package.
-1. Assign resource roles for each resource.
-
-1. Specify users that can request access.
+1. Specify an initial policy for users that can request access.
1. Specify any approval settings.
On the **Basics** tab, you give the access package a name and specify which cata
## Resource roles
-On the **Resource roles** tab, you select the resources to include in the access package. Users who request and receive the access package will receive all the resource roles in the access package.
+On the **Resource roles** tab, you select the resources to include in the access package. Users who request and receive the access package will receive all the resource roles, such as group membership, in the access package.
+
+If you're not sure which resource roles to include, you can skip adding resource roles while creating the access package, and then [add resource roles](entitlement-management-access-package-resources.md) after you've created the access package.
1. Click the resource type you want to add (**Groups and Teams**, **Applications**, or **SharePoint sites**).
On the **Resource roles** tab, you select the resources to include in the access
If you are a Global administrator, a User administrator, or catalog owner, you have the additional option of selecting resources you own that are not yet in the catalog. If you select resources not currently in the selected catalog, these resources will also be added to the catalog for other catalog administrators to build access packages with. To see all the resources that can be added to the catalog, check the **See all** check box at the top of the Select pane. If you only want to select resources that are currently in the selected catalog, leave the check box **See all** unchecked (default state).
-1. Once you have selected the resources, in the **Role** list, select the role you want users to be assigned for the resource.
+1. Once you've selected the resources, in the **Role** list, select the role you want users to be assigned for the resource. For more information on selecting the appropriate roles for a resource, read [add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
![Access package - Resource role selection](./media/entitlement-management-access-package-create/resource-roles-role.png)
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
This video provides an overview of how to change an access package.
## Check catalog for resources
-If you need to add resources to an access package, you should check whether the resources your need are available in the catalog. If you are an access package manager, you cannot add resources to a catalog, even if you own them. You are restricted to using the resources available in the catalog.
+If you need to add resources to an access package, you should check whether the resources you need are available in the access package's catalog. If you're an access package manager, you can't add resources to a catalog, even if you own them. You're restricted to using the resources available in the catalog.
**Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager
If you need to add resources to an access package, you should check whether the
![List of resources in a catalog](./media/entitlement-management-access-package-resources/catalog-resources.png)
+1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog).
+ 1. If you are an access package manager and you need to add resources to the catalog, you can ask the catalog owner to add them. ## Add resource roles
-A resource role is a collection of permissions associated with a resource. The way you make resources available for users to request is by adding resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites.
+A resource role is a collection of permissions associated with a resource. Resources can be made available for users to request if you add resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites. When a user receives an assignment to an access package, they'll be added to all the resource roles in the access package.
+
+If you don't want users to receive all of the roles, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
For more information, see [Compare groups](/office365/admin/create-groups/compar
You can have Azure AD automatically assign users access to an Azure AD enterprise application, including both SaaS applications and your organization's applications integrated with Azure AD, when a user is assigned an access package. For applications that integrate with Azure AD through federated single sign-on, Azure AD will issue federation tokens for users assigned to the application.
-Applications can have multiple roles. When adding an application to an access package, if that application has more than one role, you will need to specify the appropriate role for those users. If you are developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
+Applications can have multiple roles. When you add an application to an access package, if that application has more than one role, you'll need to specify the appropriate role for those users in each access package. If you're developing applications, you can read more about how those roles are added to your applications in [How to: Configure the role claim issued in the SAML token for enterprise applications](../develop/active-directory-enterprise-app-role-management.md).
+
+> [!NOTE]
+> If an application has multiple roles, and more than one role of that application is in an access package, then the user will receive all those roles. If instead you want users to only have some of the roles, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the roles.
Once an application role is part of an access package:
Azure AD can automatically assign users access to a SharePoint Online site or Sh
Any users with existing assignments to the access package will automatically be given access to this SharePoint Online site when it is added.
+## Add resource roles programmatically
+
+You can also add a resource role to an access package using Microsoft Graph. A user in an appropriate role, using an application that has the delegated `EntitlementManagement.ReadWrite.All` permission, can call the API to do the following (a sketch of these calls is shown after the list):
+
+1. [List the accessPackageResources in the catalog](/graph/api/entitlementmanagement-list-accesspackagecatalogs?tabs=http&view=graph-rest-beta&preserve-view=true) and [create an accessPackageResourceRequest](/graph/api/entitlementmanagement-post-accesspackageresourcerequests?tabs=http&view=graph-rest-beta&preserve-view=true) for any resources that are not yet in the catalog.
+1. [List the accessPackageResourceRoles](/graph/api/accesspackage-list-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) of each accessPackageResource in an accessPackageCatalog. This list of roles will then be used to select a role, when subsequently creating an accessPackageResourceRoleScope.
+1. [Create an accessPackageResourceRoleScope](/graph/api/accesspackage-post-accesspackageresourcerolescopes?tabs=http&view=graph-rest-beta&preserve-view=true) for each resource role needed in the access package.
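+
+The sketch below illustrates the third call, using the Microsoft Graph PowerShell SDK (`Connect-MgGraph` with the delegated `EntitlementManagement.ReadWrite.All` permission) against the beta endpoint to add a group's Member role to an access package. All IDs are placeholders, and the request body is an outline only; confirm the exact shape against the linked `accessPackageResourceRoleScope` reference.
+
+```powershell
+# Add a group's Member role (already in the catalog) as a resource role of an access package.
+$accessPackageId = "<access-package-id>"
+$resourceId      = "<access-package-resource-id>"   # id of the accessPackageResource in the catalog
+$groupId         = "<group-object-id>"
+
+$body = @{
+    accessPackageResourceRole = @{
+        originId     = "Member_$groupId"
+        displayName  = "Member"
+        originSystem = "AadGroup"
+        accessPackageResource = @{
+            id           = $resourceId
+            originId     = $groupId
+            originSystem = "AadGroup"
+        }
+    }
+    accessPackageResourceScope = @{
+        originId     = $groupId
+        originSystem = "AadGroup"
+    }
+}
+
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages/$accessPackageId/accessPackageResourceRoleScopes" `
+    -Body ($body | ConvertTo-Json -Depth 10) -ContentType "application/json"
+```
+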
+ ## Remove resource roles **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
In entitlement management, Azure AD will process bulk changes for assignment and
When you remove a member of a team, they are removed from the Microsoft 365 Group as well. Removal from the team's chat functionality might be delayed. For more information, see [Group membership](/microsoftteams/office-365-groups#group-membership).
+When a resource role is added to an access package by an admin, users who are in that resource role, but do not have assignments to the access package, will remain in the resource role, but won't be assigned to the access package. For example, if a user is a member of a group and then an access package is created and that group's member role is added to an access package, the user won't automatically receive an assignment to the access package.
+
+If you want the users to also be assigned to the access package, you can [directly assign users](entitlement-management-access-package-assignments.md#directly-assign-a-user) to an access package using the Azure portal, or in bulk via Graph or PowerShell. The users will then also receive access to the other resource roles in the access package. However, as those users already have access prior to being added to the access package, when their access package assignment is removed, they will remain in the resource role. For example, if a user was a member of a group, and was assigned to an access package that included group membership for that group as a resource role, and then that user's access package assignment was removed, the user would retain their group membership.
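+
+For example, the rough sketch below (Microsoft Graph PowerShell SDK, `Connect-MgGraph` with the delegated `EntitlementManagement.ReadWrite.All` permission) requests a direct assignment for one user on the beta endpoint; repeat it per user for bulk assignment. The IDs are placeholders, and the body is an outline only; check the `accessPackageAssignmentRequest` reference for the exact shape.
+
+```powershell
+# Directly assign one user to an access package as an administrator.
+$body = @{
+    requestType = "AdminAdd"
+    accessPackageAssignment = @{
+        targetId           = "<user-object-id>"
+        assignmentPolicyId = "<assignment-policy-id>"
+        accessPackageId    = "<access-package-id>"
+    }
+}
+
+Invoke-MgGraphRequest -Method POST `
+    -Uri "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentRequests" `
+    -Body ($body | ConvertTo-Json -Depth 10) -ContentType "application/json"
+```
+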
+ ## Next steps - [Create a basic group and add members using Azure Active Directory](../fundamentals/active-directory-groups-create-azure-portal.md)
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
This article shows you how to create and manage a catalog of resources and acces
## Create a catalog
-A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add more catalog owners.
+A catalog is a container of resources and access packages. You create a catalog when you want to group related resources and access packages. A user who has been delegated the [catalog creator](entitlement-management-delegate.md) role can create a catalog for resources that they own. Whoever creates the catalog becomes the first catalog owner. A catalog owner can add more catalog owners.
**Prerequisite roles:** Global administrator, Identity Governance administrator, User administrator, or Catalog creator
active-directory Manage User Access With Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-user-access-with-access-reviews.md
With Azure Active Directory (Azure AD), you can easily ensure that users have ap
For more information, see [License requirements](access-reviews-overview.md#license-requirements).
+If you are reviewing access to an application, then before creating the review, see the article on how to [prepare for an access review of users' access to an application](access-reviews-application-preparation.md) to ensure the application is integrated with Azure AD.
+ ## Create and perform an access review You can have one or more users as reviewers in an access review.
You can have one or more users as reviewers in an access review.
2. Decide whether to have each user review their own access or to have one or more users review everyone's access.
-3. In one of the following roles: a global administrator, user administrator, or (Preview) a M365 or AAD Security Group owner of the group to be reviewed, go to the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
+3. In one of the following roles: a global administrator, user administrator, or (Preview) an owner of a Microsoft 365 group or Azure AD security group to be reviewed, go to the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/).
4. Create the access review. For more information, see [Create an access review of groups or applications](create-access-review.md).
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure Resource Mover | [Move resources across regions (from resource group)](../../resource-mover/move-region-within-resource-group.md) |
| Azure Site Recovery | [Replicate machines with private endpoints](../../site-recovery/azure-to-azure-how-to-enable-replication-private-endpoints.md#enable-the-managed-identity-for-the-vault) |
| Azure Search | [Set up an indexer connection to a data source using a managed identity](../../search/search-howto-managed-identities-data-sources.md) |
+| Azure Service Bus | [Authenticate a managed identity with Azure Active Directory to access Azure Service Bus resources](../../service-bus-messaging/service-bus-managed-service-identity.md) |
| Azure Service Fabric | [Using Managed identities for Azure with Service Fabric](../../service-fabric/concepts-managed-identity.md) |
| Azure SignalR Service | [Managed identities for Azure SignalR Service](../../azure-signalr/howto-use-managed-identity.md) |
| Azure Spring Cloud | [How to enable system-assigned managed identity for Azure Spring Cloud application](../../spring-cloud/how-to-enable-system-assigned-managed-identity.md) |
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/11/2022 Last updated : 05/06/2022
-# Enable Container Storage Interface (CSI) drivers for Azure disks and Azure Files on Azure Kubernetes Service (AKS)
+# Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS)
The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Kubernetes. By adopting and using CSI, Azure Kubernetes Service (AKS) can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes without having to touch the core Kubernetes code and wait for its release cycles. The CSI storage driver support on AKS allows you to natively use:-- [*Azure disks*](azure-disk-csi.md), which can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce*, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.-- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
-> [!IMPORTANT]
-> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. CSI migration is also turned on starting from AKS 1.21, existing in-tree persistent volumes continue to function as they always have; however, behind the scenes Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
->
-> Please remove manual installed open source Azure Disk and Azure File CSI drivers before upgrading to AKS 1.21.
->
-> *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code versus the new CSI drivers, which are plug-ins.
-
-## Install CSI storage drivers on a new cluster with version < 1.21
-
-Create a new cluster that can use CSI storage drivers for Azure disks and Azure Files by using the following CLI commands. Use the `--aks-custom-headers` flag to set the `EnableAzureDiskFileCSIDriver` feature.
-
-Create an Azure resource group:
-
-```azurecli-interactive
-# Create an Azure resource group
-az group create --name myResourceGroup --location canadacentral
-```
-
-Create the AKS cluster with support for CSI storage drivers:
-
-```azurecli-interactive
-# Create an AKS-managed Azure AD cluster
-az aks create -g MyResourceGroup -n MyManagedCluster --network-plugin azure --aks-custom-headers EnableAzureDiskFileCSIDriver=true
-```
-
-If you want to create clusters in tree storage drivers instead of CSI storage drivers, you can do so by omitting the custom `--aks-custom-headers` parameter. Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default.
--
-Check how many Azure disk-based volumes you can attach to this node by running:
+- [**Azure disks**](azure-disk-csi.md) can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce* and are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
+- [**Azure Files**](azure-files-csi.md) can be used to mount an SMB 3.0/3.1 share backed by an Azure storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard storage backed by regular HDDs or Azure Premium storage backed by high-performance SSDs.
-```console
-$ kubectl get nodes
-aks-nodepool1-25371499-vmss000000
-aks-nodepool1-25371499-vmss000001
-aks-nodepool1-25371499-vmss000002
-
-$ echo $(kubectl get CSINode <NODE NAME> -o jsonpath="{.spec.drivers[1].allocatable.count}")
-8
-```
+> [!IMPORTANT]
+> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. Existing in-tree persistent volumes will continue to function. However, internally Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
+>
+> *In-tree drivers* refers to the current storage drivers that are part of the core Kubernetes code, as opposed to the new CSI drivers, which are plug-ins.
-## Install CSI storage drivers on an existing cluster with version < 1.21
+## Migrate custom in-tree storage classes to CSI
-## Migrating custom in-tree storage classes to CSI
-If you have created in-tree driver storage classes, those storage classes will continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x, while if you want to use CSI features (snapshotting etc.) you will need to carry out the migration.
+If you created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features, you'll need to perform the migration.
-Migration of these storage classes will involve deleting the existing storage classes, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
+Migrating these storage classes involves deleting the existing ones, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure disk storage, or **file.csi.azure.com** if using Azure Files.
-### Migrating Storage Class provisioner
+### Migrate storage class provisioner
-As an example for Azure disks:
+The following example YAML manifest shows the difference between the in-tree storage class definition configured to use Azure disks, and the equivalent using a CSI storage class definition. The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the value for `provisioner`.
-#### Original In-tree storage class definition
+#### Original in-tree storage class definition
```yaml kind: StorageClass
parameters:
storageAccountType: Premium_LRS ```
-The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner.
-
-## Migrating in-tree persistent volumes
+## Migrate in-tree persistent volumes
> [!IMPORTANT]
-> If your in-tree Persistent Volume reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
+> If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
+>
> ```console > $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}' > ```
-### Migrating in-tree Azure Disk persistent volumes
+### Migrate in-tree Azure disk persistent volumes
-If you have in-tree Azure Disk persistent volumes, get `diskURI` from in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes
+If you have in-tree Azure disk persistent volumes, get `diskURI` from in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
-### Migrating in-tree Azure File persistent volumes
+### Migrate in-tree Azure File persistent volumes
-If you have in-tree Azure File persistent volumes, get `secretName`, `shareName` from in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes
+If you have in-tree Azure File persistent volumes, get `secretName`, `shareName` from in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes.
## Next steps -- To use the CSI drive for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).-- To use the CSI drive for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).
+- To use the CSI driver for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).
+- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. - For more information on CSI migration, see [Kubernetes In-Tree to CSI Volume Migration][csi-migration-community].
azure-app-configuration Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-bicep.md
+
+ Title: Create an Azure App Configuration store using Bicep
+
+description: Learn how to create an Azure App Configuration store using Bicep.
++ Last updated : 05/06/2022+++++
+# Quickstart: Create an Azure App Configuration store using Bicep
+
+This quickstart describes how you can use Bicep to:
+
+- Deploy an App Configuration store.
+- Create key-values in an App Configuration store.
+- Read key-values in an App Configuration store.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/app-configuration-store-kv/).
++
+Two Azure resources are defined in the Bicep file:
+
+- [Microsoft.AppConfiguration/configurationStores](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores): create an App Configuration store.
+- [Microsoft.AppConfiguration/configurationStores/keyValues](/azure/templates/microsoft.appconfiguration/2020-07-01-preview/configurationstores/keyvalues): create a key-value inside the App Configuration store.
+
+With this Bicep file, we create one key with two different values, one of which has a unique label.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters configStoreName=<store-name>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -configStoreName "<store-name>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<store-name\>** with the name of the App Configuration store.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use Azure CLI or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+You can also use the Azure portal to list the resources:
+
+1. Sign in to the Azure portal.
+1. In the search box, enter *App Configuration*, then select **App Configuration** from the list.
+1. Select the newly created App Configuration resource.
+1. Under **Operations**, select **Configuration explorer**.
+1. Verify that two key-values exist.
+
+## Clean up resources
+
+When no longer needed, use Azure CLI or Azure PowerShell to delete the resource group and its resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+You can also use the Azure portal to delete the resource group:
+
+1. Navigate to your resource group.
+1. Select **Delete resource group**.
+1. A confirmation pane appears. Enter the resource group name and select **Delete**.
+
+## Next steps
+
+To learn about adding a feature flag and a Key Vault reference to an App Configuration store, check out the ARM template examples.
+
+- [app-configuration-store-ff](https://azure.microsoft.com/resources/templates/app-configuration-store-ff/)
+- [app-configuration-store-keyvaultref](https://azure.microsoft.com/resources/templates/app-configuration-store-keyvaultref/)
azure-arc Active Directory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md
Previously updated : 04/05/2022 Last updated : 04/15/2022 # Azure Arc-enabled SQL Managed Instance with Active Directory authentication
+Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication.
-This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible integration modes:
-- Bring your own keytab mode -- Automatic mode
+This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible AD integration modes:
+- Customer-managed keytab (CMK)
+- System-managed keytab (SMK)
-In Active Directory, the integration mode describes the management the keytab file.
+The Active Directory (AD) integration mode describes the process for keytab management, including:
+- Creating the AD account used by SQL Managed Instance
+- Registering Service Principal Names (SPNs) under that AD account
+- Generating the keytab file
## Background-
-Azure Arc-enabled data services support Active Directory (AD) for Identity and Access Management (IAM). The Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication. Users need to do the following steps to enable Active Directory authentication for Arc-enabled SQL Managed Instance:
+To enable Active Directory authentication for SQL Server on Linux and Linux containers, use a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file). The keytab file is a cryptographic file containing service principal names (SPNs), account names and hostnames. SQL Server uses the keytab file for authenticating itself to the Active Directory (AD) domain and authenticating its clients using Active Directory (AD). Do the following steps to enable Active Directory authentication for Arc-enabled SQL Managed Instance:
- [Deploy data controller](create-data-controller-indirect-cli.md) -- [Deploy a bring your own keytab AD connector](deploy-byok-active-directory-connector.md) or [Deploy an automatic AD connector](deploy-automatic-active-directory-connector.md)-- [Deploy managed instances](deploy-active-directory-sql-managed-instance.md)
+- [Deploy a customer-managed keytab AD connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md)
+- [Deploy SQL managed instances](deploy-active-directory-sql-managed-instance.md)
The following diagram shows how to enable Active Directory authentication for Azure Arc-enabled SQL Managed Instance:
The following diagram shows how to enable Active Directory authentication for Az
## What is an Active Directory (AD) connector?
-In order to enable Active Directory authentication for SQL Managed Instance, the managed instance must be deployed in an environment that allows it to communicate with the Active Directory domain.
-
-To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`, it provides Azure Arc-enabled managed instances running on the same data controller the ability to perform Active Directory authentication.
+In order to enable Active Directory authentication for SQL Managed Instance, the instance must be deployed in an environment that allows it to communicate with the Active Directory domain.
+To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It provides Azure Arc-enabled SQL managed instances running on the same data controller the ability to perform Active Directory authentication.
## Compare AD integration modes
-What is the difference between the two AD integration modes?
-
-To enable Active Directory Authentication for Arc-enabled SQL Managed Instances, you need an Active Directory (AD) connector where you determine the mode of the AD deployment. The two modes are:
--- Bring your own keytab-- Automatic -
-The following sections describe the compare these modes.
-
-### Bring your own keytab mode
-
-In this mode, you provide:
-
-- An Active Directory account-- Service Principal Names (SPNs) under that AD account-- Your own [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file)-
-When you deploy the bring your own keytab AD connector, you need to create the AD account, register the service principal names (SPN), and create the keytab file. You can create the account using [Active Directory utility (`adutil`)](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
+What is the difference between the two Active Directory integration modes?
-For more information, see [deploy a bring your own keytab Active Directory (AD) connector](deploy-automatic-active-directory-connector.md)
+To enable Active Directory authentication for Arc-enabled SQL Managed Instance, you need an Active Directory connector where you specify the Active Directory integration deployment mode. The two Active Directory integration modes are:
-### AD automatic integration mode
+- Customer-managed keytab
+- System-managed keytab
-In automatic mode, you need an automatic Active Directory (AD) connector. You will bring an Organizational Unit (OU) and an AD domain service account has sufficient permissions in the Active Directory.
+The following section compares these modes.
-Furthermore, the system:
+| |Customer-managed keytab|System-managed keytab - Preview|
+|---|---|---|
+|**Use cases**|Small and medium-sized businesses that are familiar with managing Active Directory objects and want flexibility in their automation process |Businesses of all sizes that want a highly automated Active Directory management experience|
+|**User provides**|An Active Directory account and SPNs under that account, and a [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file) for Active Directory authentication |An [Organizational Unit (OU)](../../active-directory-domain-services/create-ou.md) and a domain service account that has [sufficient permissions](deploy-system-managed-keytab-active-directory-connector.md?#prerequisites) on that OU in Active Directory.|
+|**Characteristics**|User managed. Users bring the Active Directory account, which impersonates the identity of the managed instance, and the keytab file. |System managed. The system creates a domain service account for each managed instance and sets SPNs automatically on that account. It also creates and delivers a keytab file to the managed instance. |
+|**Deployment process**| 1. Deploy data controller <br/> 2. Create keytab file <br/>3. Store the keytab information in a Kubernetes secret<br/> 4. Deploy AD connector, deploy SQL managed instance<br/><br/>For more information, see [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md) | 1. Deploy data controller, deploy AD connector<br/>2. Deploy SQL managed instance<br/><br/>For more information, see [Deploy a system-managed keytab Active Directory connector](deploy-system-managed-keytab-active-directory-connector.md) |
+|**Manageability**|You can create the keytab file by following the instructions from [Active Directory utility (`adutil`)](/sql/linux/sql-server-linux-ad-auth-adutil-introduction). Manual keytab rotation. |Managed keytab rotation.|
+|**Limitations**|We do not recommend sharing keytab files among services. Each service should have a specific keytab file. As the number of keytab files increases, the level of effort and complexity increases. |Managed keytab generation and rotation. The service account requires sufficient permissions in Active Directory to manage the credentials. |
-- Creates a domain service AD account for each managed instance.
-- Sets SPNs automatically on that AD account.
-- Creates and delivers a keytab file to the managed instance.
+For either mode, you need a specific Active Directory account, keytab, and Kubernetes secret for each SQL managed instance.
-The mode of the AD connector is determined by the value of `spec.activeDirectory.serviceAccountProvisioning`. Set to either `manual` for bring your own keytab, or `automatic`. Once this parameter is set to automatic, the following parameters become mandatory:
-- `spec.activeDirectory.ouDistinguishedName`
-- `spec.activeDirectory.domainServiceAccountSecret`
+## Enable Active Directory authentication in Arc-enabled SQL Managed Instance
-When you deploy SQL Managed Instance with the intention to enable Active Directory Authentication, the deployment needs to reference the Active Directory Connector instance to use. Referencing the Active Directory Connector in managed instance specification automatically sets up the needed environment in the SQL Managed Instance container for the managed instance to authenticate with Active Directory.
+When you deploy SQL Managed Instance with the intention to enable Active Directory authentication, the deployment needs to reference an Active Directory connector instance to use. Referencing the Active Directory connector in managed instance specification automatically sets up the needed environment in the SQL Managed Instance container for the managed instance to authenticate with Active Directory.
## Next steps
-* [Deploy and bring your own keytab Active Directory (AD) connector](deploy-byok-active-directory-connector.md)
-* [Deploy an automatic Active Directory (AD) connector](deploy-automatic-active-directory-connector.md)
-* [Deploy Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md)
+* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md)
+* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md)
+* [Deploy an Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
+* [Connect to Azure Arc-enabled SQL Managed Instance using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
azure-arc Active Directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-prerequisites.md
+
+ Title: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites
+description: Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites
++++++ Last updated : 04/21/2022+++
+# Azure Arc-enabled SQL Managed Instance in Active Directory authentication with system-managed keytab - prerequisites
+
+This document explains how to prepare to deploy Azure Arc-enabled data services with Active Directory (AD) authentication. Specifically, the article describes the Active Directory objects you need to configure before the deployment of Kubernetes resources.
+
+[The introduction](active-directory-introduction.md#compare-ad-integration-modes) describes two different integration modes:
+- *System-managed keytab* mode allows the system to create and manage the AD accounts for each SQL Managed Instance.
+- *Customer-managed keytab* mode allows you to create and manage the AD accounts for each SQL Managed Instance.
+
+The requirements and recommendations are different for the two integration modes.
++
+|Active Directory Object|Customer-managed keytab |System-managed keytab |
+||||
+|Organizational unit (OU) |Recommended|Required |
+|Active Directory domain service account (DSA) for Active Directory Connector |Not required|Required |
+|Active Directory account for SQL Managed Instance |You create an AD account for each managed instance|The system creates an AD account for each managed instance|
+
+### DSA account - system-managed keytab mode
+
+To be able to create all the required objects in Active Directory automatically, the AD connector needs a domain service account (DSA). The DSA is an Active Directory account that has specific permissions to create, manage, and delete user accounts inside the provided organizational unit (OU). This article explains how to configure the permissions of this account. The examples in this article use `arcdsa` as the DSA account name.
+
+### Auto generated Active Directory objects
+
+An Arc-enabled SQL Managed Instance deployment automatically generates accounts in system-managed keytab mode. Each of the accounts represents a SQL managed instance and is managed by the system throughout the lifetime of the instance. These accounts own the Service Principal Names (SPNs) required by each instance.
+
+The steps below assume you already have an Active Directory domain controller. If you don't have a domain controller, the following [guide](https://social.technet.microsoft.com/wiki/contents/articles/37528.create-and-configure-active-directory-domain-controller-in-azure-windows-server.aspx) includes steps that can be helpful.
+
+## Create Active Directory objects
+
+Do the following things before you deploy an Arc-enabled SQL Managed Instance with AD authentication:
+
+1. Create an organizational unit (OU) for all Arc-enabled SQL Managed Instance related AD objects. Alternatively, you can choose an existing OU upon deployment.
+1. Create an AD account for the AD Connector, or use an existing account, and provide this account the right permissions on the OU created in the previous step.
+
+### Create an OU
+
+System-managed keytab mode requires a designated OU. For customer-managed keytab mode an OU is recommended.
+
+On the domain controller, open **Active Directory Users and Computers**. On the left panel, right-click the directory under which you want to create your OU and select **New**\> **Organizational Unit**, then follow the prompts from the wizard to create the OU. Alternatively, you can create an OU with PowerShell:
+
+```powershell
+New-ADOrganizationalUnit -Name "<name>" -Path "<Distinguished name of the directory you wish to create the OU in>"
+```
+
+The examples in this article use `arcou` for the OU name.
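+
+For example, a minimal sketch of that command with the example OU name, assuming a hypothetical parent path of `DC=contoso,DC=local` (adjust the path for your own domain):
+
+```powershell
+# Create the example OU; the parent path is an assumption for the contoso.local example domain.
+New-ADOrganizationalUnit -Name "arcou" -Path "DC=contoso,DC=local"
+```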
+
+![Screenshot of Active Directory Users and computers menu.](media/active-directory-deployment/start-new-organizational-unit.png)
+
+![Screenshot of new object - organizational unit dialog.](media/active-directory-deployment/new-organizational-unit.png)
+
+### Create the domain service account (DSA)
+
+For system-managed keytab mode, you need an AD domain service account.
+
+Create the Active Directory user that you will use as the domain service account. This account requires specific permissions. Make sure that you have an existing Active Directory account or create a new account, which Arc-enabled SQL Managed Instance can use to set up the necessary objects.
+
+To create a new user in AD, you can right-click the domain or the OU and select **New** > **User**:
+
+![Screenshot of user properties.](media/active-directory-deployment/start-ad-new-user.png)
+
+This account will be referred to as *arcdsa* in this article.
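+
+If you prefer scripting to the GUI, the following PowerShell sketch creates a comparable account with the ActiveDirectory module. The OU path and the prompt-supplied password are examples, not requirements:
+
+```powershell
+# Prompt for a password that satisfies your domain's password policy.
+$password = Read-Host -AsSecureString -Prompt "Password for arcdsa"
+
+# Create the DSA account in the example OU and enable it.
+New-ADUser -Name "arcdsa" `
+    -SamAccountName "arcdsa" `
+    -Path "OU=arcou,DC=contoso,DC=local" `
+    -AccountPassword $password `
+    -Enabled $true
+```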
+
+### Set permissions for the DSA
+
+For system-managed keytab mode, you need to set the permissions for the DSA.
+
+Whether you have created a new account for the DSA or are using an existing Active Directory user account, there are certain permissions the account needs to have. The DSA needs to be able to create users, groups, and computer accounts in the OU. In the following steps, the Arc-enabled SQL Managed Instance domain service account name is `arcdsa`.
+
+> [!IMPORTANT]
+> You can choose any name for the DSA, but we do not recommend altering the account name once AD Connector is deployed.
+
+1. On the domain controller, open **Active Directory Users and Computers**, click on **View**, select **Advanced Features**
+
+1. In the left panel, navigate to your domain, and then to the OU that you'll use (`arcou` in these examples)
+
+1. Right-click the OU, and select **Properties**.
+
+> [!NOTE]
+> If you don't see the **Security** tab, select **View** and make sure **Advanced Features** is selected.
+
+1. Go to the **Security** tab.
+
+ ![AD object properties](./media/active-directory-deployment/start-ad-new-user.png)
+
+1. Select **Add...** and add the **arcdsa** user.
+
+ ![Screenshot of add user dialog.](./media/active-directory-deployment/add-user.png)
+
+1. Select the **arcdsa** user and clear all permissions, then select **Advanced**.
+
+1. Select **Add**
+
+ - Select **Select a Principal**, insert **arcdsa**, and select **Ok**.
+
+ - Set **Type** to **Allow**.
+
+ - Set **Applies To** to **This Object and all descendant objects**.
+
+ ![Screenshot of permission entries.](./media/active-directory-deployment/set-permissions.png)
+
+ - Scroll down to the bottom, and select **Clear all**.
+
+ - Scroll back to the top, and select:
+ - **Read all properties**
+ - **Write all properties**
+ - **Create User objects**
+ - **Delete User objects**
+ - **Reset Password for Descendant User objects**
+
+ - Select **OK**.
+
+1. Select **Add**.
+
+ - Select **Select a Principal**, insert **arcdsa**, and select **Ok**.
+
+ - Set **Type** to **Allow**.
+
+ - Set **Applies To** to **Descendant User objects**.
+
+ - Scroll down to the bottom, and select **Clear all**.
+
+ - Scroll back to the top, and select **Reset password**.
+
+ - Select **OK**.
+
+- Select **OK** twice more to close open dialog boxes.
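+
+To review the resulting access control entries without stepping back through the dialog boxes, you can dump the OU's ACL from a domain controller. `dsacls` is a built-in Windows tool; the distinguished name below is the example OU:
+
+```console
+dsacls "OU=arcou,DC=contoso,DC=local"
+```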
+
+## Next steps
+
+* [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md)
+* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md)
+* [Deploy an Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md)
+* [Connect to Azure Arc-enabled SQL Managed Instance using Active Directory authentication](connect-active-directory-sql-managed-instance.md)
azure-arc Connect Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-active-directory-sql-managed-instance.md
This article describes how to connect to SQL Managed Instance endpoint using Active Directory (AD) authentication. Before you proceed, make sure you have an AD-integrated Azure Arc-enabled SQL Managed Instance deployed already.
-See [Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance (Bring Your Own Keytab)](deploy-active-directory-sql-managed-instance.md) to deploy a Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication enabled.
+See [Tutorial – Deploy AD-integrated SQL Managed Instance](deploy-active-directory-sql-managed-instance.md) to deploy Azure Arc-enabled SQL Managed Instance with Active Directory authentication enabled.
+
+> [!NOTE]
+> Ensure that a DNS record for the SQL endpoint is created in Active Directory DNS servers before continuing on this page.
## Create Active Directory logins in SQL Managed Instance
-Once SQL Managed Instance is successfully deployed, you will need to provision AD logins in SQL Server.
-In order to do this, first connect to the SQL Managed Instance using the SQL login with administrative privileges and run the following TSQL:
+Once SQL Managed Instance is successfully deployed, you will need to provision Active Directory logins in SQL Server.
-```console
+To provision logins, first connect to the SQL Managed Instance using the SQL login with administrative privileges and run the following T-SQL:
+
+```sql
CREATE LOGIN [<NetBIOS domain name>\<AD account name>] FROM WINDOWS;
GO
```
-For an AD domain `contoso.local` with NetBIOS domain name as `CONTOSO`, if you want to create a login for AD account `admin`, the command should look like the following:
+The following example creates a login for an Active Directory account named `admin`, in the domain named `contoso.local`, with NetBIOS domain name as `CONTOSO`:
-```console
+```sql
CREATE LOGIN [CONTOSO\admin] FROM WINDOWS;
GO
```
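+
+You can create logins for Active Directory groups in the same way. For example, the following sketch assumes a hypothetical AD group named `sqlusers` in the same domain:
+
+```sql
+CREATE LOGIN [CONTOSO\sqlusers] FROM WINDOWS;
+GO
+```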
A domain-aware Linux-based machine is one where you are able to use Kerberos aut
To connect from a Linux or macOS client, authenticate to Active Directory by using the `kinit` command, and then use the `sqlcmd` tool to connect to the SQL Managed Instance.
-```bash
+```console
kinit <username>@<REALM>
sqlcmd -S <Endpoint DNS name>,<Endpoint port number> -E
```
-For connecting using the CONTOSO\admin AD account to the SQL Managed Instance with endpoint sqlmi.contoso.local at port 31433, the commands should look like the following. The -E argument is used to perform Integrated Authentication.
+For example, to connect with the CONTOSO\admin account to the SQL managed instance with endpoint `sqlmi.contoso.local` at port `31433`, use the following command:
-```bash
+```console
kinit admin@CONTOSO.LOCAL
sqlcmd -S sqlmi.contoso.local,31433 -E
```
-## Connect to SQL MI instance from Windows
+In the example, `-E` specifies Active Directory integrated authentication.
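+
+After connecting, you can optionally confirm that the session authenticated with Kerberos. This is a generic SQL Server check rather than anything specific to Azure Arc:
+
+```sql
+SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;
+```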
-From Windows, when you run the following command, the AD identity you are logged in to Windows with should be picked up automatically for connecting to SQL Managed Instance.
+## Connect to SQL Managed Instance from Windows
-```bash
+To log in to SQL Managed Instance with your current Windows Active Directory login, run the following command:
+
+```console
sqlcmd -S <DNS name for master instance>,31433 -E
```
-## Connect to SQL MI instance from SSMS
+## Connect to SQL Managed Instance from SSMS
![Connect with SSMS](media/active-directory-deployment/connect-with-ssms.png)
-## Connect to SQL MI instance from ADS
+## Connect to SQL Managed Instance from ADS
![Connect with ADS](media/active-directory-deployment/connect-with-ads.png)
azure-arc Create Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-hyperscale-server-group.md
If you prefer to try out things without provisioning a full environment yourself
Implement this step before moving to the next step. To deploy PostgreSQL Hyperscale server group onto Red Hat OpenShift in a project other than the default, you need to execute the following commands against your cluster to update the security constraints. This command grants the necessary privileges to the service accounts that will run your PostgreSQL Hyperscale server group. The security context constraint (SCC) arc-data-scc is the one you added when you deployed the Azure Arc data controller.

```Console
-oc adm policy add-scc-to-user arc-data-scc -z <server-group-name> -n <namespace name>
+oc adm policy add-scc-to-user arc-data-scc -z <server-group-name> -n <namespace-name>
```

**Server-group-name is the name of the server group you will create during the next step.**
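+
+For example, with a hypothetical server group named `pg01` to be created in a namespace named `arc`, the command would look like this:
+
+```console
+oc adm policy add-scc-to-user arc-data-scc -z pg01 -n arc
+```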
azure-arc Deploy Active Directory Connector Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md
+
+ Title: Tutorial – Deploy Active Directory connector using Azure CLI
+description: Tutorial to deploy an Active Directory connector using Azure CLI
++++++ Last updated : 05/05/2022++++
+# Tutorial – Deploy Active Directory connector using Azure CLI
+
+This article explains how to deploy an Active Directory (AD) connector using Azure CLI. The AD connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+
+## Prerequisites
+
+### Install tools
+
+Before you can proceed with the tasks in this article, you need to install the following tools:
+
+- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)
+
+For details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md).
++
+## Deploy Active Directory connector in customer-managed keytab mode
+
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+
+#### Create an AD connector instance
+
+> [!NOTE]
+> Make sure the password of the provided domain service AD account doesn't contain the `!` special character.
+>
+
+To view the available options for the AD connector create command, use the following command:
+
+```azurecli
+az arcdata ad-connector create --help
+```
+
+To create an AD connector instance, use `az arcdata ad-connector create`. See the following examples for different connectivity modes:
++
+##### Indirectly connected mode
+
+```azurecli
+az arcdata ad-connector create
+--name < name >
+--k8s-namespace < Kubernetes namespace >
+--realm < AD Domain name >
+--nameserver-addresses < DNS server IP addresses >
+--account-provisioning < account provisioning mode : manual or auto >
+--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector create
+--name arcadc
+--k8s-namespace arc
+--realm CONTOSO.LOCAL
+--nameserver-addresses 10.10.10.11
+--account-provisioning manual
+--prefer-k8s-dns false
+--use-k8s
+```
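+
+For a deployment in indirectly connected mode, one way to confirm that the connector resource was created is to query it in Kubernetes. `adc` is the short resource name used elsewhere in these articles, and `arc` is the example namespace:
+
+```console
+kubectl get adc -n arc
+```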
+
+##### Directly connected mode
+
+```azurecli
+az arcdata ad-connector create
+--name < name >
+--dns-domain-name < The DNS name of AD domain >
+--realm < AD Domain name >
+--nameserver-addresses < DNS server IP addresses >
+--account-provisioning < account provisioning mode : manual or auto >
+--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup >
+--data-controller-name < Arc Data Controller Name >
+--resource-group < resource-group >
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector create
+--name arcadc
+--realm CONTOSO.LOCAL
+--dns-domain-name contoso.local
+--nameserver-addresses 10.10.10.11
+--account-provisioning manual
+--prefer-k8s-dns false
+--data-controller-name arcdc
+--resource-group arc-rg
+```
+
+### Update an AD connector instance
+
+To view the available options for the AD connector update command, use the following command:
+
+```azurecli
+az arcdata ad-connector update --help
+```
+
+To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes:
+
+#### Indirectly connected mode
+
+```azurecli
+az arcdata ad-connector update
+--name < name >
+--k8s-namespace < Kubernetes namespace >
+--nameserver-addresses < DNS server IP addresses >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector update
+--name arcadc
+--k8s-namespace arc
+--nameserver-addresses 10.10.10.11
+--use-k8s
+```
+
+#### Directly connected mode
+
+```azurecli
+az arcdata ad-connector update
+--name < name >
+--nameserver-addresses < DNS server IP addresses >
+--data-controller-name < Arc Data Controller Name >
+--resource-group < resource-group >
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector update
+--name arcadc
+--nameserver-addresses 10.10.10.11
+--data-controller-name arcdc
+--resource-group arc-rg
+```
++
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+To create an AD connector instance, use `az arcdata ad-connector create`. See the following examples for different connectivity modes:
++
+#### Indirectly connected mode
+
+```azurecli
+az arcdata ad-connector create
+--name < name >
+--k8s-namespace < Kubernetes namespace >
+--dns-domain-name < The DNS name of AD domain >
+--realm < AD Domain name >
+--nameserver-addresses < DNS server IP addresses >
+--account-provisioning < account provisioning mode >
+--ou-distinguished-name < AD Organizational Unit distinguished name >
+--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector create
+--name arcadc
+--k8s-namespace arc
+--realm CONTOSO.LOCAL
+--netbios-domain-name CONTOSO
+--dns-domain-name contoso.local
+--nameserver-addresses 10.10.10.11
+--account-provisioning automatic
+--ou-distinguished-name "OU=arcou,DC=contoso,DC=local"
+--prefer-k8s-dns false
+--use-k8s
+```
+
+#### Directly connected mode
+
+```azurecli
+az arcdata ad-connector create
+--name < name >
+--dns-domain-name < The DNS name of AD domain >
+--realm < AD Domain name >
+--netbios-domain-name < AD domain NETBOIS name >
+--nameserver-addresses < DNS server IP addresses >
+--account-provisioning < account provisioning mode >
+--ou-distinguished-name < AD domain organizational distinguished name >
+--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup >
+--data-controller-name < Arc Data Controller Name >
+--resource-group < resource-group >
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector create
+--name arcadc
+--realm CONTOSO.LOCAL
+--netbios-domain-name CONTOSO
+--dns-domain-name contoso.local
+--nameserver-addresses 10.10.10.11
+--account-provisioning automatic
+--ou-distinguished-name "OU=arcou,DC=contoso,DC=local"
+--prefer-k8s-dns false
+--data-controller-name arcdc
+--resource-group arc-rg
+```
+
+### Update an AD connector instance
+
+To view the available options for the AD connector update command, use the following command:
+
+```azurecli
+az arcdata ad-connector update --help
+```
+To update an AD connector instance, use `az arcdata ad-connector update`. See the following examples for different connectivity modes:
+
+#### Indirectly connected mode
+
+```azurecli
+az arcdata ad-connector update
+--name < name >
+--k8s-namespace < Kubernetes namespace >
+--nameserver-addresses < DNS server IP addresses >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector update
+--name arcadc
+--k8s-namespace arc
+--nameserver-addresses 10.10.10.11
+--use-k8s
+```
+
+#### Directly connected mode
+
+```azurecli
+az arcdata ad-connector update
+--name < name >
+--nameserver-addresses < DNS server IP addresses >
+--data-controller-name < Arc Data Controller Name>
+--resource-group <resource-group>
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector update
+--name arcadc
+--nameserver-addresses 10.10.10.11
+--data-controller-name arcdc
+--resource-group arc-rg
+```
+++
+## Delete an AD connector instance
+
+To delete an AD connector instance, use `az arcdata ad-connector delete`. See the following examples for both connectivity modes:
+
+### [Indirectly-Connected mode](#tab/indirectly-connected-mode)
+
+```azurecli
+az arcdata ad-connector delete --name < AD Connector name > --k8s-namespace < namespace > --use-k8s
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector delete --name arcadc --k8s-namespace arc --use-k8s
+```
+
+### [Directly-Connected mode](#tab/directly-connected-mode)
+```azurecli
+az arcdata ad-connector delete --name < AD Connector name > --data-controller-name < data controller name > --resource-group < resource group >
+```
+
+Example:
+
+```azurecli
+az arcdata ad-connector delete --name arcadc --data-controller-name arcdc --resource-group arc-rg
+```
+++
+## Next steps
+* [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)
+* [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)
+* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+
azure-arc Deploy Active Directory Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md
+
+ Title: Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
+description: Explains how to deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
++++++ Last updated : 04/28/2022+++
+# Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance using Azure CLI
+
+This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication using Azure CLI.
+
+See these articles for specific instructions:
+
+- [Tutorial – Deploy AD connector in customer-managed keytab mode](deploy-customer-managed-keytab-active-directory-connector.md)
+- [Tutorial – Deploy AD connector in system-managed keytab mode](deploy-system-managed-keytab-active-directory-connector.md)
+
+### Prerequisites
+
+Before you proceed, install the following tools:
+
+- The [Azure CLI (az)](/cli/azure/install-azure-cli)
+- The [`arcdata` extension for Azure CLI](install-arcdata-extension.md)
+
+For further details about how to set up the OU and AD account, see [Deploy Azure Arc-enabled data services in Active Directory authentication - prerequisites](active-directory-prerequisites.md).
++
+## Deploy and update Active Directory integrated Azure Arc-enabled SQL Managed Instance
+
+### [Customer-managed keytab mode](#tab/Customer-managed-keytab-mode)
++
+#### Create an Azure Arc-enabled SQL Managed Instance
+
+To view the available options for the create command for Azure Arc-enabled SQL Managed Instance, use the following command:
+
+```azurecli
+az sql mi-arc create --help
+```
+
+To create a SQL Managed Instance, use `az sql mi-arc create`. See the following examples for different connectivity modes:
+
+#### Create - indirectly connected mode
+
+```azurecli
+az sql mi-arc create
+--name < SQL MI name >
+--k8s-namespace < namespace >
+--ad-connector-name < your AD connector name >
+--keytab-secret < SQL MI keytab secret name >
+--ad-account-name < SQL MI AD user account >
+--primary-dns-name < SQL MI DNS endpoint >
+--primary-port-number < SQL MI port number >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az sql mi-arc create
+--name contososqlmi
+--k8s-namespace arc
+--ad-connector-name adarc
+--keytab-secret arcuser-keytab-secret
+--ad-account-name arcuser
+--primary-dns-name arcsqlmi.contoso.local
+--primary-port-number 31433
+--use-k8s
+```
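+
+In indirectly connected mode, you can check on the instance afterward by querying the custom resource in the cluster. `sqlmi` is the short resource name commonly used for `SqlManagedInstance`, and `arc` is the example namespace:
+
+```console
+kubectl get sqlmi -n arc
+```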
+
+#### Create - directly connected mode
+
+```azurecli
+az sql mi-arc create
+--name < SQL MI name >
+--ad-connector-name < your AD connector name >
+--keytab-secret < SQL MI keytab secret name >
+--ad-account-name < SQL MI AD user account >
+--primary-dns-name < SQL MI DNS endpoint >
+--primary-port-number < SQL MI port number >
+--location < your cloud region >
+--custom-location < your custom location >
+--resource-group < resource-group >
+```
+
+Example:
+
+```azurecli
+az sql mi-arc create
+--name contososqlmi
+--ad-connector-name adarc
+--keytab-secret arcuser-keytab-secret
+--ad-account-name arcuser
+--primary-dns-name arcsqlmi.contoso.local
+--primary-port-number 31433
+--location westeurope
+--custom-location private-location
+--resource-group arc-rg
+```
+
+#### Update an Azure Arc-enabled SQL Managed Instance
+
+To update a SQL Managed Instance, use `az sql mi-arc update`. See the following examples for different connectivity modes:
+
+#### Update - indirectly connected mode
+
+```azurecli
+az sql mi-arc update
+--name < SQL MI name >
+--k8s-namespace < namespace >
+--keytab-secret < SQL MI keytab secret name >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az sql mi-arc update
+--name contososqlmi
+--k8s-namespace arc
+--keytab-secret arcuser-keytab-secret
+--use-k8s
+```
+
+#### Update - directly connected mode
+
+> [!NOTE]
+> The **resource group** is a mandatory parameter, but it can't be changed.
+
+```azurecli
+az sql mi-arc update
+--name < SQL MI name >
+--keytab-secret < SQL MI keytab secret name >
+--resource-group < resource-group >
+```
+
+Example:
+
+```azurecli
+az sql mi-arc update
+--name contososqlmi
+--keytab-secret arcuser-keytab-secret
+--resource-group arc-rg
+```
+
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
++
+#### Create an Azure Arc-enabled SQL Managed Instance
+
+To view the available options for the create command for Azure Arc-enabled SQL Managed Instance, use the following command:
+
+```azurecli
+az sql mi-arc create --help
+```
+
+To create a SQL Managed Instance, use `az sql mi-arc create`. See the following examples for different connectivity modes:
++
+##### Create - indirectly connected mode
+
+```azurecli
+az sql mi-arc create
+--name < SQL MI name >
+--k8s-namespace < namespace >
+--ad-connector-name < your AD connector name >
+--ad-account-name < SQL MI AD user account >
+--primary-dns-name < SQL MI DNS endpoint >
+--primary-port-number < SQL MI port number >
+--use-k8s
+```
+
+Example:
+
+```azurecli
+az sql mi-arc create
+--name contososqlmi
+--k8s-namespace arc
+--ad-connector-name adarc
+--ad-account-name arcuser
+--primary-dns-name arcsqlmi.contoso.local
+--primary-port-number 31433
+--use-k8s
+```
+
+##### Create - directly connected mode
+
+```azurecli
+az sql mi-arc create
+--name < SQL MI name >
+--ad-connector-name < your AD connector name >
+--ad-account-name < SQL MI AD user account >
+--primary-dns-name < SQL MI DNS endpoint >
+--primary-port-number < SQL MI port number >
+--location < your cloud region >
+--custom-location < your custom location >
+--resource-group <resource-group>
+```
+
+Example:
+
+```azurecli
+az sql mi-arc create
+--name contososqlmi
+--ad-connector-name adarc
+--ad-account-name arcuser
+--primary-dns-name arcsqlmi.contoso.local
+--primary-port-number 31433
+--location westeurope
+--custom-location private-location
+--resource-group arc-rg
+```
+++++
+## Delete an Azure Arc-enabled SQL Managed Instance
+
+To delete a SQL Managed Instance, use `az sql mi-arc delete`. See the following examples for both connectivity modes:
++
+### [Indirectly-Connected mode](#tab/indirectly-connected-mode)
+
+```azurecli
+az sql mi-arc delete --name < SQL MI name > --k8s-namespace < namespace > --use-k8s
+```
+
+Example:
+
+```azurecli
+az sql mi-arc delete --name contososqlmi --k8s-namespace arc --use-k8s
+```
+
+### [Directly-Connected mode](#tab/directly-connected-mode)
+
+```azurecli
+az sql mi-arc delete --name < SQL MI name > --resource-group < resource group >
+```
+
+Example:
+
+```azurecli
+az sql mi-arc delete --name contososqlmi --resource-group arc-rg
+```
++++
+## Next steps
+* [Deploy Arc-enabled SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+* [Connect to Active Directory integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
Title: Tutorial – Deploy AD-integrated Azure Arc-enabled SQL Managed Instance
-description: Tutorial to deploy AD-integrated Azure Arc-enabled SQL Managed Instance
+ Title: Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
+description: Explains how to deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
Last updated 04/05/2022
-# Tutorial ΓÇô deploy AD-integrated Azure Arc-enabled SQL Managed Instance
+# Deploy Active Directory integrated Azure Arc-enabled SQL Managed Instance
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication.
-Before you proceed, complete the steps explained in [Deploy bring your own keytab (BYOK) Active Directory (AD) connector](deploy-byok-active-directory-connector.md) or [Tutorial ΓÇô deploy an automatic AD connector](deploy-automatic-active-directory-connector.md)
+Before you proceed, complete the steps explained in [Deploy a customer-managed keytab Active Directory (AD) connector](deploy-customer-managed-keytab-active-directory-connector.md) or [Deploy a system-managed keytab AD connector](deploy-system-managed-keytab-active-directory-connector.md).
## Prerequisites
Before you proceed, verify that you have:
* An Active Directory (AD) Domain
* An instance of data controller deployed
-* An instance of Active Directory Connector deployed
+* An instance of Active Directory connector deployed
+
+### Specific requirements for different modes
+
+#### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+
+The following instructions expect that you can bring an Active Directory domain and provide the following for the customer-managed keytab deployment:
+
+* An Active Directory user account for SQL
+* Service Principal Names (SPNs) under the user account
+* DNS record for the endpoint DNS name for SQL
+
+#### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+
+The following instructions expect that you can bring an Active Directory domain and provide the following for the system-managed keytab deployment:
+
+* A unique name of an Active Directory user account for SQL
+* DNS record for the endpoint DNS name for SQL
+
+
+
+## Before you deploy SQL Managed Instance
+
+1. Identify a DNS name for the SQL endpoint.
+
+ Choose a unique DNS name for the SQL endpoint that clients will connect to from outside the Kubernetes cluster.
+
+ This DNS name should be in the Active Directory domain or its descendant domains.
+
+ The examples in these instructions use `sqlmi.contoso.local` for the DNS name.
+
+2. Identify the port number for the SQL endpoint.
+
+ You provide a port number for the SQL endpoint.
+
+ This port number must be in the acceptable range of port numbers for Kubernetes cluster.
+
+ The examples in these instructions use `31433` for the port number.
+
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+
+3. Create an Active Directory account for the SQL managed instance.
+
+ Choose a name for the Active Directory account that will represent your SQL. This name should be unique in the Active Directory domain.
+
+ Open `Active Directory Users and Computers` tool on the Domain Controller and create an account that will represent this SQL Managed Instance.
+
+ Provide a complex password to this account that is acceptable to the Active Directory domain password policy. This password will be needed in some of the next steps.
+
+ The account does not need any special permissions. Ensure that the account is enabled.
+
+ The examples in these instructions use `sqlmi-account` for the AD account name.
+
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+
+3. Choose an Active Directory account name for SQL.
+
+ Choose a name for the Active Directory account that will represent your SQL. This name should be unique in the Active Directory domain and the account must NOT pre-exist in the domain. The system will generate this account in the domain.
+
+ The examples in these instructions use `sqlmi-account` for the AD account name.
+++
+4. Create a DNS record for the SQL endpoint in the Active Directory DNS servers.
+
+ In one of the Active Directory DNS servers, create an A record (forward lookup record) for the DNS name chosen in step 1. This DNS record should point to the IP address that the SQL endpoint will listen on for connections from outside the Kubernetes cluster.
+
+ You do not need to create a PTR record (reverse lookup record) in association with the A record.
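+
+   If your Active Directory DNS runs on Windows Server, a minimal PowerShell sketch for this record could look like the following. The zone, host name, and IP address are examples for `sqlmi.contoso.local`; substitute the address your SQL endpoint will actually use:
+
+   ```powershell
+   # Create a forward lookup (A) record for the SQL endpoint in the example zone.
+   # 10.10.10.50 is a placeholder for the endpoint's external IP address.
+   Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "sqlmi" -IPv4Address "10.10.10.50"
+   ```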
+
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+
+5. Create Service Principal Names (SPNs)
+
+ In order for SQL to be able to accept AD authentication against the SQL endpoint DNS name, we need to register two SPNs under the account generated in the previous step. These two SPNs should be of the following format:
+
+ ```output
+ MSSQLSvc/<DNS name>
+ MSSQLSvc/<DNS name>:<port>
+ ```
+
+ To register the SPNs, run the following commands on one of the domain controllers.
+
+ ```console
+ setspn -S MSSQLSvc/<DNS name> <account>
+ setspn -S MSSQLSvc/<DNS name>:<port> <account>
+ ```
+
+ With the chosen example DNS name, port number and the account name in this document, the commands should look like the following:
+
+ ```console
+ setspn -S MSSQLSvc/sqlmi.contoso.local sqlmi-account
+ setspn -S MSSQLSvc/sqlmi.contoso.local:31433 sqlmi-account
+ ```
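+
+   You can verify the registration with `setspn -L`, which lists the SPNs currently set on an account:
+
+   ```console
+   setspn -L sqlmi-account
+   ```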
+
+6. Generate a keytab file containing entries for the account and SPNs
+
+ For SQL to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file using a Kubernetes secret.
+
+ The keytab file contains encrypted entries for the Active Directory account generated for the managed instance and the SPNs.
+
+ SQL Server will use this file as its credential against Active Directory.
+
+ There are multiple tools available to generate a keytab file.
+
+ - `adutil`: This tool is available for Linux. See [Introduction to `adutil` - Active Directory utility](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
+ - `ktutil`: This tool is available on Linux
+ - `ktpass`: This tool is available on Windows
+
+ To generate the keytab file specifically for the managed instance, use a bash shell script we have published. It wraps `ktutil` and `adutil` together. It is for use on Linux.
+
+ A bash script that works on a Linux-based OS can be found here: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh).
+ A PowerShell script that works on a Windows Server-based OS can be found here: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1).
+
+ This script accepts several parameters and will output a keytab file and a yaml specification file for the Kubernetes secret containing the keytab.
+
+ Use the following command to run the script after replacing the parameter values with the ones for your managed instance deployment.
+
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <AD domain in uppercase> --account <AD account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
+ ```
+
+ The input parameters expect the following values:
+ * `--realm` expects the uppercase of the AD domain, such as CONTOSO.LOCAL
+ * `--account` expects the AD account under which the SPNs are registered, such as sqlmi-account
+ * `--port` expects the SQL endpoint port number, such as 31433
+ * `--dns-name` expects the DNS name for the SQL endpoint
+ * `--keytab-file` expects the path to the keytab file
+ * `--secret-name` expects the name of the keytab secret to generate a specification for
+ * `--secret-namespace` expects the Kubernetes namespace containing the keytab secret
+
+ Choose a name for the Kubernetes secret hosting the keytab. The namespace should be the same one that SQL Managed Instance will be deployed in.
+
+ The following command creates a keytab. It uses values that this article describes:
+
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
+ ```
+
+ To verify that the keytab is correct, you may run the following command:
+
+ ```console
+ klist -kte <keytab file>
+ ```
+
+## Deploy Kubernetes secret for the keytab
+
+Use the Kubernetes secret specification file generated in the previous step to deploy the secret.
+The specification file should look like the following:
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ name: <secret name>
+ namespace: <secret namespace>
+data:
+ keytab: <keytab content in base64>
+```
+
+Deploy the Kubernetes secret with `kubectl apply -f <file>`. For example:
+
+```console
+kubectl apply -f sqlmi-keytab-secret.yaml
+```
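+
+Alternatively, kubectl can generate the secret directly from the keytab file instead of you authoring the YAML by hand. The file, secret, and namespace names below are the examples used earlier in this article:
+
+```console
+kubectl create secret generic sqlmi-keytab-secret --from-file=keytab=sqlmi.keytab --namespace sqlmi-ns
+```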
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+
+These steps do not apply to the system-managed keytab mode.
++

## Azure Arc-enabled SQL Managed Instance specification for Active Directory Authentication
-To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directory Authentication, the deployment specification needs to reference the Active Directory Connector instance it wants to use. Referencing the Active Directory Connector in managed instance specification will automatically set up the managed instance to perform Active Directory authentication.
+To deploy an Azure Arc-enabled SQL Managed Instance for Azure Arc Active Directory authentication, the deployment specification needs to reference the Active Directory connector instance it wants to use. Referencing the Active Directory connector in the SQL Managed Instance specification automatically sets up SQL to perform Active Directory authentication.
-To support Active Directory authentication on managed instance, the deployment specification uses the following fields:
+To support Active Directory authentication on SQL, the deployment specification uses the following fields:
- **Required** (For AD authentication)
  - `spec.security.activeDirectory.connector.name`
- Name of the pre-existing Active Directory Connector custom resource to join for AD authentication. When provided, system will assume that AD authentication is desired.
+ Name of the pre-existing Active Directory connector custom resource to join for AD authentication. When provided, system will assume that AD authentication is desired.
+
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
+ - `spec.security.activeDirectory.accountName`
- Name of the Active Directory (AD) account that was automatically generated for this instance.
- - `spec.security.activeDirectory.keytabSecret`
- Name of the Kubernetes secret hosting the pre-created keytab file by users. This secret must be in the same namespace as the managed instance. This parameter is only required for the AD deployment in bring your own keytab AD integration mode.
+ Name of the Active Directory account for this managed instance.
+ - `spec.security.activeDirectory.keytabSecret`
+ Name of the Kubernetes secret hosting the pre-created keytab file by users. This secret must be in the same namespace as the managed instance. This parameter is only required for the AD deployment in customer-managed keytab mode.
+
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+
+ - `spec.security.activeDirectory.accountName`
+ Name of the Active Directory (AD) account for this SQL Managed Instance. The system automatically generates this account for the instance, and it must not exist in the domain before SQL Managed Instance is deployed.
+++

- `spec.services.primary.dnsName`
- DNS name for the primary endpoint, this is the primary for the managed instance endpoint
+ You provide a DNS name for the primary SQL endpoint.
- `spec.services.primary.port`
- Port number for the primary endpoint, this is port number for the managed instance endpoint
+ You provide a port number for the primary SQL endpoint.
- **Optional**
  - `spec.security.activeDirectory.connector.namespace`
- Kubernetes namespace of the pre-existing Active Directory Connector instance to join for AD authentication. When not provided, system will assume the same namespace as the managed instance.
+ Kubernetes namespace of the pre-existing Active Directory connector to join for AD authentication. When not provided, system will assume the same namespace as SQL.
### Prepare deployment specification for SQL Managed Instance for Azure Arc
-Prepare the following .yaml specification to deploy a managed instance. Set the fields described in the spec.
+Prepare the following .yaml specification to deploy SQL. Set the fields described in the spec.
> [!NOTE]
-> The *admin-login-secret* in the yaml example is used for basic authentication. You can use it to login into the SQL managed instance, and then create SQL logins for AD users and groups. Check out [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md) for further details.
+> The *admin-login-secret* in the yaml example is used for basic authentication. You can use it to log in to the SQL managed instance, and then create logins for AD users and groups. Check out [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md) for further details.
+### [Customer-managed keytab mode](#tab/customer-managed-keytab-mode)
```yaml
apiVersion: v1
spec:
      size: 5Gi
```
+### [System-managed keytab mode](#tab/system-managed-keytab-mode)
+
+```yaml
+apiVersion: v1
+data:
+ password: <your base64 encoded password>
+ username: <your base64 encoded username>
+kind: Secret
+metadata:
+ name: admin-login-secret
+type: Opaque
+
+---
+apiVersion: sql.arcdata.microsoft.com/v3
+kind: SqlManagedInstance
+metadata:
+ name: <name>
+ namespace: <namespace>
+spec:
+ backup:
+ retentionPeriodInDays: 7
+ dev: false
+ tier: GeneralPurpose
+ forceHA: "true"
+ licenseType: LicenseIncluded
+ replicas: 1
+ security:
+ adminLoginSecret: admin-login-secret
+ activeDirectory:
+ connector:
+ name: <AD connector name>
+ namespace: <AD connector namespace>
+ accountName: <AD account name>
+
+  services:
+    primary:
+ type: LoadBalancer
+ dnsName: <Endpoint DNS name>
+ port: <Endpoint port number>
+ storage:
+ data:
+ volumes:
+ - accessMode: ReadWriteOnce
+ className: local-storage
+ size: 5Gi
+ logs:
+ volumes:
+ - accessMode: ReadWriteOnce
+ className: local-storage
+ size: 5Gi
+```
+++

### Deploy a managed instance

To deploy a managed instance using the prepared specification:
kubectl apply -f sqlmi.yaml
## Next steps
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+* [Connect to Active Directory integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy Automatic Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-automatic-active-directory-connector.md
- Title: Tutorial – Deploy an automatic Active Directory (AD) Connector
-description: Tutorial to deploy an automatic Active Directory (AD) Connector
------ Previously updated : 04/05/2022----
-# Tutorial – deploy an automatic Active Directory (AD) connector
-
-This article explains how to deploy an automatic Active Directory (AD) connector custom resource. It is a key component to enable the Azure Arc-enabled SQL Managed Instance with Active Directory. It applies to either integration mode (bring your own keytab (BYOK) or automatic).
-
-## Prerequisites
-
-Before you proceed, you must have:
-
-* An instance of Data Controller deployed on a supported version of Kubernetes
-* An Active Directory (AD) domain
-* A pre-created organizational unit (OU) in the Active Directory
-* An Active Directory (AD) domain service account
-
-The AD domain service account should have sufficient permissions to create users, groups, and machine accounts automatically inside the provided organizational unit (OU) in the active directory.
--
-The sufficient permission including the following:
-
-- Read all properties
-- Write all properties
-- Create User objects
-- Delete User objects
-- Reset Password for Descendant User objects
-
-## Input for deploying an automatic Active Directory (AD) connector
-
-To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment.
-
-These inputs are provided in a .yaml specification for an AD connector instance.
-
-The following metadata about the AD domain must be available before deploying an instance of AD connector:
-
-* Name of the Active Directory domain
-* List of the domain controllers (fully qualified domain names)
-* List of the DNS server IP addresses
-
-The following input fields are exposed to the users in the Active Directory Connector specification:
--- **Required**
- - `spec.activeDirectory.realm`
- Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
-
- - `spec.activeDirectory.domainControllers.primaryDomainController.hostname`
- Fully-qualified domain name of the Primary Domain Controller (PDC) in the AD domain.
-
- If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`.
-
- - `spec.activeDirectory.dns.nameserverIpAddresses`
- List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
--- **Optional**
- - `spec.activeDirectory.serviceAccountProvisioning` This is an optional field defines your AD connector deployment mode with possible value `bring your own keytab (BYOK)` or `automatic`. This field indicating whether the service account provisioning including SPN and keytab generation should be automatic or bring your own keytab (BYOK). The default is bring your own keytab (BYOK). When set to bring your own keytab (BYOK), the system will not take care of AD service account generation, SPN registration and keytab generation. When set to automatic, the service AD account is automatically generated and SPNs are registered on that account. A keytab file is generated then transported to SQL Managed Instance.
-
- - `spec.activeDirectory.ouDistinguishedName` This is an optional field. Though it becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the Distinguished Name (DN) of an Organizational Unit (OU) that the users must create in Active Directory domain before deploying AD Connector. It stores the system-generated AD accounts in active directory for AD LDAP server. The example of the value would look like: `OU=arcou,DC=contoso,DC=local`.
-
- - `spec.activeDirectory.domainServiceAccountSecret` This is an optional field. it becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to automatic. This field accepts a name of the Kubernetes secret that contains the username and password of the service Domain Service Account that was created prior to the AD deployment, the Security Support Service will use it to generate other AD users in the OU and perform actions on those AD accounts.
-
- - `spec.activeDirectory.netbiosDomainName`
- NETBIOS name of the Active Directory domain. This is the short domain name that represents the Active Directory domain.
-
- This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name.
-
- This field is optional. When not provided, it defaults to the first label of the `spec.activeDirectory.realm` field.
-
- In most domain environments, this is set to the default value but some domain environments may have a non-default value.
-
- - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname`
- List of the fully qualified domain names of the secondary domain controllers in the AD domain.
-
- If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations.
-
- This field is optional and not needed if your domain is served by only one domain controller.
-
- - `spec.activeDirectory.dns.domainName`
- DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
-
- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
-
- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
-
- - `spec.activeDirectory.dns.replicas`
- Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
-
- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
- Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
-
- DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
-
- This field is optional. When not provided, it defaults to true i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers.
-
- If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers.
-
-## Deploy an Automatic Active Directory (AD) connector
-
-To deploy an AD connector, create a YAML specification file called `active-directory-connector.yaml`.
-
-The following example is an example of an Automatic AD connector uses an AD domain of name `CONTOSO.LOCAL`. Ensure to replace the values with the ones for your AD domain. The `adarc-dsa-secret` contains the AD domain service account that was created prior to the AD deployment.
-
-> [!NOTE]
-> Make sure the password of provided domain service AD acccount here doesn't contain `!` as special characters.
->
-
-```yaml
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
- name: adarc-dsa-secret
- namespace: <namespace>
-data:
- password: <your base64 encoded password>
- username: <your base64 encoded username>
-
-apiVersion: arcdata.microsoft.com/v1beta2
-kind: ActiveDirectoryConnector
-metadata:
- name: adarc
- namespace: <namespace>
-spec:
- activeDirectory:
- realm: CONTOSO.LOCAL
- serviceAccountProvisioning: automatic
- ouDistinguishedName: "OU=arcou,DC=contoso,DC=local"
- domainServiceAccountSecret: adarc-dsa-secret
- domainControllers:
- primaryDomainController:
- hostname: dc1.contoso.local
- secondaryDomainControllers:
- - hostname: dc2.contoso.local
- - hostname: dc3.contoso.local
- dns:
- preferK8sDnsForPtrLookups: false
- nameserverIPAddresses:
- - <DNS Server 1 IP address>
- - <DNS Server 2 IP address>
-```
--
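-The `data` values in the secret must be base64 encoded. As a quick sketch (the username `arcdsa` is only an illustration; use your own domain service account credentials), the encoded values can be produced like this:
-
-```console
-echo -n 'arcdsa' | base64
-echo -n '<password>' | base64
-```
-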
-The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
-
-```console
-kubectl apply -f active-directory-connector.yaml
-```
-
-After submitting the deployment for the AD connector instance, you may check the status of the deployment using the following command.
-
-```console
-kubectl get adc -n <namespace>
-```
-
-## Next steps
-* [Deploy a bring your own keytab (BYOK) Active Directory (AD) connector](deploy-byok-active-directory-connector.md)
-* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy Byok Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-byok-active-directory-connector.md
- Title: Tutorial – Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
-description: Tutorial to deploy a bring your own keytab (BYOK) Active Directory (AD) connector
------ Previously updated : 04/05/2022---
-# Tutorial – Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
-
-This article explains how to deploy an Active Directory (AD) connector custom resource in bring your own keytab (BYOK) mode. The connector is a key component to enable Active Directory authentication for Azure Arc-enabled SQL Managed Instance, and it applies to both integration modes (bring your own keytab (BYOK) or automatic).
-
-## Prerequisites
-
-Before you proceed, you must have:
-
-* An instance of Data Controller deployed on a supported version of Kubernetes
-* An Active Directory (AD) domain
-
-The following instructions expect that the users bring their own Active Directory domain and provide the following to the AD bring your own keytab (BYOK) deployment:
-
-* An Active Directory user account for the managed instance
-* Service Principal Names (SPNs) under the user account
-* DNS record for the endpoint DNS name for managed instance
-
-## Before you deploy the managed instance
-
-1. Identify a DNS name for the managed instance endpoint.
-
- This is the DNS name for the endpoint that the managed instance will listen on for connections coming from outside the Kubernetes cluster.
-
- This DNS name should be in the Active Directory domain or its descendant domains.
-
- The examples in these instructions use `sqlmi.contoso.local` for the DNS name.
-
-2. Identify the port number for the managed instance endpoint.
-
- You must choose a port number for the endpoint that the managed instance will listen on for connections coming from outside the Kubernetes cluster.
-
- This port number must be within the acceptable range of port numbers for the Kubernetes cluster.
-
- The examples in these instructions use `31433` for the port number.
-
-3. Create an Active Directory account for the managed instance.
-
- Choose a name for the Active Directory account that will represent your managed instance. This name should be unique in the Active Directory domain.
-
- Using `Active Directory Users and Computers` on one of the domain controllers, create an account with the managed instance name.
-
- Provide a complex password to this account that is acceptable to the Active Directory domain password policy. This password will be needed in some of the next steps.
-
- The account does not need any special permissions. Ensure that the account is enabled.
-
- The examples in these instructions use `sqlmi-account` for the AD account name.
-
-4. Create a DNS record for the managed instance endpoint in the Active Directory DNS servers.
-
- In one of the Active Directory DNS servers, create an A record (forward lookup record) for the DNS name chosen in step 1. This DNS record should point to the IP address that the managed instance endpoint will listen on for connections from outside the Kubernetes cluster.
-
- You do not need to create a PTR record (reverse lookup record) in association with the A record.
-
-5. Create Service Principal Names (SPNs)
-
- In order for managed instance to be able to accept AD authentication against the managed instance endpoint DNS name, we need to register two SPNs under the account generated in the previous step. These two SPNs should be of the following format:
-
- ```output
- MSSQLSvc/<DNS name>
- MSSQLSvc/<DNS name>:<port>
- ```
-
- To register the SPNs, run the following commands on one of the domain controllers.
-
- ```console
- setspn -S MSSQLSvc/<DNS name> <account>
- setspn -S MSSQLSvc/<DNS name>:<port> <account>
- ```
-
- With the chosen example DNS name, port number and the account name in this document, the commands should look like the following:
-
- ```console
- setspn -S MSSQLSvc/sqlmi.contoso.local sqlmi-account
- setspn -S MSSQLSvc/sqlmi.contoso.local:31433 sqlmi-account
- ```
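-
- To verify the registration (an optional check; `sqlmi-account` is the example account name used in this article), `setspn -L` lists the SPNs registered under an account:
-
- ```console
- setspn -L sqlmi-account
- ```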
-
-6. Generate a keytab file containing entries for the account and SPNs
-
- For the managed instance to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file using a Kubernetes secret.
-
- The keytab file contains encrypted entries for the Active Directory account generated for the managed instance and the SPNs.
-
- SQL Server will use this file as its credential against Active Directory.
-
- There are multiple tools available to generate a keytab file.
- - `ktutil`: This tool is available on Linux
- - `ktpass`: This tool is available on Windows
- - `adutil`: This tool is available for Linux. See [Introduction to `adutil` - Active Directory utility](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
-
- To generate the keytab file specifically for the managed instance, use a bash shell script we have published. It wraps `ktutil` and `adutil` together. It is for use on Linux.
-
- A bash script that works on a Linux-based OS can be found here: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh).
- A PowerShell script that works on a Windows Server-based OS can be found here: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1).
-
- This script accepts several parameters and will output a keytab file and a yaml specification file for the Kubernetes secret containing the keytab.
-
- Use the following command to run the script after replacing the parameter values with the ones for your managed instance deployment.
-
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <AD domain in uppercase> --account <AD account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
- ```
-
- The input parameters expect the following values:
- * `--realm` expects the uppercase of the AD domain, such as CONTOSO.LOCAL
- * `--account` expects the name of the AD account under which the SPNs are registered, such as sqlmi-account
- * `--port` expects the port number for the SQL endpoint, such as 31433
- * `--dns-name` expects the DNS name for the SQL endpoint
- * `--keytab-file` expects the path to the keytab file
- * `--secret-name` expects the name of the keytab secret to generate a specification for
- * `--secret-namespace` expects the Kubernetes namespace containing the keytab secret
-
- Choose a name for the Kubernetes secret hosting the keytab. The namespace should be the same as what the managed instance will be deployed in.
-
- The following command creates a keytab. It uses values that this article describes:
-
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
- ```
-
- To verify that the keytab is correct, you may run the following command:
-
- ```console
- klist -kte <keytab file>
- ```
-
-## Deploy Kubernetes secret for the keytab
-
-Use the Kubernetes secret specification file generated in the previous step to deploy the secret.
-The specification file should look like the following:
-
-```yaml
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
- name: <secret name>
- namespace: <secret namespace>
-data:
- keytab: <keytab content in base64>
-```
-
-Deploy the Kubernetes secret with `kubectl apply -f <file>`. For example:
-
-```console
-kubectl apply -f sqlmi-keytab-secret.yaml
-```
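-
-As an alternative sketch that avoids hand-editing base64 content, `kubectl create secret generic` can build the same secret directly from the keytab file (the names shown are the example values used in this article):
-
-```console
-kubectl create secret generic sqlmi-keytab-secret --from-file=keytab=sqlmi.keytab --namespace sqlmi-ns
-```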
-
-## Active Directory (AD) bring your own keytab (BYOK) integration mode
-
-In the bring your own keytab (BYOK) integration mode, the user is responsible for the following setup steps:
-1. Creating and providing an Active Directory account for each managed instance that must accept AD authentication.
-1. Providing a DNS name belonging to the Active Directory DNS domain for the managed instance endpoint.
-1. Creating a DNS record in Active Directory for the SQL endpoint.
-1. Providing a port number for the managed instance endpoint.
-1. Registering Service Principal Names (SPNs) under the AD account in Active Directory domain for the SQL endpoint.
-1. Creating and providing a keytab file for managed instance containing entries for the AD account and SPNs.
-
-An Active Directory Connector instance stores the information needed to enable connections to DNS and AD for the purpose of authenticating users and service accounts. It also deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS servers:
-* Active Directory DNS Servers
-* Kubernetes DNS Servers
-
-The following diagram of Active Directory Connector and SQL Managed Instance describes how the AD bring your own keytab (BYOK) integration mode works:
-
-![Active Directory Connector](media/active-directory-deployment/active-directory-connector-byok.png)
-
-## Input for deploying Active Directory (AD) Connector
-
-To deploy an instance of Active Directory Connector, several inputs are needed from the Active Directory domain environment.
-
-These inputs are provided in a YAML specification of AD Connector instance.
-
-Following metadata about the AD domain must be available before deploying an instance of AD Connector:
-* Name of the Active Directory domain
-* List of the domain controllers (fully qualified domain names)
-* List of the DNS server IP addresses
-
-Following input fields are exposed to the users in the Active Directory Connector spec:
--- **Required**-
- - `spec.activeDirectory.realm`
- Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
-
- - `spec.activeDirectory.domainControllers.primaryDomainController.hostname`
- Fully qualified domain name of the primary domain controller in the AD domain.
-
- To identify the primary domain controller, run this command on any Windows machine joined to the AD domain:
-
- ```console
- netdom query fsmo
- ```
-
- - `spec.activeDirectory.dns.nameserverIpAddresses`
- List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
--- **Optional**-
- - `spec.activeDirectory.netbiosDomainName`
- NETBIOS name of the Active Directory domain. This is the short domain name that represents the Active Directory domain.
-
- This is often used to qualify accounts in the AD domain. For example, if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NetBIOS domain name.
-
- This field is optional. When not provided, it defaults to the first label of the `spec.activeDirectory.realm` field.
-
- In most domain environments, this is set to the default value but some domain environments may have a non-default value.
-
- - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname`
- List of the fully qualified domain names of the secondary domain controllers in the AD domain.
-
- If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations.
-
- This field is optional and not needed if your domain is served by only one domain controller.
-
- - `spec.activeDirectory.dns.domainName`
- DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
-
- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
-
- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
-
- - `spec.activeDirectory.dns.replicas`
- Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
-
- - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
- Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
-
- DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
-
- This field is optional. When not provided, it defaults to true i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers.
-
- If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers.
--
-## Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
-
-To deploy an AD connector, create a .yaml specification file called `active-directory-connector.yaml`.
-
-The following example shows a bring your own keytab (BYOK) AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain.
-
-```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
-kind: ActiveDirectoryConnector
-metadata:
- name: adarc
- namespace: <namespace>
-spec:
- activeDirectory:
- realm: CONTOSO.LOCAL
- domainControllers:
- primaryDomainController:
- hostname: dc1.contoso.local
- secondaryDomainControllers:
- - hostname: dc2.contoso.local
- - hostname: dc3.contoso.local
- dns:
- preferK8sDnsForPtrLookups: false
- nameserverIPAddresses:
- - <DNS Server 1 IP address>
- - <DNS Server 2 IP address>
-```
-
-The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
-
-```console
-kubectl apply -f active-directory-connector.yaml
-```
-
-After submitting the deployment of the AD Connector instance, you can check the status of the deployment using the following command.
-
-```console
-kubectl get adc -n <namespace>
-```
-
-## Next steps
-* [Deploy an Automatic Active Directory (AD) connector](deploy-automatic-active-directory-connector.md)
-* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
-
azure-arc Deploy Customer Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-customer-managed-keytab-active-directory-connector.md
+
+ Title: Tutorial – Deploy Active Directory (AD) Connector in customer-managed keytab mode
+description: Tutorial to deploy a customer-managed keytab Active Directory (AD) connector
++++++ Last updated : 04/05/2022+++
+# Tutorial – Deploy Active Directory (AD) connector in customer-managed keytab mode
+
+This article explains how to deploy Active Directory (AD) connector in customer-managed keytab mode. The connector is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+
+## Active Directory connector in customer-managed keytab mode
+
+In customer-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS servers:
+* Active Directory DNS Servers
+* Kubernetes DNS Servers
+
+The AD Connector facilitates the environment needed by SQL to authenticate AD logins.
+
+The following diagram shows AD Connector and DNS Proxy service functionality in customer-managed keytab mode:
+
+![Active Directory connector](media/active-directory-deployment/active-directory-connector-customer-managed.png)
+
+## Prerequisites
+
+Before you proceed, you must have:
+
+* An instance of Data Controller deployed on a supported version of Kubernetes
+* An Active Directory (AD) domain
+
+## Input for deploying Active Directory (AD) Connector
+
+To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment.
+
+These inputs are provided in a YAML specification of AD Connector instance.
+
+The following metadata about the AD domain must be available before deploying an instance of AD Connector:
+* Name of the Active Directory domain
+* List of the domain controllers (fully qualified domain names)
+* List of the DNS server IP addresses
+
+The following input fields are exposed to the users in the Active Directory connector spec:
+
+- **Required**
+
+ - `spec.activeDirectory.realm`
+ Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
+
+ - `spec.activeDirectory.dns.nameserverIpAddresses`
+ List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
+
+- **Optional**
+
+ - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. It is often used to qualify accounts in the AD domain. For example, if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NetBIOS domain name.
+
+ This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field.
+
+ In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name.
+
+ - `spec.activeDirectory.dns.domainName`
+ DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
+
+ A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
+
+ This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
+
+ - `spec.activeDirectory.dns.replicas`
+ Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
+
+ - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
+ Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
+
+ DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
+
+ This field is optional. When not provided, it defaults to `true` i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers. If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers. When set to `false`, these DNS lookups will be forwarded to AD DNS servers first and upon failure, fall back to Kubernetes.
++
+## Deploy a customer-managed keytab Active Directory (AD) connector
+
+To deploy an AD connector, create a .yaml specification file called `active-directory-connector.yaml`.
+
+The following example shows a customer-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain.
+
+```yaml
+apiVersion: arcdata.microsoft.com/v1beta1
+kind: ActiveDirectoryConnector
+metadata:
+ name: adarc
+ namespace: <namespace>
+spec:
+ activeDirectory:
+ realm: CONTOSO.LOCAL
+ dns:
+ preferK8sDnsForPtrLookups: false
+ nameserverIPAddresses:
+ - <DNS Server 1 IP address>
+ - <DNS Server 2 IP address>
+```
+
+The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
+
+```console
+kubectl apply -f active-directory-connector.yaml
+```
+
+After submitting the deployment of the AD Connector instance, you can check the status of the deployment using the following command.
+
+```console
+kubectl get adc -n <namespace>
+```
+
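+While the deployment is progressing, you can also inspect a specific connector instance in more detail with `kubectl describe` (a generic Kubernetes command, shown here with the example instance name `adarc` from the specification above):
+
+```console
+kubectl describe adc adarc -n <namespace>
+```
+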
+## Next steps
+* [Deploy a system-managed keytab Active Directory (AD) connector](deploy-system-managed-keytab-active-directory-connector.md)
+* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+
azure-arc Deploy System Managed Keytab Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-system-managed-keytab-active-directory-connector.md
+
+ Title: Tutorial – Deploy Active Directory connector in system-managed keytab mode
+description: Tutorial to deploy a system-managed keytab Active Directory connector
++++++ Last updated : 04/05/2022++++
+# Tutorial – Deploy Active Directory connector in system-managed keytab mode
+
+This article explains how to deploy Active Directory connector in system-managed keytab mode. It is a key component to enable Active Directory authentication on Azure Arc-enabled SQL Managed Instance.
+
+## Active Directory connector in system-managed keytab mode
+
+In system-managed keytab mode, an Active Directory connector deploys a DNS proxy service that proxies the DNS requests coming from the managed instance to either of the two upstream DNS servers:
+* Active Directory DNS Servers
+* Kubernetes DNS Servers
+
+In addition to the DNS proxy service, AD Connector also deploys a Security Support Service that facilitates communication to the AD domain for automatic creation and management of AD accounts, Service Principal Names (SPNs) and keytabs.
+
+The following diagram shows AD Connector and DNS Proxy service functionality in system-managed keytab mode:
+
+![Active Directory connector](media/active-directory-deployment/active-directory-connector-smk.png)
+
+## Prerequisites
+
+Before you proceed, you must have:
+
+* An instance of Data Controller deployed on a supported version of Kubernetes
+* An Active Directory domain
+* A pre-created organizational unit (OU) in the Active Directory domain
+* An Active Directory domain service account
+
+The AD domain service account should have sufficient permissions to automatically create and delete user accounts inside the provided organizational unit (OU) in Active Directory.
+
+Grant the following permissions - scoped to the Organizational Unit (OU) - to the domain service account:
+
+- Read all properties
+- Write all properties
+- Create User objects
+- Delete User objects
+- Reset Password for Descendant User objects
+
+For details about how to set up the OU and AD account, go to [Deploy Azure Arc-enabled data services in Active Directory authentication with system-managed keytab - prerequisites](active-directory-prerequisites.md).
+
+## Input for deploying Active Directory connector in system-managed keytab mode
+
+To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment.
+
+These inputs are provided in a yaml specification for the AD connector instance.
+
+The following metadata about the AD domain must be available before deploying an instance of AD connector:
+
+* Name of the Active Directory domain
+* List of the domain controllers (fully qualified domain names)
+* List of the DNS server IP addresses
+
+The following input fields are exposed to the users in the Active Directory connector specification:
+
+- **Required**
+ - `spec.activeDirectory.realm`
+ Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
+
+ - `spec.activeDirectory.domainControllers.primaryDomainController.hostname`
+ Fully qualified domain name of the Primary Domain Controller (PDC) in the AD domain.
+
+ If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`.
+
+ - `spec.activeDirectory.dns.nameserverIpAddresses`
+ List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
+
+- **Optional**
+ - `spec.activeDirectory.serviceAccountProvisioning` This is an optional field that defines your AD connector deployment mode. Possible values are `manual` for customer-managed keytab or `automatic` for system-managed keytab. When this field is not set, the value defaults to `manual`. When set to `automatic` (system-managed keytab), the system automatically generates AD accounts and Service Principal Names (SPNs) for the SQL Managed Instances associated with this AD Connector and creates keytab files for them. When set to `manual` (customer-managed keytab), the system does not automatically generate the AD account or the keytab; the user is expected to provide a keytab file.
+
+ - `spec.activeDirectory.ouDistinguishedName` This is an optional field. It becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the distinguished name (DN) of the organizational unit (OU) that the user must create in the Active Directory domain before deploying the AD Connector. It is used to store the system-generated AD accounts for SQL Managed Instances in the Active Directory domain. An example value looks like: `OU=arcou,DC=contoso,DC=local`.
+
+ - `spec.activeDirectory.domainServiceAccountSecret` This is an optional field. It becomes conditionally mandatory when the value of `serviceAccountProvisioning` is set to `automatic`. This field accepts the name of the Kubernetes secret that contains the username and password of the Domain Service Account that was created prior to the AD Connector deployment. The system will use this account to generate other AD accounts in the OU and perform actions on those AD accounts.
+
+ - `spec.activeDirectory.netbiosDomainName` NetBIOS name of the Active Directory domain. This is the short domain name (pre-Windows 2000 name) of your Active Directory domain. It is often used to qualify accounts in the AD domain. For example, if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NetBIOS domain name.
+
+ This field is optional. When not provided, its value defaults to the first label of the `spec.activeDirectory.realm` field.
+
+ In most domain environments, this is set to the default value but some domain environments may have a non-default value. You will need to use this field only when your domain's NetBIOS name does not match the first label of its fully qualified name.
+
+ - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname`
+ List of the fully qualified domain names of the secondary domain controllers in the AD domain.
+
+ If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations.
+
+ This field is optional. When a value is not provided, the system automatically detects the secondary domain controllers.
+
+ - `spec.activeDirectory.dns.domainName`
+ DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
+
+ A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
+
+ This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
+
+ - `spec.activeDirectory.dns.replicas`
+ Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
+
+ - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
+ Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
+
+ DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
+
+ This field is optional. When not provided, it defaults to `true` i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers. If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers. When set to `false`, these DNS lookups will be forwarded to AD DNS servers first and upon failure, fall back to Kubernetes.
+
+## Deploy Active Directory connector in system-managed keytab mode
+
+To deploy an AD connector, create a YAML specification file called `active-directory-connector.yaml`.
+
+The following is an example of a system-managed keytab AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. The `adarc-dsa-secret` contains the AD domain service account that was created prior to the AD deployment.
+
+> [!NOTE]
+> Make sure the password of the provided domain service AD account doesn't contain the special character `!`.
+>
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ name: adarc-dsa-secret
+ namespace: <namespace>
+data:
+ password: <your base64 encoded password>
+ username: <your base64 encoded username>
+
+---
+apiVersion: arcdata.microsoft.com/v1beta2
+kind: ActiveDirectoryConnector
+metadata:
+ name: adarc
+ namespace: <namespace>
+spec:
+ activeDirectory:
+ realm: CONTOSO.LOCAL
+ serviceAccountProvisioning: automatic
+ ouDistinguishedName: "OU=arcou,DC=contoso,DC=local"
+ domainServiceAccountSecret: adarc-dsa-secret
+ domainControllers:
+ primaryDomainController:
+ hostname: dc1.contoso.local
+ secondaryDomainControllers:
+ - hostname: dc2.contoso.local
+ - hostname: dc3.contoso.local
+ dns:
+ preferK8sDnsForPtrLookups: false
+ nameserverIPAddresses:
+ - <DNS Server 1 IP address>
+ - <DNS Server 2 IP address>
+```
++
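+Rather than hand-encoding the base64 values for the secret shown above, you can let `kubectl` do the encoding. The following is a minimal sketch; the key names `username` and `password` match the secret specification, and the placeholder values are yours to supply:
+
+```console
+kubectl create secret generic adarc-dsa-secret --namespace <namespace> --from-literal=username='<domain service account name>' --from-literal=password='<domain service account password>'
+```
+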
+The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
+
+```console
+kubectl apply -f active-directory-connector.yaml
+```
+
+After submitting the deployment for the AD connector instance, you may check the status of the deployment using the following command.
+
+```console
+kubectl get adc -n <namespace>
+```
+
+## Next steps
+* [Deploy a customer-managed keytab Active Directory connector](deploy-customer-managed-keytab-active-directory-connector.md)
+* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
The following image shows a properly configured distributed availability group:
2. Provision the managed instance in the secondary site and configure as a disaster recovery instance. At this point, the system databases are not part of the contained availability group. ```azurecli
- az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --disaster-recovery-site true --k8s-namespace <namespace> --use-k8s
+ az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
``` 3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
The following image shows a properly configured distributed availability group:
```azurecli az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
- az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role primary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
+ az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
``` ## Manual failover from primary to secondary instance
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
You can create a maintenance window on the data controller, and if you have SQL
Metrics for each replica in a business critical instance are now sent to the Azure portal so you can view them in the monitoring charts.
-AD authentication connectors can now be set up in an `automatic mode` which will use a service account to automatically create SQL service accounts, SPNs, and DNS entries as an alternative to the AD authentication connectors which use the `Bring Your Own Keytab` mode.
+AD authentication connectors can now be set up in `automatic mode`, also known as *system-managed keytab* mode, which uses a service account to automatically create SQL service accounts, SPNs, and DNS entries, as an alternative to AD authentication connectors that use the *customer-managed keytab* mode.
+
+> [!NOTE]
+> In some early releases, customer-managed keytab mode was called *bring your own keytab* mode.
Backup and point-in-time-restore when a database has Transparent Data Encryption (TDE) enabled is now supported.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that is sent to yo
- Destinations must be in the same region as the Log Analytics workspace. - Storage Account must be unique across rules in workspace. - Table names can be no longer than 60 characters when exporting to Storage Account and 47 characters to Event Hubs. Tables with longer names will not be exported.-- Data export isn't supported in Government regions currently
+- Data export isn't supported in China currently.
## Data completeness Data export is optimized for moving large data volume to your destinations, and in certain retry conditions, can include a fraction of duplicated records. The export operation could fail when ingress limits are reached, see details under [Create or update data export rule](#create-or-update-data-export-rule). In such case, a retry continues for up to 30 minutes, and if destination is unavailable yet, data will be discarded until destination becomes available.
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/28/2022++ Last updated : 05/06/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | servers / databases | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. | > | servers / databases / syncGroups | database | 1-150 | Alphanumerics, hyphens, and underscores. | > | servers / elasticPools | server | 1-128 | Can't use:<br>`<>*%&:\/?` or control characters<br><br>Can't end with period or space. |
-> | servers / failoverGroups | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. <br><br> Can't have hyphen twice in both third and fourth place. For example, `ab--cde` is not allowed. |
+> | servers / failoverGroups | global | 1-63 | Lowercase letters, numbers, and hyphens.<br><br>Can't start or end with hyphen. |
> | servers / firewallRules | server | 1-128 | Can't use:<br>`<>*%&:;\/?` or control characters<br><br>Can't end with period. | > | servers / keys | server | | Must be in format:<br>`VaultName_KeyName_KeyVersion`. |
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
This section helps you deploy Bastion to your VNet. Once Bastion is deployed, yo
1. When you finish specifying the settings, select **Review + Create**. This validates the values.
-1. Once validation passes, you can deploy Bastion. Select **Create**. You'll see a message letting you know that your deployment is process. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
+1. Once validation passes, you can deploy Bastion. Select **Create**. You'll see a message letting you know that your deployment is in process. Status will display on this page as the resources are created. It takes about 10 minutes for the Bastion resource to be created and deployed.
## <a name="connect"></a>Connect to a VM
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-ocr.md
The Read API includes the following features.
* Select pages and page ranges from large, multi-page documents * Natural reading order option for text line output (Latin only) * Handwriting classification for text lines (Latin only)
-* Available as Distroless Docker container for on-premise deployment
+* Available as Distroless Docker container for on-premises deployment
Learn [how to use the OCR features](./vision-api-how-to-topics/call-read-api.md).
-## Use the cloud API or deploy on-premise
+## Use the cloud API or deploy on-premises
The Read 3.x cloud APIs are the preferred option for most customers because of ease of integration and fast productivity out of the box. Azure and the Computer Vision service handle scale, performance, data security, and compliance needs while you focus on meeting your customers' needs.
-For on-premise deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
+For on-premises deployment, the [Read Docker container (preview)](./computer-vision-how-to-install-containers.md) enables you to deploy the new OCR capabilities in your own local environment. Containers are great for specific security and data governance requirements.
> [!WARNING] > The Computer Vision [RecognizeText](https://westus.dev.cognitive.microsoft.com/docs/services/5cd27ec07268f6c679a3e641/operations/587f2c6a1540550560080311) and [ocr](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f20d) operations are no longer maintained, and are in the process of being deprecated in favor of the new [Read API](#read-api) covered in this article. Existing customers should [transition to using Read operations](upgrade-api-versions.md).
cognitive-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-concepts.md
Last updated 04/12/2022
-zone_pivot_groups: programming-languages-speech-sdk
+zone_pivot_groups: programming-languages-speech-sdk-cli
# Captioning with speech to text
The following are aspects to consider when using captioning:
* Center captions horizontally on the screen, in a large and prominent font. * Consider whether to use partial results, when to start displaying captions, and how many words to show at a time. * Learn about captioning protocols such as [SMPTE-TT](https://ieeexplore.ieee.org/document/7291854).
-* Consider output formats such as SRT (SubRip Subtitle) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
+* Consider output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
> [!TIP] > Try the [Azure Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
-Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+Captioning can accompany real time or pre-recorded speech. Whether you're showing captions in real time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+
+## Caption output format
+
+The Speech service supports output formats such as SRT (SubRip Text) and WebVTT (Web Video Text Tracks). These can be loaded onto most video players such as VLC, automatically adding the captions on to your video.
+
+The [SRT](https://docs.fileformat.com/video/srt/) (SubRip Text) timespan output format is `hh:mm:ss,fff`.
+
+```srt
+1
+00:00:00,180 --> 00:00:03,230
+Welcome to applied Mathematics course 201.
+```
+
+The [WebVTT](https://www.w3.org/TR/webvtt1/#introduction) (Web Video Text Tracks) timespan output format is `hh:mm:ss.fff`.
+
+```
+WEBVTT
+
+00:00:00.180 --> 00:00:03.230
+Welcome to applied Mathematics course 201.
+{
+ "ResultId": "8e89437b4b9349088a933f8db4ccc263",
+ "Duration": "00:00:03.0500000"
+}
+```
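+As a rough sketch of producing both formats with the Speech CLI (the input file name `caption.this.mp4` is only an example), a command along the following lines writes SRT and WebVTT output to standard output:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file - --output srt file -
+```
+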
## Input audio to the Speech service
For captioning of prerecorded speech or wherever latency isn't a concern, you co
Real time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
-You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
+You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
::: zone pivot="programming-language-csharp" ```csharp
self.speechConfig!.setPropertyTo(5, by: SPXPropertyId.speechServiceResponseStabl
speech_config.set_property(property_id = speechsdk.PropertyId.SpeechServiceResponse_StablePartialResultThreshold, value = 5) ``` ::: zone-end
+```console
+spx recognize --file caption.this.mp4 --format any --property SpeechServiceResponse_StablePartialResultThreshold=5 --output vtt file - --output srt file -
+```
Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
self.speechConfig!.setProfanityOptionTo(SPXSpeechConfigProfanityOption_Profanity
speech_config.set_profanity(speechsdk.ProfanityOption.Removed) ``` ::: zone-end
+```console
+spx recognize --file caption.this.mp4 --format any --profanity masked --output vtt file - --output srt file -
+```
Profanity filter is applied to the result `Text` and `MaskedNormalizedForm` properties. Profanity filter isn't applied to the result `LexicalForm` and `NormalizedForm` properties. Neither is the filter applied to the word level results.
There are some situations where [training a custom model](custom-speech-overview
## Next steps
-* [Get started with speech to text](get-started-speech-to-text.md)
+* [Captioning quickstart](captioning-quickstart.md)
* [Get speech recognition results](get-speech-recognition-results.md)
cognitive-services Captioning Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/captioning-quickstart.md
+
+ Title: "Create captions with speech to text quickstart - Speech service"
+
+description: In this quickstart, you convert speech to text as captions.
++++++ Last updated : 04/23/2022+
+ms.devlang: cpp, csharp
+zone_pivot_groups: programming-languages-speech-sdk-cli
++
+# Quickstart: Create captions with speech to text
++++++++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-speech-recognition-results.md
Last updated 03/31/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
-zone_pivot_groups: programming-languages-speech-sdk
+zone_pivot_groups: programming-languages-speech-sdk-cli
keywords: speech to text, speech to text software
keywords: speech to text, speech to text software
[!INCLUDE [Python include](./includes/how-to/recognize-speech-results/python.md)] ::: zone-end + ## Next steps * [Try the speech to text quickstart](get-started-speech-to-text.md)
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Now try Speech Studio to see how phrase list can improve recognition accuracy.
1. Sign in to [Speech Studio](https://speech.microsoft.com/). 1. Select **Real-time Speech-to-text**. 1. You test speech recognition by uploading an audio file or recording audio with a microphone. For example, select **record audio with a microphone** and then say "Hi Rehaan, this is Jessie from Contoso bank. " Then select the red button to stop recording.
-1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jesse", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step.
+1. You should see the transcription result in the **Test results** text box. If "Rehaan", "Jessie", or "Contoso" were recognized incorrectly, you can add the terms to a phrase list in the next step.
1. Select **Show advanced options** and turn on **Phrase list**. 1. Enter "Contoso;Jessie;Rehaan" in the phrase list text box. Note that multiple phrases need to be separated by a semicolon. :::image type="content" source="./media/custom-speech/phrase-list-after-zoom.png" alt-text="Screenshot of a phrase list applied in Speech Studio." lightbox="./media/custom-speech/phrase-list-after-full.png":::
-1. Use the microphone to test recognition again. Otherwise you can select the retry arrow next to your audio file to re-run your audio. The terms "Rehaan", "Jesse", or "Contoso" should be recognized.
+1. Use the microphone to test recognition again. Otherwise you can select the retry arrow next to your audio file to re-run your audio. The terms "Rehaan", "Jessie", or "Contoso" should be recognized.
## Implement phrase list
cognitive-services Quickstart Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/quickstart-custom-commands-application.md
In this quickstart, you create and test a basic Custom Commands application using Speech Studio. You will also be able to access this application from a Windows client app. ## Region Availability
-At this time, Custom Commands supports speech subscriptions created in these regions:
-* West US
-* West US2
-* East US
-* East US2
-* West Central US
-* North Europe
-* West Europe
-* East Asia
-* Southeast Asia
-* Central India
+At this time, Custom Commands supports speech subscriptions created in regions that have [voice assistant capabilities](./regions.md#voice-assistants).
## Prerequisites
cognitive-services Spx Batch Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-batch-operations.md
Title: "Speech CLI batch operations - Speech service"
+ Title: "Run batch operations with the Speech CLI - Speech service"
description: Learn how to do batch speech to text (speech recognition), batch text to speech (speech synthesis) with the Speech CLI.
-# Speech CLI batch operations
+# Run batch operations with the Speech CLI
Common tasks when using Azure Speech services are batch operations. In this article, you'll learn how to do batch speech to text (speech recognition) and batch text to speech (speech synthesis) with the Speech CLI. Specifically, you'll learn how to:
cognitive-services Spx Data Store Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-data-store-configuration.md
Title: "Speech CLI configuration options - Speech service"
+ Title: "Configure the Speech CLI datastore - Speech service"
-description: Learn how to create and manage configuration files for use with the Azure Speech CLI.
+description: Learn how to configure the Speech CLI datastore.
Previously updated : 01/13/2021 Last updated : 05/01/2022
-# Speech CLI configuration options
+# Configure the Speech CLI datastore
-Speech CLI's behavior can rely on settings in configuration files, which you can refer to using a `@` symbol. The Speech CLI saves a new setting in a new `./spx/data` subdirectory that is created in the current working directory for the Speech CLI. When looking for a configuration value, the Speech CLI searches your current working directory, then in the datastore at `./spx/data`, and then in other datastores, including a final read-only datastore in the `spx` binary.
+The [Speech CLI](spx-basics.md) can rely on settings in configuration files, which you can refer to using a `@` symbol. The Speech CLI saves a new setting in a new `./spx/data` subdirectory that is created in the current working directory for the Speech CLI. When looking for a configuration value, the Speech CLI searches your current working directory, then in the datastore at `./spx/data`, and then in other datastores, including a final read-only datastore in the `spx` binary.
In the [Speech CLI quickstart](spx-basics.md), you used the datastore to save your `@key` and `@region` values, so you did not need to specify them with each `spx` command. Keep in mind, that you can use configuration files to store your own configuration settings, or even use them to pass URLs or other dynamic content generated at runtime.
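As a minimal illustration of that pattern (replace the placeholder values with your own Speech resource key and region), the following commands store two settings and then display one of them:

```console
spx config @key --set <your-speech-resource-key>
spx config @region --set <your-speech-resource-region>
spx config @region
```
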
-> [!NOTE]
-> In PowerShell, the [stop-parsing token](/powershell/module/microsoft.powershell.core/about/about_special_characters#stop-parsing-token) (`--%`) should follow `spx`. For example, run `spx --% config @region` to view the current region config value.
+For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
-## Create and manage configuration files in the datastore
+```console
+spx help advanced setup
+```
-This section shows how to use a configuration file in the local datastore to store and fetch command settings using `spx config`, and store output from Speech CLI using the `--output` option.
+## nodefaults
The following example clears the `@my.defaults` configuration file, adds key-value pairs for **key** and **region** in the file, and uses the configuration in a call to `spx recognize`.
spx config @my.defaults
spx recognize --nodefaults @my.defaults --file hello.wav ```
-You can also write dynamic content to a configuration file. For example, the following command creates a custom speech model and stores the URL of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
+## Dynamic configuration
+
+You can also write dynamic content to a configuration file using the `--output` option.
+
+For example, the following command creates a custom speech model and stores the URL of the new model in a configuration file. The next command waits until the model at that URL is ready for use before returning.
```console spx csr model create --name "Example 4" --datasets @my.datasets.txt --output url @my.model.txt
spx csr model status --model @my.model.txt --wait
The following example writes two URLs to the `@my.datasets.txt` configuration file. In this scenario, `--output` can include an optional **add** keyword to create a configuration file or append to the existing one. - ```console spx csr dataset create --name "LM" --kind Language --content https://crbn.us/data.txt --output url @my.datasets.txt spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/audio.zip --output add url @my.datasets.txt
spx csr dataset create --name "AM" --kind Acoustic --content https://crbn.us/aud
spx config @my.datasets.txt ```
-For more details about datastore files, including use of default configuration files (`@spx.default`, `@default.config`, and `@*.default.config` for command-specific default settings), enter this command:
+## SPX config add
+
+For readability, flexibility, and convenience, you can use a preset configuration with select output options.
+
+For example, you might have the following requirements for [captioning](captioning-quickstart.md):
+- Recognize from the input file `caption.this.mp4`.
+- Output WebVTT and SRT captions to the files `caption.vtt` and `caption.srt` respectively.
+- Output the `offset`, `duration`, `resultid`, and `text` of each recognizing event to the file `each.result.tsv`.
+
+You can create a preset configuration named `@caption.defaults` as shown here:
```console
-spx help advanced setup
+spx config @caption.defaults --clear
+spx config @caption.defaults --add output.each.recognizing.result.offset=true
+spx config @caption.defaults --add output.each.recognizing.result.duration=true
+spx config @caption.defaults --add output.each.recognizing.result.resultid=true
+spx config @caption.defaults --add output.each.recognizing.result.text=true
+spx config @caption.defaults --add output.all.file.name=output.result.tsv
+spx config @caption.defaults --add output.each.file.name=each.result.tsv
+spx config @caption.defaults --add output.srt.file.name=caption.srt
+spx config @caption.defaults --add output.vtt.file.name=caption.vtt
+```
+
+The settings are saved to the current directory in a file named `caption.defaults`. Here are the file contents:
+
+```
+output.each.recognizing.result.offset=true
+output.each.recognizing.result.duration=true
+output.each.recognizing.result.resultid=true
+output.each.recognizing.result.text=true
+output.all.file.name=output.result.tsv
+output.each.file.name=each.result.tsv
+output.srt.file.name=caption.srt
+output.vtt.file.name=caption.vtt
+```
+
+Then, to generate [captions](captioning-quickstart.md), you can run this command that imports settings from the `@caption.defaults` preset configuration:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt --output srt @caption.defaults
+```
+
+Using the preset configuration as shown previously is similar to running the following command:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each file each.result.tsv --output all file output.result.tsv --output each recognizer recognizing result offset --output each recognizer recognizing duration --output each recognizer recognizing result resultid --output each recognizer recognizing text
```

## Next steps
cognitive-services Spx Output Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-output-options.md
+
+ Title: "Configure the Speech CLI output options - Speech service"
+
+description: Learn how to configure output options with the Speech CLI.
+++++ Last updated : 05/01/2022+++
+# Configure the Speech CLI output options
+
+The [Speech CLI](spx-basics.md) output can be written to standard output or specified files.
+
+For contextual help in the Speech CLI, you can run any of the following commands:
+
+```console
+spx help recognize output examples
+spx help synthesize output examples
+spx help translate output examples
+spx help intent output examples
+```
+
+## Standard output
+
+If the file argument is a hyphen (`-`), the results are written to standard output as shown in the following example.
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file - --output srt file - --output each file - @output.each.detailed --property SpeechServiceResponse_StablePartialResultThreshold=0 --profanity masked
+```
+
+## Default file output
+
+If you omit the `file` option, output is written to default files in the current directory.
+
+For example, run the following command to write WebVTT and SRT [captions](captioning-concepts.md) to their own default files:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt --output srt --output each text --output all duration
+```
+
+The default file names are as follows, where the `<EPOCH_TIME>` is replaced at run time.
+- The default SRT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.srt`
+- The default Web VTT file name includes the input file name and the local operating system epoch time: `output.caption.this.<EPOCH_TIME>.vtt`
+- The default `output each` file name, `each.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file isn't created unless you specify the `--output each` option.
+- The default `output all` file name, `output.<EPOCH_TIME>.tsv`, includes the local operating system epoch time. This file is created by default.
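+
+For example, with the input file `caption.this.mp4` and a hypothetical epoch time of `1651234567`, the preceding command produces default files with names similar to the following (the epoch value is illustrative only):
+
+```
+output.caption.this.1651234567.srt
+output.caption.this.1651234567.vtt
+each.1651234567.tsv
+output.1651234567.tsv
+```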
+
+## Output to specific files
+
+For output to files that you specify instead of the [default files](#default-file-output), set the `file` option to the file name.
+
+For example, to output both WebVTT and SRT [captions](captioning-concepts.md) to files that you specify, run the following command:
+
+```console
+spx recognize --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt --output each text --output each file each.result.tsv --output all file output.result.tsv
+```
+
+The preceding command also outputs the `each` and `all` results to the specified files.
+
+## Output to multiple files
+
+For translations with `spx translate`, separate files are created for the source language (such as `--source en-US`) and each target language (such as `--target de;fr;zh-Hant`).
+
+For example, to output translated SRT and WebVTT captions, run the following command:
+
+```console
+spx translate --source en-US --target de;fr;zh-Hant --file caption.this.mp4 --format any --output vtt file caption.vtt --output srt file caption.srt
+```
+
+Captions should then be written to the following files: *caption.srt*, *caption.vtt*, *caption.de.srt*, *caption.de.vtt*, *caption.fr.srt*, *caption.fr.vtt*, *caption.zh-Hant.srt*, and *caption.zh-Hant.vtt*.
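+
+As with `spx recognize`, you can omit the `file` option so that output goes to default file names instead. Here's a sketch that assumes the default file output behavior described earlier also applies to `spx translate`:
+
+```console
+spx translate --source en-US --target de;fr;zh-Hant --file caption.this.mp4 --format any --output vtt --output srt
+```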
+
+## Suppress header
+
+You can suppress the header line in the output file by setting the `has header false` option:
+
+```
+spx recognize --nodefaults @my.defaults --file audio.wav --output recognized text --output file has header false
+```
+
+See [Configure the Speech CLI datastore](spx-data-store-configuration.md#nodefaults) for more information about `--nodefaults`.
+
+## Next steps
+
+* [Captioning quickstart](./captioning-quickstart.md)
cosmos-db Cassandra Monitor Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/cassandra-monitor-insights.md
+
+ Title: Monitor and debug with insights in Azure Cosmos DB Cassandra API
+description: Learn how to debug and monitor your Azure Cosmos DB Cassandra API account using insights
+++++ Last updated : 05/02/2022+++
+# Monitor and debug with insights in Azure Cosmos DB Cassandra API
+
+Azure Cosmos DB helps provide insights into your application's performance by using the Azure Monitor API. Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your Cassandra API account and create dashboards.
+
+This article walks through some common use cases and how best to use Azure Cosmos DB insights to analyze and debug your Cassandra API account.
+> [!NOTE]
+> The Azure Cosmos DB metrics are collected by default; this feature doesn't require you to enable or configure anything.
++
+## Availability
+The availability chart shows the percentage of successful requests over the total requests per hour. Use it to monitor service availability for a specified Cassandra API account.
+++
+## Latency
+The charts below show the read and write latency observed by your Cassandra API account in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. This metric doesn't represent the end-to-end request latency. Use diagnostic logs for cases where you experience high latency for query operations.
+
+The server-side latency (avg) by region chart also displays sudden latency spikes on the server. It can help you differentiate between a client-side latency spike and a server-side latency spike.
++
+You can also view server-side latency by operation in a specific keyspace.
+++++
+Is your application experiencing any throttling? The chart below shows the total number of requests that failed with a 429 response code.
+Exceeding provisioned throughput could be one of the reasons. Enable [Server Side Retry](./prevent-rate-limiting-errors.md) when your application experiences high throttling due to consuming more request units than are allocated.
++++
+## System and management operations
+The system view shows the metadata request count by primary partition and helps identify throttled requests. The management operations view shows account activities, such as creation, deletion, and changes to key, network, and replication settings, along with request volume per status code over a time period.
++
+- Metric chart for account diagnostic, network, and replication settings over a specified period, filtered by keyspace.
+++
+- Metric chart to view account key rotation.
+
+You can view changes to the primary or secondary password for your Cassandra API account.
+++
+## Storage
+The storage charts show the distribution between raw and index storage, along with a count of documents in the Cassandra API account.
++
+You can also view the maximum request unit consumption for an account over a defined time period.
+++
+## Throughput and requests
+The Total Request Units metric displays request unit usage by operation type.
+
+These operations can be analyzed within a given time interval, a defined keyspace, or a table.
+++
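+The same metrics are available programmatically through Azure Monitor. For example, here's a sketch that assumes the Azure CLI `az monitor metrics list` command and the `TotalRequestUnits` metric name, retrieving request unit usage for the last hour:
+
+```azurecli
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>" \
+  --metric "TotalRequestUnits" \
+  --interval PT1H \
+  --aggregation Total
+```
+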
+The Normalized RU Consumption metric is a value between 0% and 100% that helps measure the utilization of provisioned throughput on a database or container. The metric can also be used to view the utilization of individual partition key ranges on a database or container. One of the main factors of a scalable application is having a good cardinality of partition keys.
+The chart below shows whether your application's high RU consumption is caused by a hot partition.
++
+The chart below shows a breakdown of requests by status code. To understand the meaning of the different codes, see [Cassandra API error codes](./error-codes-solution.md).
+++
+## Next steps
+- [Monitor and debug with insights in Azure Cosmos DB](../use-metrics.md)
+- [Create alerts for Azure Cosmos DB using Azure Monitor](../create-alerts.md)
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
To view and reuse some samples of standard custom setups, complete the following
If a Data Source Name (DSN) is used in the connection, DSN configuration is needed in the setup script. For example: C:\Windows\SysWOW64\odbcconf.exe /A {CONFIGSYSDSN "MySQL ODBC 8.0 Unicode Driver" "DSN=\<dsnname\>|PORT=3306|SERVER=\<servername\>"}
- * An *ORACLE ENTERPRISE* folder, which contains a custom setup script (*main.cmd*) and silent installation config file (*client.rsp*) to install the Oracle connectors and OCI driver on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the Oracle Connection Manager, Source, and Destination to connect to the Oracle server.
+ * An *ORACLE ENTERPRISE* folder, which contains a custom setup script (*main.cmd*) to install the Oracle connectors and OCI driver on each node of your Azure-SSIS IR Enterprise Edition. This setup lets you use the Oracle Connection Manager, Source, and Destination to connect to the Oracle server.
- First, download Microsoft Connectors v5.0 for Oracle (*AttunitySSISOraAdaptersSetup.msi* and *AttunitySSISOraAdaptersSetup64.msi*) from [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=55179) and the latest Oracle client (for example, *winx64_12102_client.zip*) from [Oracle](https://www.oracle.com/database/technologies/oracle19c-windows-downloads.html). Next, upload them all together with *main.cmd* and *client.rsp* to your blob container. If you use TNS to connect to Oracle, you also need to download *tnsnames.ora*, edit it, and upload it to your blob container. In this way, it can be copied to the Oracle installation folder during setup.
+ First, download Microsoft Connectors v5.0 for Oracle (*AttunitySSISOraAdaptersSetup.msi* and *AttunitySSISOraAdaptersSetup64.msi*) from [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=55179) and the latest Oracle Instant Client (for example, *instantclient-basic-windows.x64-21.3.0.0.0.zip*) from [Oracle](https://www.oracle.com/cis/database/technologies/instant-client/downloads.html). Next, upload them all together with *main.cmd* to your blob container. If you use TNS to connect to Oracle, you also need to download *tnsnames.ora*, edit it, and upload it to your blob container. In this way, it can be copied to the Oracle installation folder during setup.
* An *ORACLE STANDARD ADO.NET* folder, which contains a custom setup script (*main.cmd*) to install the Oracle ODP.NET driver on each node of your Azure-SSIS IR. This setup lets you use the ADO.NET Connection Manager, Source, and Destination to connect to the Oracle server.
defender-for-cloud File Integrity Monitoring Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-usage.md
FIM baselines start by identifying characteristics of a known-good state for the
|Policy Name | Registry Setting|
-||-|
+|-|--|
|Domain controller: Refuse machine account password changes| MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RefusePasswordChange|
|Domain member: Digitally encrypt or sign secure channel data (always)|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\RequireSignOrSeal|
|Domain member: Digitally encrypt secure channel data (when possible)|MACHINE\System\CurrentControlSet\Services \Netlogon\Parameters\SealSecureChannel|
FIM baselines start by identifying characteristics of a known-good state for the
To configure FIM to monitor registry baselines:
-1. In the **Add Windows Registry for Change Tracking** window, in the **Windows Registry Key** text box, enter the following registry key:
+- In the **Add Windows Registry for Change Tracking** window, in the **Windows Registry Key** text box, enter the following registry key:
```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters
```
In the example in the following figure,
File Integrity Monitoring data resides within the Azure Log Analytics / ConfigurationChange table set.

1. Set a time range to retrieve a summary of changes by resource.
-In the following example, we are retrieving all changes in the last fourteen days in the categories of registry and files:
- <code>
+ In the following example, we are retrieving all changes in the last fourteen days in the categories of registry and files:
- > ConfigurationChange
-
- > | where TimeGenerated > ago(14d)
-
- > | where ConfigChangeType in ('Registry', 'Files')
-
- > | summarize count() by Computer, ConfigChangeType
-
- </code>
+ ```
+ ConfigurationChange
+ | where TimeGenerated > ago(14d)
+ | where ConfigChangeType in ('Registry', 'Files')
+ | summarize count() by Computer, ConfigChangeType
+ ```
1. To view details of the registry changes:

   1. Remove **Files** from the **where** clause.
   1. Remove the summarization line and replace it with an ordering clause:
- <code>
-
- > ConfigurationChange
-
- > | where TimeGenerated > ago(14d)
-
- > | where ConfigChangeType in ('Registry')
-
- > | order by Computer, RegistryKey
-
- </code>
+ ```
+ ConfigurationChange
+ | where TimeGenerated > ago(14d)
+ | where ConfigChangeType in ('Registry')
+ | order by Computer, RegistryKey
+ ```
Reports can be exported to CSV for archival and/or channeled to a Power BI report.
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/overview.md
Microsoft Defender for IoT provides lightweight security agents so that you can
- **Security posture management**: You can proactively monitor the security posture of your IoT devices. Defender for IoT provides security posture recommendations based on the CIS benchmark, along with device-specific recommendations. Get visibility into operating system security, including OS configuration, firewall settings, and permissions.
- **Endpoint threat detection**: Detect threats like botnets, brute force attempts, crypto miners, and suspicious network activity. Create custom alerts to target the most important threats in your organization.
-- **IoT Hub integration**: Defender for IoT is enabled by default in every new IoT Hub that is created. Defender for IoT provides real-time monitoring, recommendations, and alerts, without requiring agent installation on any devices, and uses advanced analytics on logged IoT Hub meta data to analyze and protect your field devices and IoT hubs
+- **IoT Hub integration**: Defender for IoT is enabled by default in every new IoT Hub that is created. Defender for IoT provides real-time monitoring, recommendations, and alerts, without requiring agent installation on any devices. Defender for IoT uses advanced analytics on logged IoT Hub meta data to analyze and protect your field devices and IoT hubs.
## Security posture management
Microsoft Defender for IoT provides lightweight security agents so that you can
The Defender for IoT micro agent enables you to quickly improve your organization's device security and defense capabilities by offering CIS best practice configurations, along with constant identification of any existing weak links in your OS security posture. CIS benchmark-based OS baseline recommendations help identify issues with device security hygiene, and prioritize changes for security hardening.

-- CIS benchmarks are the best practices for securely configuring a target system. CIS benchmarks are developed through a unique consensus-based process comprised of cybersecurity professionals and subject matter experts around the world.
+- CIS benchmarks are the best practices for securely configuring a target system. CIS benchmarks are developed through a unique, consensus-based process, comprised of cybersecurity professionals and subject matter experts around the world.
- CIS benchmarks are the only consensus-based, best-practice security configuration guides that are both developed, and accepted by government, business, industry, and academia.
The Defender for IoT micro agent provides deep security protection, and visibili
- The micro agent collects, aggregates, and analyzes raw security events from your devices. Events can include IP connections, process creation, user logons, and other security-relevant information.
- Defender for IoT device agents handles event aggregation, to help avoid high network throughput.
-- The micro agent has flexible deployment options. The micro agent includes source code, so you can incorporate it into firmware, or customize it to include only what you need. It's also available as a binary package, or integrated directly into other Azure IoT solutions. The micro agent is available for standard IoT operating systems like Linux and Azure RTOS.
+- The micro agent has flexible deployment options. The micro agent includes source code, so you can incorporate it into firmware, or customize it to include only what you need. It's also available as a binary package, or integrated directly into other Azure IoT solutions. The micro agent is available for standard IoT operating systems, such as Linux and Azure RTOS.
- The agents are highly customizable, allowing you to use them for specific tasks, such as sending only important information at the fastest SLA, or for aggregating extensive security information and context into larger segments, avoiding higher service costs.
The Defender for IoT micro agent provides deep security protection, and visibili
The Defender for IoT analytics pipeline also receives other threat intelligence streams from various sources within Microsoft and Microsoft partners. The entire analytics pipeline works with every customer configuration made on the service, such as custom alerts and use of the send security message SDK.
-Using the analytics pipeline, Defender for IoT combines all streams of information to generate actionable recommendations and alerts. The pipeline contains both custom rules created by security researchers and experts,as well as machine learning models searching for deviation from standard device behavior, and risk analysis.
+Using the analytics pipeline, Defender for IoT combines all streams of information to generate actionable recommendations and alerts. The pipeline contains both custom rules created by security researchers and experts, as well as machine learning models searching for deviation from standard device behavior, and risk analysis.
:::image type="content" source="media/overview/micro-agent-architecture.png" alt-text="The micro agent architecture.":::
defender-for-iot Quickstart Onboard Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-onboard-iot-hub.md
The **Secure your IoT solution** button will only appear if the IoT Hub has not
1. Navigate to **IoT Hub** > **`Your hub`** > **Defender for IoT** > **Overview**.
-1. The Threat prevention, and Threat detection screen will appear.
+ The Threat prevention and Threat detection screen will appear.
:::image type="content" source="media/quickstart-onboard-iot-hub/threat-prevention.png" alt-text="Screenshot showing that Defender for IoT is enabled." lightbox="media/quickstart-onboard-iot-hub/threat-prevention-expanded.png"::: ## Next steps
-Advance to the next article to add a resource group to your solution...
+Advance to the next article to add a resource group to your solution.
> [!div class="nextstepaction"] > [Add a resource group to your IoT solution](tutorial-configure-your-solution.md)
defender-for-iot Tutorial Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-standalone-agent-binary-installation.md
In this tutorial you will learn how to:
- You must have [enabled Microsoft Defender for IoT on your Azure IoT Hub](quickstart-onboard-iot-hub.md).

-- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md)
+- You must have [added a resource group to your IoT solution](quickstart-configure-your-solution.md).
-- You must have [Create a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
+- You must have [created a Defender for IoT micro agent module twin (Preview)](quickstart-create-micro-agent-module-twin.md).
## Download and install the micro agent
You will need to copy the module identity connection string from the DefenderIoT
`HostName=<the host name of the iot hub>;DeviceId=<the id of the device>;ModuleId=<the id of the module>;x509=true`
- This string alerts the Defender for IoT agent, to expect a certificate be provided for authentication.
+ This string alerts the Defender for IoT agent to expect a certificate to be provided for authentication.
1. Restart the service using the following command:
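
   For example, assuming the `defender-iot-micro-agent.service` name used later in this tutorial for the status check, the restart command is likely:

   ```bash
   # Assumes the micro agent service name shown in the validation step of this tutorial
   sudo systemctl restart defender-iot-micro-agent.service
   ```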
You will need to copy the module identity connection string from the DefenderIoT
**To validate your installation**:
-1. Use the following command to ensure the micro agent is running properly with:
+1. Use the following command to ensure the micro agent is running properly:
```bash
systemctl status defender-iot-micro-agent.service
```
You can test the system by creating a trigger file on the device. The trigger fi
Allow up to one hour for the recommendation to appear in the hub.
-A baseline recommendation called 'IoT_CISBenchmarks_DIoTTest' is created. You can query this recommendation fro Log Analytics as follows:
+A baseline recommendation called 'IoT_CISBenchmarks_DIoTTest' is created. You can query this recommendation from Log Analytics as follows:
```kusto
SecurityRecommendation
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Defender for IoT connects to both cloud and on-premises components, and is built
Defender for IoT systems include the following components:

-- The Azure portal, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel
+- The Azure portal, for cloud management and integration to other Microsoft services, such as Microsoft Sentinel.
- Network sensors, deployed on either a virtual machine or a physical appliance. You can configure your OT sensors as cloud-connected sensors, or fully on-premises sensors.
- An on-premises management console for cloud-connected or local, air-gapped site management.
- An embedded security agent (optional).
Defender for IoT network sensors discover and continuously monitor network traff
- Sensors use IoT and OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect IoT and OT threats, such as fileless malware, based on anomalous or unauthorized activity.
-Data collection, processing, analysis, and alerting takes place directly on the sensor, which can be ideal for locations with low bandwidth or high latency connectivity because only metadata is transferred on, either to the Azure portal for cloud management, or an on-premises management console.
+Data collection, processing, analysis, and alerting takes place directly on the sensor. Running processes directly on the sensor can be ideal for locations with low bandwidth or high-latency connectivity because only the metadata is transferred on for management, either to the Azure portal or an on-premises management console.
-### Cloud-connected vs local sensors
+### Cloud-connected vs. local sensors
Cloud-connected sensors are sensors that are connected to Defender for IoT in Azure, and differ from locally managed sensors as follows:
Defender for IoT sensors apply analytics engines on ingested data, triggering al
Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
-For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industry control system (ICS) networks as deterministic sequences of states and transitionsΓÇöusing a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine establishes a baseline of the ICS networks, so that the platform requires a shorter learning period to build a baseline of the network than generic mathematical approaches or analytics, which were originally developed for IT rather than OT networks.
+For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industry control system (ICS) networks as deterministic sequences of states and transitions - using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for industrial control system (ICS) networks. Since many detection algorithms were built for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the system's learning curve for new detections.
Specifically for OT networks, OT network sensors also provide the following analytics engines:

-- **Protocol violation detection engine**. Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and Initiation of an obsolete function code alerts.
+- **Protocol violation detection engine**. Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and initiation of an obsolete function code alerts.
- **Industrial malware detection engine**. Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
Specifically for OT networks, OT network sensors also provide the following anal
Defender for IoT provides hybrid network support using the following management options:

-- **The Azure portal**. Use the Azure portal as a single pane of glass view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json), and more.
+- **The Azure portal**. Use the Azure portal as a single pane of glass to view all data ingested from your devices via network sensors. The Azure portal provides extra value, such as [workbooks](workbooks.md), [connections to Microsoft Sentinel](../../sentinel/iot-solution.md?bc=%2fazure%2fdefender-for-iot%2fbreadcrumb%2ftoc.json&tabs=use-out-of-the-box-analytics-rules-recommended&toc=%2fazure%2fdefender-for-iot%2forganizations%2ftoc.json), and more.
Also use the Azure portal to obtain new appliances and software updates, onboard and maintain your sensors in Defender for IoT, and update threat intelligence packages.
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
# Defender for IoT software installation
-This article describes how to install software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're re-installing software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
+This article describes how to install software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
## Pre-installation configuration
-Each appliance type comes with it's own set of instructions that are required before installing Defender for IoT software.
+Each appliance type comes with its own set of instructions that are required before installing Defender for IoT software.
Make sure that you've completed the procedures as instructed in the **Reference > OT monitoring appliance** section of our documentation before installing Defender for IoT software.
Mount the ISO file using one of the following options:
This procedure describes how to install OT sensor software on a physical or virtual appliance.

> [!Note]
-> At the end of this process you will be presented with the usernames, and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
+> At the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.
**To install the sensor's software**:
This procedure describes how to install OT sensor software on a physical or virt
1. The sensor will reboot, and the **Package configuration** screen will appear. Press the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-1. Select the monitor interface, and press the **ENTER** key.
+1. Select the monitor interface and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
This procedure describes how to install OT sensor software on a physical or virt
1. Enter the DNS Server IP address, and press the **ENTER** key.
-1. Enter the sensor hostname, and press the **ENTER** key.
+1. Enter the sensor hostname and press the **ENTER** key.
:::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the screen where you enter a hostname for your sensor.":::
-1. The installation process runs.
+ The installation process runs.
1. When the installation process completes, save the appliance ID, and passwords. Copy these credentials to a safe place as you'll need them to access the platform the first time you use it.
This procedure describes how to install on-premises management console software
The installation process takes about 20 minutes. After the installation, the system is restarted several times.
-During the installation process, you can add a secondary NIC. If you choose not to install the secondary NIC during installation, you can [add a secondary NIC](#add-a-secondary-nic-optional) at a later time.
+During the installation process, you can add a secondary NIC. If you choose not to install the secondary Network Interface Card (NIC) during installation, you can [add a secondary NIC](#add-a-secondary-nic-optional) at a later time.
**To install the software**:
During the installation process, you can add a secondary NIC. If you choose not
|--|--|
| **configure management network interface** | For Dell: **eth0, eth1** <br /> For HP: **enu1, enu2** <br> Or <br />**possible value** |
| **configure management network IP address** | Enter an IP address |
- | **configure subnet mask:** | Enter an IP address|
- | **configure DNS:** | Enter an IP address |
- | **configure default gateway IP address:** | Enter an IP address|
+ | **configure subnet mask** | Enter an IP address|
+ | **configure DNS** | Enter an IP address |
+ | **configure default gateway IP address** | Enter an IP address|
-1. **(Optional)** If you would like to install a secondary Network Interface Card (NIC), define the following appliance profile, and network properties:
+1. **(Optional)** If you would like to install a secondary NIC, define the following appliance profile, and network properties:
:::image type="content" source="media/tutorial-install-components/on-prem-secondary-nic-install.png" alt-text="Screenshot that shows the Secondary NIC install questions."::: | Parameter | Configuration | |--|--| | **configure sensor monitoring interface** (Optional) | **eth1** or **possible value** |
- | **configure an IP address for the sensor monitoring interface:** | Enter an IP address |
- | **configure a subnet mask for the sensor monitoring interface:** | Enter an IP address |
+ | **configure an IP address for the sensor monitoring interface** | Enter an IP address |
+ | **configure a subnet mask for the sensor monitoring interface** | Enter an IP address |
-1. Accept the settlings and continue by typing `Y`.
+1. Accept the settings and continue by typing `Y`.
1. After about 10 minutes, the two sets of credentials appear. One is for a **CyberX** user, and one is for a **Support** user.
- :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they will not be presented again.":::
+ :::image type="content" source="media/tutorial-install-components/credentials-screen.png" alt-text="Copy these credentials as they won't be presented again.":::
- Save the usernames, and passwords, you'll need these credentials to access the platform the first time you use it.
+ Save the usernames and passwords, you'll need these credentials to access the platform the first time you use it.
1. Select **Enter** to continue.
This procedure describes how to add a secondary NIC if you've already installed
| **Subnet mask** | `N` |
| **DNS** | `N` |
| **Default gateway IP Address** | `N` |
- | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y` and select a possible value |
+ | **Sensor monitoring interface** <br>Optional. Relevant when sensors are on a different network segment.| `Y`, and select a possible value |
| **An IP address for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
| **A subnet mask for the sensor monitoring interface** | `Y`, and enter an IP address that's accessible by the sensors|
| **Hostname** | Enter the hostname |
-1. Review all choices, and enter `Y` to accept the changes. The system reboots.
+1. Review all choices and enter `Y` to accept the changes. The system reboots.
### Find your port
-If you are having trouble locating the physical port on your device, you can use the following command to:
+If you are having trouble locating the physical port on your device, you can use the following command to find your port:
```bash
sudo ethtool -p <port value> <time-in-seconds>
```
-This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes allowing you to find the port on the back of your appliance.
+This command will cause the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120`, will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
## Post-installation validation

To validate the installation of a physical appliance, you need to perform many tests. The same validation process applies to all the appliance types.
-Perform the validation by using the GUI or the CLI. The validation is available to the user **Support** and the user **CyberX**.
+Perform the validation by using the GUI or the CLI. The validation is available to both the **Support** and **CyberX** users.
Post-installation validation must include the following tests:
Check your system health from the sensor or on-premises management console. For
#### System

-- **Core Log**: Provides the last 500 rows of the core log, enabling you to view the recent log rows without exporting the entire system log.
+- **Core Log**: Provides the last 500 rows of the core log, so that you can view the recent log rows without exporting the entire system log.
- **Task Manager**: Translates the tasks that appear in the table of processes to the following layers:
Check your system health from the sensor or on-premises management console. For
### Check system health by using the CLI
-Verify that the system is up, and running prior to testing the system's sanity.
+Verify that the system is up and running prior to testing the system's sanity.
**To test the system's sanity**:
Verify that all the input interfaces configured during the installation process
**To validate the system's network status**:
-1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the user **Support**.
+1. Connect to the CLI with the Linux terminal (for example, PuTTY) and the **Support** user.
1. Enter `network list` (the equivalent of the Linux command `ifconfig`).
Verify that you can access the console web GUI:
1. To apply the settings, select **Y**.
-1. After restart, connect with the support user credentials and use the `network list` command to verify that the parameters were changed.
+1. After restart, connect with the **Support** user credentials and use the `network list` command to verify that the parameters were changed.
1. Try to ping and connect from the GUI again.
Verify that you can access the console web GUI:
1. Connect a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-1. Use the **Support** user's credentials to sign in.
+1. Use the **Support** user credentials to sign in.
1. Use the `system sanity` command and check that all processes are running.
You can enhance system security by preventing direct user access to the sensor.
**To enable tunneling**:
-1. Sign in to the on-premises management console's CLI with the **CyberX**, or the **Support** user credentials.
+1. Sign in to the on-premises management console's CLI with the **CyberX** or the **Support** user credentials.
1. Enter `sudo cyberx-management-tunnel-enable`.
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
If the upload fails, contact your security or IT administrator, or review the in
1. Select **Save**.
-For more information about first-time certificate upload see,
-[First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist)
+For more information about first-time certificate upload, see [First-time sign-in and activation checklist](how-to-activate-and-set-up-your-sensor.md#first-time-sign-in-and-activation-checklist).
## Define backup and restore settings
If you are working with an on-premises management console and managed sensors, *
1. Select **Download** and save the file.
-1. Log into on-premises management console and select **System Settings** from the side menu.
+1. Sign into the on-premises management console and select **System Settings** from the side menu.
1. On the **Version Update** pane, select **Update**.
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
We recommend having your certificates ready before you start your deployment. Fo
1. Prepare the LAN cable for connecting the management to the network switch.
-1. Prepare the LAN cables for connecting switch SPAN (mirror) ports, and network taps to the Defender for IoT appliance.
+1. Prepare the LAN cables for connecting switch SPAN (mirror) ports and network taps to the Defender for IoT appliance.
1. Configure, connect, and validate SPAN ports in the mirrored switches as described in the architecture review session.
-1. Connect the configured SPAN port to a computer running Wireshark and verify that the port is configured correctly.
+1. Connect the configured SPAN port to a computer running Wireshark, and verify that the port is configured correctly.
1. Open all the relevant firewall ports.
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
|--|--|--|--|--|--|--|--|
-| SSH | TCP | In/Out | 22 | CLI | To access the CLI. | Client | Sensor and on-premises management console |
-| HTTPS | TCP | In/Out | 443 | To access the sensor, and on-premises management console web console. | Access to Web console | Client | Sensor and on-premises management console |
+| SSH | TCP | In/Out | 22 | CLI | To access the CLI | Client | Sensor and on-premises management console |
+| HTTPS | TCP | In/Out | 443 | To access the sensor, and on-premises management console web console | Access to Web console | Client | Sensor and on-premises management console |
### Sensor access to Azure portal
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
|--|--|--|--|--|--|--|--|
-| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console. | Sensor | On-premises management console |
+| NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console | Sensor | On-premises management console |
| SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console |

### Other firewall rules for external services (optional)
Open these ports to allow extra services for Defender for IoT.
| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
|--|--|--|--|--|--|--|--|
-| SMTP | TCP | Out | 25 | Email | Used to open the customer's mail server, in order to send emails for alerts, and events. | Sensor and On-premises management console | Email server |
-| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port. | On-premises management console and Sensor | DNS server |
+| SMTP | TCP | Out | 25 | Email | Used to open the customer's mail server, in order to send emails for alerts, and events | Sensor and On-premises management console | Email server |
+| DNS | TCP/UDP | In/Out | 53 | DNS | The DNS server port | On-premises management console and Sensor | DNS server |
| HTTP | TCP | Out | 80 | The CRL download for certificate validation when uploading certificates. | Access to the CRL server | Sensor and on-premises management console | CRL server |
-| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring. | Sensor | Relevant network element |
-| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health. | On-premises management console and Sensor | SNMP server |
-| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system. | On-premises management console and Sensor | LDAP server |
+| [WMI](how-to-configure-windows-endpoint-monitoring.md) | TCP/UDP | Out | 135, 1025-65535 | Monitoring | Windows Endpoint Monitoring | Sensor | Relevant network element |
+| [SNMP](how-to-set-up-snmp-mib-monitoring.md) | UDP | Out | 161 | Monitoring | Monitors the sensor's health | On-premises management console and Sensor | SNMP server |
+| LDAP | TCP | In/Out | 389 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAP server |
| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server |
-| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server. | On-premises management console and Sensor | Syslog server |
-| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system. | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console. </br></br> Port 22 from the sensor to the on-premises management console. | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
+| Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server | On-premises management console and Sensor | Syslog server |
+| LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAPS server |
+| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
## Choose a cloud connection method
This section provides troubleshooting for common issues when preparing your netw
1. Connect with a monitor and keyboard to the appliance, or use PuTTY to connect remotely to the CLI.
-2. Use the support credentials to sign in.
+2. Use the **support** credentials to sign in.
3. Use the **system sanity** command and check that all processes are running.
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-The Internet of Things (IoT) supports billions of connected devices that use operational technology (OT) networks. IoT/OT devices and networks are often designed without security in priority, and therefore can't be protected by traditional systems. With each new wave of innovation, the risk to IoT devices and OT networks increases the possible attack surfaces.
+The Internet of Things (IoT) supports billions of connected devices that use operational technology (OT) networks. IoT/OT devices and networks are often designed without prioritizing security, and therefore can't be protected by traditional systems. With each new wave of innovation, the possible attack surface and the risk to IoT devices and OT networks increase.
Microsoft Defender for IoT is a unified security solution for identifying IoT and OT devices, vulnerabilities, and threats and managing them through a central interface. This set of documentation describes how end-user organizations can secure their entire IoT/OT environment, including protecting existing devices or building security into new IoT innovations. :::image type="content" source="media/overview/end-to-end-coverage.png" alt-text="Diagram showing an example of Defender for IoT's end-to-end coverage solution.":::
-**For end-user organizations**, Microsoft Defender for IoT provides an agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments or completely on-premises.
+**For end-user organizations**, Microsoft Defender for IoT provides an agentless, network-layer monitoring that integrates smoothly with industrial equipment and SOC tools. You can deploy Microsoft Defender for IoT in Azure-connected and hybrid environments, or completely on-premises.
-**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight, micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/overview.md).
+**For IoT device builders**, Microsoft Defender for IoT also offers a lightweight micro-agent that supports standard IoT operating systems, such as Linux and RTOS. The Microsoft Defender device builder agent helps you ensure that security is built into your IoT/OT projects, from the cloud. For more information, see [Microsoft Defender for IoT for device builders documentation](../device-builders/overview.md).
## Agentless device monitoring
Agentless monitoring in Defender for IoT provides visibility and security into n
- **Assess risks and manage vulnerabilities** using machine learning, threat intelligence, and behavioral analytics. For example:
- - Identify unpatched devices, open ports, unauthorized applications, unauthorized connections, changes to device configurations, PLC code, and firmware, and more.
+ - Identify unpatched devices, open ports, unauthorized applications, unauthorized connections, changes to device configurations, PLC code, firmware, and more.
    - Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
    - Detect advanced threats that you may have missed by static IOCs, such as zero-day malware, fileless malware, and living-off-the-land tactics.

-- **Respond to threats** by integrating with Microsoft services, such as Microsoft Sentinel, and third-party systems and APIs. Use advanced integrations for security information and event management (SIEM), security operations and response (SOAR), and extended detection and response (XDR) services, and more.
+- **Respond to threats** by integrating with Microsoft services, such as Microsoft Sentinel, third-party systems, and APIs. Use advanced integrations for security information and event management (SIEM), security operations and response (SOAR), extended detection and response (XDR) services, and more.
A centralized user experience lets the security team visualize and secure all their IT, IoT, and OT devices regardless of where the devices are located.
Contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.c
Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
-Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, unique devices.
+Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary devices.
When you expand Microsoft Defender for IoT into the enterprise network, you can apply Microsoft 365 Defender's features for asset discovery and use Microsoft Defender for Endpoint for a single, integrated package that can secure all of your IoT/OT infrastructure.
-Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in area's of your organizations network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any devices discovered on the network by either service.
+Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any devices discovered on the network by either service.
For more information, see the [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint).
event-hubs Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/transport-layer-security-configure-minimum-version.md
To configure the minimum TLS version for an Event Hubs namespace with a template
Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Event Hubs resource provider.
+## Check the minimum required TLS version for a namespace
+
+To check the minimum required TLS version for your Event Hubs namespace, you can query the Azure Resource Manager API. You will need a Bearer token to query against the API, which you can retrieve using [ARMClient](https://github.com/projectkudu/ARMClient) by executing the following commands.
+
+```powershell
+.\ARMClient.exe login
+.\ARMClient.exe token <your-subscription-id>
+```
+
+Once you have your bearer token, you can use the script below in combination with something like [Rest Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) to query the API.
+
+```http
+@token = Bearer <Token received from ARMClient>
+@subscription = <your-subscription-id>
+@resourceGroup = <your-resource-group-name>
+@namespaceName = <your-namespace-name>
+
+###
+GET https://management.azure.com/subscriptions/{{subscription}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.EventHub/namespaces/{{namespaceName}}?api-version=2022-01-01-preview
+content-type: application/json
+Authorization: {{token}}
+```
+
+The response should look something like the following example, with `minimumTlsVersion` set under the properties.
+
+```json
+{
+ "sku": {
+ "name": "Premium",
+ "tier": "Premium",
+ "capacity": 1
+ },
+ "id": "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group-name>/providers/Microsoft.EventHub/namespaces/<your-namespace-name>",
+ "name": "<your-namespace-name>",
+ "type": "Microsoft.EventHub/Namespaces",
+ "location": "West Europe",
+ "properties": {
+ "minimumTlsVersion": "1.2",
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": false,
+ "zoneRedundant": true,
+ "isAutoInflateEnabled": false,
+ "maximumThroughputUnits": 0,
+ "kafkaEnabled": true,
+ "provisioningState": "Succeeded",
+ "status": "Active"
+ }
+}
+```
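+
+If you use the Azure CLI, you can read the same property directly. Here's a sketch that assumes the `az resource show` command and the same preview API version:
+
+```azurecli
+az resource show \
+  --resource-group <your-resource-group-name> \
+  --resource-type Microsoft.EventHub/namespaces \
+  --name <your-namespace-name> \
+  --api-version 2022-01-01-preview \
+  --query "properties.minimumTlsVersion"
+```
+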
+ ## Test the minimum TLS version from a client

To test that the minimum required TLS version for an Event Hubs namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
healthcare-apis Fhir Versioning Policy And History Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-versioning-policy-and-history-management.md
Previously updated : 05/05/2022 Last updated : 05/06/2022
Versioning policy available to configure at as a system-wide setting and also to
To configure versioning policy, select the **Versioning Policy Configuration** blade inside your FHIR service.
-[ ![Screenshot of the Azure portal Versioning Policy Configuration.](media/versioning-policy/fhir-service-versioning-policy-configuration.png) ](media/versioning-policy/fhir-service-versioning-policy-configuration.png#lightbox)
After you've browsed to Versioning Policy Configuration, you'll be able to configure the setting at both system level and the resource level (as an override of the system level). The system level configuration (annotated as 1) will apply to every resource in your FHIR service unless a resource specific override (annotated at 2) has been configured.
-[ ![Screenshot of Azure portal versioning policy configuration showing system level vs resource level configuration.](media/versioning-policy/system-level-versus-resource-level.png) ](media/versioning-policy/system-level-versus-resource-level.png#lightbox)
When configuring resource level configuration, you'll be able to select the FHIR resource type (annotated as 1) and the specific versioning policy for this specific resource (annotated as 2). Make sure to select the **Add** button (annotated as 3) to queue up this setting for saving.
-[ ![Screenshot of Azure portal versioning policy configuration showing resource level configuration.](media/versioning-policy/resource-versioning.jpg) ](media/versioning-policy/resource-versioning.jpg#lightbox)
- **Make sure** to select **Save** after you've completed your versioning policy configuration.
-[ ![Screenshot of Azure portal versioning policy configuration configuration showing save button.](media/versioning-policy/save-button.jpg) ](media/versioning-policy/save-button.jpg#lightbox)
-## History Management
+## History management
History in FHIR is important for end users to see how a resource has changed over time. It's also useful in coordination with audit logs to see the state of a resource before and after a user modified it. In general, it's recommended to keep history for a resource unless you know that the history isn't needed. Frequent updates of resources can result in a large amount of data storage, which can be undesirable in FHIR services with a large amount of data.
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
In Microsoft Purview, you can register and scan source types. Once the scan is c
:::image type="content" source="./media/asset-insights/file-path.png" alt-text="View file paths":::
-8. View the list of files within the folder. Navigate back to Insights using the bread crumbs.
+8. View the list of files within the folder. Navigate back to Data Estate Insights using the bread crumbs.
:::image type="content" source="./media/asset-insights/list-page.png" alt-text="View list of assets":::
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Microsoft Purview uses **Collections** to organize and manage access across its
A collection is a tool Microsoft Purview uses to group assets, sources, and other artifacts into a hierarchy for discoverability and to manage access control. All accesses to Microsoft Purview's resources are managed from collections in the Microsoft Purview account itself. > [!NOTE]
-> As of November 8th, 2021, ***Microsoft Purview Data Estate Insights*** is accessible to Data Curators. Data Readers do not have access to Insights.
+> As of November 8th, 2021, ***Microsoft Purview Data Estate Insights*** is accessible to Data Curators. Data Readers do not have access to Data Estate Insights.
## Roles
Microsoft Purview uses a set of predefined roles to control who can access what
|I just need to find assets, I don't want to edit anything|Data reader| |I need to edit information about assets, assign classifications, associate them with glossary entries, and so on.|Data curator| |I need to edit the glossary or set up new classification definitions|Data curator|
-|I need to view Insights to understand the governance posture of my data estate|Data curator|
+|I need to view Data Estate Insights to understand the governance posture of my data estate|Data curator|
|My application's Service Principal needs to push data to Microsoft Purview|Data curator| |I need to set up scans via the Microsoft Purview governance portal|Data curator on the collection **or** data curator **and** data source administrator where the source is registered.| |I need to enable a Service Principal or group to set up and monitor scans in Microsoft Purview without allowing them to access the catalog's information |Data source administrator|
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-insights.md
Title: Understand Insights reports in Microsoft Purview
-description: This article explains what Insights are in Microsoft Purview.
+ Title: Understand Data Estate Insights reports in Microsoft Purview
+description: This article explains what Data Estate Insights are in Microsoft Purview.
Last updated 12/02/2020
-# Understand Insights in Microsoft Purview
+# Understand Data Estate Insights in Microsoft Purview
-This article provides an overview of the Insights feature in Microsoft Purview.
+This article provides an overview of the Data Estate Insights feature in Microsoft Purview.
-Insights are one of the key pillars of Microsoft Purview. The feature provides customers, a single pane of glass view into their catalog and further aims to provide specific insights to the data source administrators, business users, data stewards, data officer and, security administrators. Currently, Microsoft Purview has the following Insights reports that will be available to customers during Insight's public preview.
+Data Estate Insights are one of the key pillars of Microsoft Purview. The feature provides customers a single pane of glass view into their catalog, and further aims to provide specific insights to data source administrators, business users, data stewards, data officers, and security administrators. Currently, Microsoft Purview has the following Data Estate Insights reports that will be available to customers during the feature's public preview.
> [!IMPORTANT] > Microsoft Purview Data Estate Insights are currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Insights are one of the key pillars of Microsoft Purview. The feature provides c
This report gives a bird's eye view of your data estate and its distribution by source type, by classification, and by file size as some of the dimensions. This report caters to different types of stakeholders in the data governance and cataloging roles who are interested in knowing the state of their data map by classification and file extensions.
-The report provides broad insights through graphs and KPIs and later deep dive into specific anomalies such as misplaced files. The report also supports an end-to-end customer experience, where customer can view count of assets with a specific classification, can breakdown the information by source types and top folders, and can also view the list of assets for further investigation.
+The report provides broad insights through graphs and KPIs, and then lets you dive deeper into specific anomalies such as misplaced files. The report also supports an end-to-end customer experience, where customers can view the count of assets with a specific classification, break down the information by source types and top folders, and view the list of assets for further investigation.
> [!NOTE]
-> File Extension Insights has been merged into Asset Insights with richer trend report showing growth in data size by file extension. Learn more by exploring [Asset Insights](asset-insights.md)
+> File Extension Insights has been merged into Asset Insights with richer trend report showing growth in data size by file extension. Learn more by exploring [Asset Insights](asset-insights.md).
## Glossary Insights
-This report gives the Data Stewards a status report on glossary. Data Stewards can view this report to understand distribution of glossary terms by status, learn how many glossary terms are attached to assets and how many are not yet attached to any asset. Business users can also learn about completeness of their glossary terms.
+This report gives Data Stewards a status report on the glossary. Data Stewards can view this report to understand the distribution of glossary terms by status, learn how many glossary terms are attached to assets, and see how many aren't yet attached to any asset. Business users can also learn about the completeness of their glossary terms.
-This report summarizes top items that a Data Steward needs to focus on, to create a complete and usable glossary for his/her organization. Stewards can also navigate into the "Glossary" experience from "Glossary Insights" experience, to make changes on a specific glossary term.
+This report summarizes top items that a Data Steward needs to focus on, to create a complete and usable glossary for their organization. Stewards can also navigate into the "Glossary" experience from "Glossary Insights" experience, to make changes on a specific glossary term.
## Classification Insights
This report provides details about where classified data is located, the classif
In Microsoft Purview, classifications are similar to subject tags, and are used to mark and identify content of a specific type in your data estate.
-Use the Classification Insights report to identify content with specific classifications and understand required actions, such as adding additional security to the repositories, or moving content to a more secure location.
+Use the Classification Insights report to identify content with specific classifications and understand required actions, such as adding more security to the repositories, or moving content to a more secure location.
For more information, see [Classification insights about your data from Microsoft Purview](classification-insights.md). ## Sensitivity Labeling Insights
-This report provides details about the sensitivity labels found during a scan, as well as a drill-down to the labeled files themselves. It enables security administrators to ensure the security of information found in their organization's data estate.
+This report provides details about the sensitivity labels found during a scan, and a drill-down to the labeled files themselves. It enables security administrators to ensure the security of information found in their organization's data estate.
-In Microsoft Purview, sensitivity labels are used to identify classification type categories, as well as the group security policies that you want to apply to each category.
+In Microsoft Purview, sensitivity labels are used to identify classification type categories, and the group security policies that you want to apply to each category.
Use the Labeling Insights report to identify the sensitivity labels found in your content and understand required actions, such as managing access to specific repositories or files.
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/glossary-insights.md
This how-to guide describes how to access, view, and filter Microsoft Purview Gl
In this how-to guide, you'll learn how to: > [!div class="checklist"]
-> - Go to Insights from your Microsoft Purview account
+> - Go to Data Estate Insights from your Microsoft Purview account
> - Get a bird's eye view of your data ## Prerequisites
In Microsoft Purview, you can create glossary terms and attach them to assets. L
:::image type="content" source="./media/glossary-insights/glossary-view-more.png" alt-text="Snapshot of terms with and without assets":::
-4. When you select "View more" for ***Approved terms with assets***, Insights allow you to navigate to the **Glossary** term detail page, from where you can further navigate to the list of assets with the attached terms.
+4. When you select "View more" for ***Approved terms with assets***, Data Estate Insights allow you to navigate to the **Glossary** term detail page, from where you can further navigate to the list of assets with the attached terms.
- :::image type="content" source="./media/glossary-insights/navigate-to-glossary-detail.png" alt-text="Insights to glossary":::
+ :::image type="content" source="./media/glossary-insights/navigate-to-glossary-detail.png" alt-text="Data Estate Insights to glossary":::
4. On the Glossary Insights page, view a distribution of **Incomplete terms** by the type of information missing. The graph shows the count of terms with ***Missing definition***, ***Missing expert***, ***Missing steward***, and ***Missing multiple*** fields.
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md
Last updated 12/06/2021
Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multi-cloud, and software-as-a-service (SaaS) data. Create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Enable data curators to manage and secure your data estate. Empower data consumers to find valuable, trustworthy data. >[!TIP] > Looking to govern your data in Microsoft 365 by keeping what you need and deleting what you don't? Use [Microsoft Purview Data Lifecycle Management](/microsoft-365/compliance/data-lifecycle-management).
purview Reference Azure Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-azure-purview-glossary.md
An entry in the Business glossary that defines a concept specific to an organiza
A scan that detects and processes assets that have been created, modified, or deleted since the previous successful scan. To run an incremental scan, at least one full scan must be completed on the source. ## Ingested asset An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview data map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse.
-## Insights
+## Data Estate Insights
An area within Microsoft Purview where you can view reports that summarize information about your data. ## Integration runtime The compute infrastructure used to scan in a data source.
purview Register Scan Amazon Rds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-rds.md
For this service, use Microsoft Purview to provide a Microsoft account with secu
- **Supported database engines**: Amazon RDS structured data storage supports multiple database engines. Microsoft Purview supports Amazon RDS with/based on Microsoft SQL and PostgreSQL. -- **Maximum columns supported**: Scanning RDS tables with more than 300 columns is not supported.
+- **Maximum columns supported**: Scanning RDS tables with more than 300 columns isn't supported.
-- **Public access support**: Microsoft Purview supports scanning only with VPC Private Link in AWS, and does not include public access scanning.
+- **Public access support**: Microsoft Purview supports scanning only with VPC Private Link in AWS, and doesn't include public access scanning.
- **Supported regions**: Microsoft Purview only supports Amazon RDS databases that are located in the following AWS regions:
For this service, use Microsoft Purview to provide a Microsoft account with secu
- **IP address requirements**: Your RDS database must have a static IP address. The static IP address is used to configure AWS PrivateLink, as described in this article. -- **Known issues**: The following functionality is not currently supported:
+- **Known issues**: The following functionality isn't currently supported:
- The **Test connection** button. The scan status messages will indicate any errors related to connection setup. - Selecting specific tables in your database to scan.
This CloudFormation template is available for download from the [Azure GitHub re
Define your settings as needed for your environment. For more information, select the **Learn more** links to access the AWS documentation. When you're done, select **Next** to continue.
-1. On the **Review** page, check to make sure that the values you selected are correct for your environment. Make any changes needed, and then select **Create stack** when you're done.
+1. On the **Review** page, check to make sure that the values you selected are correct for your environment. Make any changes needed and then select **Create stack** when you're done.
1. Watch for the resources to be created. When complete, relevant data for this procedure is shown on the following tabs:
This CloudFormation template is available for download from the [Azure GitHub re
1. In the **Outputs** tab, copy the **ServiceName** key value to the clipboard.
- You'll use the value of the **ServiceName** key in the Microsoft Purview portal, when [registering your RDS database](#register-an-amazon-rds-data-source) as Microsoft Purview data source. There, enter the **ServiceName** key in the **Connect to private network via endpoint service** field.
+ You'll use the value of the **ServiceName** key in the Microsoft Purview governance portal when [registering your RDS database](#register-an-amazon-rds-data-source) as a Microsoft Purview data source. There, enter the **ServiceName** key in the **Connect to private network via endpoint service** field.
## Register an Amazon RDS data source
Use the other areas of Microsoft Purview to find out details about the content i
All Microsoft Purview Insight reports include the Amazon RDS scanning results, along with the rest of the results from your Azure data sources. When relevant, an **Amazon RDS** asset type is added to the report filtering options.
- For more information, see the [Understand Insights in Microsoft Purview](concept-insights.md).
+ For more information, see the [Understand Data Estate Insights in Microsoft Purview](concept-insights.md).
- **View RDS data in other Microsoft Purview features**, such as the **Scans** and **Glossary** areas. For more information, see:
After the [Load Balancer is created](#step-4-create-a-load-balancer) and its Sta
<a name="service-name"></a>**To copy the service name for use in Microsoft Purview**:
-After you've created your endpoint service, you can copy the **Service name** value in the Microsoft Purview portal, when [registering your RDS database](#register-an-amazon-rds-data-source) as Microsoft Purview data source.
+After you've created your endpoint service, you can copy the **Service name** value for use in the Microsoft Purview governance portal when [registering your RDS database](#register-an-amazon-rds-data-source) as a Microsoft Purview data source.
Locate the **Service name** on the **Details** tab for your selected endpoint service.
The following errors may appear in Microsoft Purview:
Learn more about Microsoft Purview Insight reports: > [!div class="nextstepaction"]
-> [Understand Insights in Microsoft Purview](concept-insights.md)
+> [Understand Data Estate Insights in Microsoft Purview](concept-insights.md)
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-s3.md
For this service, use Microsoft Purview to provide a Microsoft account with secu
## Microsoft Purview scope for Amazon S3
-We currently do not support ingestion private endpoints that work with your AWS sources.
+We currently don't support ingestion private endpoints that work with your AWS sources.
For more information about Microsoft Purview limits, see:
For more information about Microsoft Purview limits, see:
### Storage and scanning regions
-The Microsoft Purview connector for the Amazon S3 service is currently deployed in specific regions only. The following table maps the regions where you data is stored to the region where it would be scanned by Microsoft Purview.
+The Microsoft Purview connector for the Amazon S3 service is currently deployed in specific regions only. The following table maps the regions where your data is stored to the region where it would be scanned by Microsoft Purview.
> [!IMPORTANT] > Customers will be charged for all related data transfer charges according to the region of their bucket.
This procedure describes how to create the AWS role, with the required Microsoft
- For buckets that use **AWS-KMS** encryption, [special configuration](#configure-scanning-for-encrypted-amazon-s3-buckets) is required to enable scanning. -- Make sure that your bucket policy does not block the connection. For more information, see:
+- Make sure that your bucket policy doesn't block the connection. For more information, see:
- [Confirm your bucket policy access](#confirm-your-bucket-policy-access) - [Confirm your SCP policy access](#confirm-your-scp-policy-access)
This procedure describes how to create a new Microsoft Purview credential to use
|Field |Description | ||| |**Name** |Enter a meaningful name for this credential. |
- |**Description** |Enter a optional description for this credential, such as `Used to scan the tutorial S3 buckets` |
+ |**Description** |Enter an optional description for this credential, such as `Used to scan the tutorial S3 buckets` |
|**Authentication method** |Select **Role ARN**, since you're using a role ARN to access your bucket. | |**Role ARN** | Once you've [created your Amazon IAM role](#create-a-new-aws-role-for-microsoft-purview), navigate to your role in the AWS IAM area, copy the **Role ARN** value, and enter it here. For example: `arn:aws:iam::181328463391:role/S3Role`. <br><br>For more information, see [Retrieve your new Role ARN](#retrieve-your-new-role-arn). | | | |
AWS buckets support multiple encryption types. For buckets that use **AWS-KMS**
### Confirm your bucket policy access
-Make sure that the S3 bucket [policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html) does not block the connection:
+Make sure that the S3 bucket [policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html) doesn't block the connection:
1. In AWS, navigate to your S3 bucket, and then select the **Permissions** tab > **Bucket policy**. 1. Check the policy details to make sure that it doesn't block the connection from the Microsoft Purview scanner service. ### Confirm your SCP policy access
-Make sure that there is no [SCP policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that blocks the connection to the S3 bucket.
+Make sure that there's no [SCP policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that blocks the connection to the S3 bucket.
For example, your SCP policy might block read API calls to the [AWS Region](#storage-and-scanning-regions) where your S3 bucket is hosted.
For example:
## Add a single Amazon S3 bucket as a Microsoft Purview account
-Use this procedure if you only have a single S3 bucket that you want to register to Microsoft Purview as a data source, or if you have multiple buckets in your AWS account, but do not want to register all of them to Microsoft Purview.
+Use this procedure if you only have a single S3 bucket that you want to register to Microsoft Purview as a data source, or if you have multiple buckets in your AWS account, but don't want to register all of them to Microsoft Purview.
**To add your bucket**:
Use the other areas of Microsoft Purview to find out details about the content i
- **View Insight reports** to view statistics for the classification, sensitivity labels, file types, and more details about your content.
- All Microsoft Purview Insight reports include the Amazon S3 scanning results, along with the rest of the results from your Azure data sources. When relevant, an additional **Amazon S3** asset type was added to the report filtering options.
+ All Microsoft Purview Insight reports include the Amazon S3 scanning results, along with the rest of the results from your Azure data sources. When relevant, an **Amazon S3** asset type is added to the report filtering options.
- For more information, see the [Understand Insights in Microsoft Purview](concept-insights.md).
+ For more information, see the [Understand Data Estate Insights in Microsoft Purview](concept-insights.md).
## Minimum permissions for your AWS policy
This is a general error that indicates an issue when using the Role ARN. For exa
- Make sure that the AWS role has the correct Microsoft account ID. In the AWS IAM area, select the **Role > Trust relationships** tab and then follow the steps in [Create a new AWS role for Microsoft Purview](#create-a-new-aws-role-for-microsoft-purview) again to verify your details.
-For more information, see [Cannot find the specified bucket](#cannot-find-the-specified-bucket),
+For more information, see [Can't find the specified bucket](#cant-find-the-specified-bucket),
-### Cannot find the specified bucket
+### Can't find the specified bucket
Make sure that the S3 bucket URL is properly defined:
Make sure that the S3 bucket URL is properly defined:
Learn more about Microsoft Purview Insight reports: > [!div class="nextstepaction"]
-> [Understand Insights in Microsoft Purview](concept-insights.md)
+> [Understand Data Estate Insights in Microsoft Purview](concept-insights.md)
role-based-access-control Conditions Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes.md
Previously updated : 11/16/2021 Last updated : 05/09/2022 #Customer intent: As a dev, devops, or it admin, I want to
For more information about conditions, see [What is Azure attribute-based access
| | | | | Baker text file | Project | Baker | | Cascade text file | Project | Cascade |
-
+
+ > [!TIP]
+ > For information about the characters that are allowed for blob index tags, see [Setting blob index tags](../storage/blobs/storage-manage-find-blobs.md#setting-blob-index-tags).
+ ## Step 4: Assign Storage Blob Data Reader role with a condition 1. Open a new tab and sign in to the [Azure portal](https://portal.azure.com).
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
This section explains how to import a certificate so that it's trusted by your A
--sapgenpse <path to sapgenpse> \
--server-cert <path to server certificate public key> \
```
+
If the client certificate is in .crt/.key format, use the following switches:
+
+```bash
--client-cert <path to client certificate public key> \
--client-key <path to client certificate private key> \
```
- If client certificate is in .pfx or .p12 format
+
+ If the client certificate is in .pfx or .p12 format:
+
+```bash
--client-pfx <pfx filename>
--client-pfx-passwd <password>
```
- If client certificate issued by enterprise CA, add the switch for **each** CA in the trust chain
+
+ If the client certificate was issued by an enterprise CA, add this switch for **each** CA in the trust chain:
+ ```bash
- --cacert <path to ca certificate> #
+ --cacert <path to ca certificate>
```

For example:
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Deployment of the SAP continuous threat monitoring solution is divided into the
## Data connector agent deployment overview
-For the Continuous Threat Monitoring solution for SAP to operate correctly, data must first be ingested from SAP system into Microsoft Sentinel. To accomplish this, you need to deploy the Continuous Threat Monitoring solution for SAP data connector agent.
+For the Continuous Threat Monitoring solution for SAP to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
-The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. The recommended way for you to install and configure this container is by using a *kickstart* script, however you can choose to deploy the container [manually](?tabs=deploy-manually)
+The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using a *kickstart* script; however, you can choose to [deploy the container manually](?tabs=deploy-manually#deploy-the-data-connector-agent-container).
-The agent connects to your SAP system to pull logs and other data from it, then sends those logs to your Microsoft Sentinel. To do this, the agent has to authenticate to your SAP system - that's why you created a user and a role for the agent in your SAP system in the previous step.
+The agent connects to your SAP system to pull logs and other data from it, then sends those logs to your Microsoft Sentinel workspace. To do this, the agent has to authenticate to your SAP system - that's why you created a user and a role for the agent in your SAP system in the previous step.
Your SAP authentication infrastructure, and where you deploy your VM, will determine how and where your agent configuration information, including your SAP authentication secrets, is stored. These are the options, in descending order of preference:
Your SAP authentication infrastructure, and where you deploy your VM, will deter
- An Azure Key Vault, accessed through an Azure AD **registered-application service principal** - A plaintext **configuration file**
-If your **SAP authentication** infrastructure is based on **SNC**, using **X.509 certificates**, your only option is to use a configuration file. Select the **Configuration file** tab below for the instructions to deploy your agent container.
+If your **SAP authentication** infrastructure is based on **SNC**, using **X.509 certificates**, your only option is to use a configuration file. Select the [**Configuration file** tab below](?tabs=config-file#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.
-If not, then your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
+If you're not using SNC, then your SAP configuration and authentication secrets can and should be stored in an [**Azure Key Vault**](../../key-vault/general/authentication.md). How you access your key vault depends on where your VM is deployed:
-- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the **Managed identity** tab below for the instructions to deploy your agent container using managed identity.
+- **A container on an Azure VM** can use an Azure [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) to seamlessly access Azure Key Vault. Select the [**Managed identity** tab below](?tabs=managed-identity#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container using managed identity.
In the event that a system-assigned managed identity can't be used, the container can also authenticate to Azure Key Vault using an [Azure AD registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md), or, as a last resort, a configuration file. -- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using an [Azure AD registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the **Registered application** tab below for the instructions to deploy your agent container.
+- **A container on an on-premises VM**, or **a VM in a third-party cloud environment**, can't use Azure managed identity, but can authenticate to Azure Key Vault using an [Azure AD registered-application service principal](../../active-directory/develop/app-objects-and-service-principals.md). Select the [**Registered application** tab below](?tabs=registered-application#deploy-the-data-connector-agent-container) for the instructions to deploy your agent container.
If for some reason a registered-application service principal can't be used, you can use a configuration file, though this is not preferred.
If not, then your SAP configuration and authentication secrets can and should be
# [Managed identity](#tab/managed-identity) 1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-1.
+ 1. Run the following command to **Create a VM** in Azure (substitute actual names for the `<placeholders>`):

```azurecli
az vm create --resource-group <resource group name> --name <VM Name> --image Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest --admin-username <azureuser> --public-ip-address "" --size Standard_D2as_v5 --generate-ssh-keys --assign-identity
```
+
For more information, see [Quickstart: Create a Linux virtual machine with the Azure CLI](../../virtual-machines/linux/quick-create-cli.md).

> [!IMPORTANT]
If not, then your SAP configuration and authentication secrets can and should be
wget -O sapcon-sentinel-kickstart.sh https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh && bash ./sapcon-sentinel-kickstart.sh ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts, or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md)
+ The script updates the OS components, installs the Azure CLI, Docker software, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command-line options, see [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter your SAP and key vault details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
# [Registered application](#tab/registered-application) 1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-1.
+ 1. Run the following command to **create and register an application**: ```azurecli
If not, then your SAP configuration and authentication secrets can and should be
az keyvault create \
--name <KeyVaultName> \
--resource-group <KeyVaultResourceGroupName>
- ```
+ ```
+ 1. Copy the name of the (newly created or existing) key vault and the name of its resource group. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps. 1. Run the following command to **assign a key vault access policy** to the registered application ID that you copied above (substitute actual names or values for the `<placeholders>`):
If not, then your SAP configuration and authentication secrets can and should be
./sapcon-sentinel-kickstart.sh --keymode kvsi --appid aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa --appsecret ssssssssssssssssssssssssssssssssss -tenantid bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb -kvaultname <key vault name> ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts, or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md)
+ The script updates the OS components, installs the Azure CLI, Docker software, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command-line options, see [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
# [Configuration file](#tab/config-file) 1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-1.
+ 1. Run the following commands to **download the deployment Kickstart script** from the Microsoft Sentinel GitHub repository and **mark it executable**: ```bash
If not, then your SAP configuration and authentication secrets can and should be
./sapcon-sentinel-kickstart.sh --keymode cfgf ```
- The script updates the OS components, installs the Azure CLI and Docker software and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the amount of prompts, or to customize the container deployment. For more information on available command line options, see [Kickstart script reference](reference-kickstart.md)
+ The script updates the OS components, installs the Azure CLI, Docker software, and other required utilities (jq, netcat, curl), and prompts you for configuration parameter values. You can supply additional parameters to the script to minimize the number of prompts or to customize the container deployment. For more information on available command-line options, see [Kickstart script reference](reference-kickstart.md).
1. **Follow the on-screen instructions** to enter the requested details and complete the deployment. When the deployment is complete, a confirmation message is displayed:
If not, then your SAP configuration and authentication secrets can and should be
To view a list of the available containers, use the command: `docker ps -a`.
-# [Manual Deployment](#tab/deploy-manually)
+# [Manual deployment](#tab/deploy-manually)
1. Transfer the [SAP NetWeaver SDK](https://aka.ms/sap-sdk-download) to the machine on which you want to install the agent.
-1. Install [Docker](https://www.docker.com/) on the VM, following [recommended deployment steps](https://docs.docker.com/engine/install/) for the chosen operating system
+1. Install [Docker](https://www.docker.com/) on the VM, following the [recommended deployment steps](https://docs.docker.com/engine/install/) for the chosen operating system.
-1. Use the following commands (replacing <*SID*> with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.ini file into that folder.
+1. Use the following commands (replacing `<SID>` with the name of the SAP instance) to create a folder to store the container configuration and metadata, and to download a sample systemconfig.ini file into that folder.
- ````bash
+ ```bash
sid=<SID>
mkdir -p /opt/sapcon/$sid
cd /opt/sapcon/$sid
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
- ````
+ ```
1. Edit the systemconfig.ini file to [configure the relevant settings](reference-systemconfig.md).
-1. Run the following commands (replacing <*SID*> with the name of the SAP instance) to retrieve the latest container image, create a new container, and configure it to start automatically.
+1. Run the following commands (replacing `<SID>` with the name of the SAP instance) to retrieve the latest container image, create a new container, and configure it to start automatically.
- ````bash
+ ```bash
sid=<SID>
docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
docker create -d --restart unless-stopped -v /opt/sapcon/$sid/:/sapcon-app/sapcon/config/system --name sapcon-$sid sapcon
- ````
+ ```
-1. Run the following command (replacing <*SID*> with the name of the SAP instance and <*sdkfilename*> with full filename of the SAP NetWeaver SDK) to copy the SDK into the container.
+1. Run the following command to copy the SDK into the container. Replace `<SID>` with the name of the SAP instance and `<sdkfilename>` with the full filename of the SAP NetWeaver SDK.
- ````bash
+ ```bash
sdkfile=<sdkfilename>
sid=<SID>
docker cp $sdkfile sapcon-$sid:/sapcon-app/inst/
- ````
+ ```
+
+1. Run the following command (replacing `<SID>` with the name of the SAP instance) to start the container.
-1. Run the following command (replacing <*SID*> with the name of the SAP instance) to start the container.
- ````bash
+ ```bash
sid=<SID>
docker start sapcon-$sid
- ````
+ ```
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Track your SAP solution deployment journey through this series of articles:
Deploy the [SAP security content](sap-solution-security-content.md) from the Microsoft Sentinel **Content hub** and **Watchlists** areas.
-The **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution enables the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
+Deploying the **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution causes the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
To deploy SAP solution security content, do the following:
To deploy SAP solution security content, do the following:
:::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Continuous Threat Monitoring for SAP' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
-1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace (the one which is used by Microsoft Sentinel) where you want to deploy the solution.
+1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution.
1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Track your SAP solution deployment journey through this series of articles:
> [!NOTE] >
-> It is *strongly recommended* that the deployment of SAP CRs is carried out by an experienced SAP system administrator.
+> It is *strongly recommended* that the deployment of SAP CRs be carried out by an experienced SAP system administrator.
> > The steps below may differ according to the version of the SAP system and should be considered for demonstration purposes only. >
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
To successfully deploy the SAP Continuous Threat Monitoring solution, you must m
| **System architecture** | The data connector component of the SAP solution is deployed as a Docker container, and each SAP client requires its own container instance.<br>The container host can be either a physical machine or a virtual machine, can be located either on-premises or in any cloud. <br>The VM hosting the container ***does not*** have to be located in the same Azure subscription as your Microsoft Sentinel workspace, or even in the same Azure AD tenant. | | **Virtual machine sizing recommendations** | **Minimum specification**, such as for a lab environment:<br>*Standard_B2s* VM, with:<br>- 2 cores<br>- 4 GB RAM<br><br>**Standard connector** (default):<br>*Standard_D2as_v5* VM or<br>*Standard_D2_v5* VM, with: <br>- 2 cores<br>- 8 GB RAM<br><br>**Multiple connectors**:<br>*Standard_D4as_v5* or<br>*Standard_D4_v5* VM, with: <br>- 4 cores<br>- 16 GB RAM | | **Administrative privileges** | Administrative privileges (root) are required on the container host machine. |
-| **Supported Linux versions** | SAP Continuous Threat Monitoring data collection agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container.md?tabs=deploy-manually) instead of using the kickstart script. |
+| **Supported Linux versions** | The SAP data connector agent has been tested with the following Linux distributions:<br>- Ubuntu 18.04 or higher<br>- SLES version 15 or higher<br>- RHEL version 7.7 or higher<br><br>If you have a different operating system, you may need to [deploy and configure the container manually](deploy-data-connector-agent-container.md?tabs=deploy-manually#deploy-the-data-connector-agent-container) instead of using the kickstart script. |
| **Network connectivity** | Ensure that the container host has access to: <br>- Microsoft Sentinel <br>- Azure Key Vault (in the deployment scenario where Azure Key Vault is used to store secrets)<br>- SAP system via the following TCP ports: *32xx*, *5xx13*, *33xx*, *48xx* (when SNC is used), where *xx* is the SAP instance number. | | **Software utilities** | The [SAP data connector deployment script](reference-kickstart.md) installs the following required software on the container host VM (depending on the Linux distribution used, the list may vary slightly): <br>- [Unzip](http://infozip.sourceforge.net/UnZip.html)<br>- [NetCat](https://sectools.org/tool/netcat/)<br>- [Docker](https://www.docker.com/)<br>- [jq](https://stedolan.github.io/jq/)<br>- [curl](https://curl.se/)<br><br>
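
As a quick sanity check of the network connectivity requirement above, you can probe the listed SAP ports from the container host before running the deployment. This is only a minimal sketch under the assumption that PowerShell 7 or later is available on the host; the host name and instance number are placeholders, and you can just as well use `netcat`, which the kickstart script installs anyway.

```powershell
# Minimal sketch (assumes PowerShell 7+ on the container host; host name and instance number are placeholders).
$sapHost  = "<sap-hostname-or-ip>"
$instance = "00"   # two-digit SAP instance number

# Probe the dispatcher (32xx) and gateway (33xx) ports derived from the instance number.
foreach ($port in @([int]"32$instance", [int]"33$instance")) {
    $reachable = Test-Connection -TargetName $sapHost -TcpPort $port
    Write-Host "TCP $port reachable: $reachable"
}
```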
To successfully deploy the SAP Continuous Threat Monitoring solution, you must m
| Prerequisite | Description | | - | -- |
-| **Supported SAP versions** | SAP Continuous Threat Monitoring data collection agent works best with [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
+| **Supported SAP versions** | The SAP data connector agent works best with [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sap-sdk-download)).<br>At the link, select **SAP NW RFC SDK 7.50** -> **Linux on X86_64 64BIT** -> **Download the latest version**.<br><br>Make sure that you also have an SAP user account in order to access the SAP software download page. | | **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` | | **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
+## SAP environment validation steps
+### Deploy SAP notes
-### SAP environment validation steps
-
-#### Ensure the following SAP notes are deployed in your SAP system, according to its version:
+Ensure the following SAP notes are deployed in your SAP system, according to its version:
> [!NOTE] >
-> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Determine which CRs need to be deployed, retrieve the required CRs from the links in the tables below and proceed to the step-by-step guide.
+> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Determine which CRs need to be deployed, retrieve the required CRs from the links in the tables below, and proceed to the step-by-step guide.
| SAP BASIS versions | Required note | | | |
To successfully deploy the SAP Continuous Threat Monitoring solution, you must m
| - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 to 752 | [2502336 - CD: RSSCD100 - read only from archive, not from database](https://launchpad.support.sap.com/#/notes/2502336)* | | | * An SAP account is required to access SAP notes |
-#### Retrieval of additional information from SAP
-To enable the Microsoft Sentinel Continuous Threat Monitoring data connector to retrieve certain information from SAP, you must deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR)
+### Retrieve additional information from SAP
+
+To enable the SAP data connector to retrieve certain information from your SAP system, you must deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
- **SAP BASIS 7.5 SP12 and above**: Client IP Address information from security audit log - **ANY SAP BASIS version**: DB Table logs
To enable the Microsoft Sentinel Continuous Threat Monitoring data connector to
| | | | - 750 and later | *NPLK900202*: [K900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL), [R900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL) | | - 740 | *NPLK900201*: [K900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL), [R900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL) |
-| | |
-#### Role configuration
-To allow Microsoft Sentinel Continuous Threat Monitoring data connector to connect to SAP system, a role needs to be created. Role can be created by deploying **NPLK900206** CR.
+### Create and configure a role
+
+To allow the SAP data connector to connect to your SAP system, you must create a role. Create the role by deploying CR **NPLK900206**.
+ Experienced SAP administrators may choose to create the role manually and assign it the appropriate permissions. In such a case, it is not necessary to deploy the CR *NPLK900206*, but you must instead create a role using the recommendations outlined in [Expert: Deploy SAP CRs and deploy required ABAP authorizations](preparing-sap.md#required-abap-authorizations). | SAP BASIS versions | Sample CR | | | |
-| Any version | *NPLK900206** [K900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900206.NPL), [R900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900206.NPL)|
-| | * An SAP account is required to access SAP notes |
-| | |
+| Any version | *NPLK900206*: [K900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900206.NPL), [R900206.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900206.NPL) |
## Next steps After verifying that all the prerequisites have been met, proceed to the next step to deploy the required CRs to your SAP system and configure authorization. > [!div class="nextstepaction"]
-> [Deploying SAP CRs and configuring authorization](preparing-sap.md)
+> [Deploying SAP CRs and configuring authorization](preparing-sap.md)
service-bus-messaging Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-minimum-version.md
To configure the minimum TLS version for a Service Bus namespace with a template
Configuring the minimum TLS version requires api-version 2022-01-01-preview or later of the Azure Service Bus resource provider.
+## Check the minimum required TLS version for a namespace
+
+To check the minimum required TLS version for your Service Bus namespace, you can query the Azure Resource Manager API. You will need a Bearer token to query against the API, which you can retrieve using [ARMClient](https://github.com/projectkudu/ARMClient) by executing the following commands.
+
+```powershell
+.\ARMClient.exe login
+.\ARMClient.exe token <your-subscription-id>
+```
+
+Once you have your bearer token, you can use the following script in combination with a tool such as the [REST Client](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) extension for Visual Studio Code to query the API.
+
+```http
+@token = Bearer <Token received from ARMClient>
+@subscription = <your-subscription-id>
+@resourceGroup = <your-resource-group-name>
+@namespaceName = <your-namespace-name>
+
+###
+GET https://management.azure.com/subscriptions/{{subscription}}/resourceGroups/{{resourceGroup}}/providers/Microsoft.ServiceBus/namespaces/{{namespaceName}}?api-version=2022-01-01-preview
+content-type: application/json
+Authorization: {{token}}
+```
+
+The response should look something like the following example, with `minimumTlsVersion` set under `properties`.
+
+```json
+{
+ "sku": {
+ "name": "Premium",
+ "tier": "Premium"
+ },
+ "id": "/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group-name>/providers/Microsoft.ServiceBus/namespaces/<your-namespace-name>",
+ "name": "<your-namespace-name>",
+ "type": "Microsoft.ServiceBus/Namespaces",
+ "location": "West Europe",
+ "tags": {},
+ "properties": {
+ "minimumTlsVersion": "1.2",
+ "publicNetworkAccess": "Enabled",
+ "disableLocalAuth": false,
+ "zoneRedundant": false,
+ "provisioningState": "Succeeded",
+ "status": "Active"
+ }
+}
+```
+ ## Test the minimum TLS version from a client To test that the minimum required TLS version for a Service Bus namespace forbids calls made with an older version, you can configure a client to use an older version of TLS. For more information about configuring a client to use a specific version of TLS, see [Configure Transport Layer Security (TLS) for a client application](transport-layer-security-configure-client-version.md).
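
One way to run such a check by hand, before touching application code, is to force a Windows PowerShell session to offer only TLS 1.1 and make an HTTPS request to the namespace endpoint. This is a minimal sketch rather than official guidance; it assumes Windows PowerShell 5.1 (where `ServicePointManager` governs the TLS version used by `Invoke-WebRequest`) and uses a placeholder namespace name.

```powershell
# Minimal sketch (assumes Windows PowerShell 5.1 and a namespace with minimumTlsVersion set to 1.2).
# Force this session to offer only TLS 1.1, then attempt an HTTPS request to the namespace endpoint.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls11

try {
    Invoke-WebRequest -Uri "https://<your-namespace-name>.servicebus.windows.net" -UseBasicParsing
}
catch {
    # A TLS/SSL handshake error here indicates the older protocol was rejected as expected.
    # An ordinary HTTP error (for example 401 or 404) would mean the TLS 1.1 handshake actually succeeded.
    Write-Host "Request failed: $($_.Exception.Message)"
}
```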
static-web-apps Apex Domain External https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apex-domain-external.md
Before you create the `ALIAS` record, you first need to validate that you own th
|--|--| | Type | `ALIAS` | | Host | Enter **@** |
- | Value | Paste the generated code value you copied from the Azure portal. Make sure to remove the `https://` prefix from your URL. |
+ | Value | Paste the generated URL you copied from the Azure portal. Make sure to remove the `https://` prefix from your URL. |
| TTL (if applicable) | Leave as default value. | 1. Save changes to your DNS record.
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Previously updated : 11/10/2021 Last updated : 05/06/2022 ms.devlang: csharp
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
+This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI. This section provides steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+> [!TIP]
+> You can also create a diagnostic setting by using an Azure Resource Manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
+>
+> This section doesn't describe templates or policy definitions.
+>
+> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+>
+> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
### [Azure portal](#tab/azure-portal)
For general guidance, see [Create diagnostic setting to collect platform logs an
5. Click **Add diagnostic setting**.
- > [!div class="mx-imgBorder"]
- > ![portal - Resource logs - add diagnostic setting](media/monitor-blob-storage/diagnostic-logs-settings-pane-2.png)
- The **Diagnostic settings** page appears. > [!div class="mx-imgBorder"]
For general guidance, see [Create diagnostic setting to collect platform logs an
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
1. Select the **Archive to a storage account** checkbox, and then select the **Configure** button.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page archive storage](media/monitor-blob-storage/diagnostic-logs-settings-pane-archive-storage.png)
- 2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, and then select the **Save** button. [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
If you choose to archive your logs to a storage account, you'll pay for the volu
#### Stream logs to Azure Event Hubs
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
1. Select the **Stream to an event hub** checkbox, and then select the **Configure** button. 2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page event hub](media/monitor-blob-storage/diagnostic-logs-settings-pane-event-hub.png)
- 3. Select the **Save** button. #### Send logs to Azure Log Analytics
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then select the **Save** button.
-
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page log analytics](media/monitor-blob-storage/diagnostic-logs-settings-pane-log-analytics.png)
+1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then select the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
[!INCLUDE [no retention policy log analytics](../../../includes/azure-storage-logs-retention-policy-log-analytics.md)]
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
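
As a rough illustration, a minimal call might look like the following sketch. The resource IDs and log categories shown are placeholders, and the destination account must be different from the account you're monitoring.

```powershell
# Sketch: archive blob service resource logs to a *different* storage account.
# Both resource IDs below are placeholder values.
$blobServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<monitored-account>/blobServices/default"
$logsAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<logs-account>"

Set-AzDiagnosticSetting -ResourceId $blobServiceId `
    -StorageAccountId $logsAccountId `
    -Enabled $true `
    -Category StorageRead,StorageWrite,StorageDelete
```
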
For a description of each parameter, see the [Archive Azure Resource logs via Az
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
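
For example, a sketch along these lines, where the authorization rule ID is a placeholder for a rule (such as RootManageSharedAccessKey) on your existing Event Hubs namespace:

```powershell
# Sketch: stream blob service resource logs to an existing event hub.
# Placeholder IDs; the authorization rule must grant send rights on the Event Hubs namespace.
$blobServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<monitored-account>/blobServices/default"
$eventHubRule  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<eventhubs-namespace>/authorizationRules/RootManageSharedAccessKey"

Set-AzDiagnosticSetting -ResourceId $blobServiceId `
    -EventHubAuthorizationRuleId $eventHubRule `
    -Enabled $true `
    -Category StorageRead,StorageWrite,StorageDelete
```
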
For a description of each parameter, see the [Stream Data to Event Hubs via Powe
#### Send logs to Log Analytics
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter.
+Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```powershell Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [Azure CLI](#tab/azure-cli) 1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
For more information, see [Stream Azure Resource Logs to Log Analytics workspace
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see the [Archive Resource logs via the Azur
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see the [Stream data to Event Hubs via Azur
#### Send logs to Log Analytics
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
+Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```azurecli-interactive az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-### [Template](#tab/template)
-
-To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
-
-### [Azure Policy](#tab/policy)
+#### Send to a partner solution
-You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. For more information, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
## Analyzing metrics
+For a list of all Azure Monitor supported metrics, which includes Azure Blob Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+
+### [Azure portal](#tab/azure-portal)
+ You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md). This example shows how to view **Transactions** at the account level.
Metrics for Azure Blob Storage are in these namespaces:
- Microsoft.Storage/storageAccounts - Microsoft.Storage/storageAccounts/blobServices
+### [PowerShell](#tab/azure-powershell)
+
+#### List the metric definition
+
+You can list the metric definition of your storage account or the Blob storage service. Use the [Get-AzMetricDefinition](/powershell/module/az.monitor/get-azmetricdefinition) cmdlet.
+
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
+
+```powershell
+ $resourceId = "<resource-ID>"
+ Get-AzMetricDefinition -ResourceId $resourceId
+```
+
+#### Reading metric values
+
+You can read account-level metric values of your storage account or the Blob storage service. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
+
+```powershell
+ $resourceId = "<resource-ID>"
+ Get-AzMetric -ResourceId $resourceId -MetricName "UsedCapacity" -TimeGrain 01:00:00
+```
+
+#### Reading metric values with dimensions
+
+When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
+
+```powershell
+$resourceId = "<resource-ID>"
+$dimFilter = [String](New-AzMetricFilter -Dimension ApiName -Operator eq -Value "GetBlob" 3> $null)
+Get-AzMetric -ResourceId $resourceId -MetricName Transactions -TimeGrain 01:00:00 -MetricFilter $dimFilter -AggregationType "Total"
+```
+
+### [Azure CLI](#tab/azure-cli)
+ For a list of all Azure Monitor support metrics, which includes Azure Blob Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
-### Accessing metrics
+#### List the account-level metric definition
+
+You can list the metric definition of your storage account or the Blob storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
+
+In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
+
+```azurecli
+ az monitor metrics list-definitions --resource <resource-ID>
+```
+
+#### Read account-level metric values
+
+You can read the metric values of your storage account or the Blob storage service. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
+
+```azurecli
+ az monitor metrics list --resource <resource-ID> --metric "UsedCapacity" --interval PT1H
+```
+
+#### Reading metric values with dimensions
+
+When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
+
+```azurecli
+az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetBlob' " --aggregation "Total"
+```
-> [!TIP]
-> To view Azure CLI or .NET examples, choose the corresponding tabs listed here.
+
-### [.NET SDK](#tab/azure-portal)
+## Analyze metrics by using code
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definitions and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
The following example shows how to read metric data on the metric supporting mul
```
-### [PowerShell](#tab/azure-powershell)
-
-#### List the metric definition
-
-You can list the metric definition of your storage account or the Blob storage service. Use the [Get-AzMetricDefinition](/powershell/module/az.monitor/get-azmetricdefinition) cmdlet.
-
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
-
-```powershell
- $resourceId = "<resource-ID>"
- Get-AzMetricDefinition -ResourceId $resourceId
-```
-
-#### Reading metric values
-
-You can read account-level metric values of your storage account or the Blob storage service. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
-
-```powershell
- $resourceId = "<resource-ID>"
- Get-AzMetric -ResourceId $resourceId -MetricName "UsedCapacity" -TimeGrain 01:00:00
-```
-
-#### Reading metric values with dimensions
-
-When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [Get-AzMetric](/powershell/module/Az.Monitor/Get-AzMetric) cmdlet.
-
-```powershell
-$resourceId = "<resource-ID>"
-$dimFilter = [String](New-AzMetricFilter -Dimension ApiName -Operator eq -Value "GetBlob" 3> $null)
-Get-AzMetric -ResourceId $resourceId -MetricName Transactions -TimeGrain 01:00:00 -MetricFilter $dimFilter -AggregationType "Total"
-```
--
-### [Azure CLI](#tab/azure-cli)
-
-#### List the account-level metric definition
-
-You can list the metric definition of your storage account or the Blob storage service. Use the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
-
-In this example, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the resource ID of the Blob storage service. You can find these resource IDs on the **Endpoints** pages of your storage account in the Azure portal.
-
-```azurecli
- az monitor metrics list-definitions --resource <resource-ID>
-```
-
-#### Read account-level metric values
-
-You can read the metric values of your storage account or the Blob storage service. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
-
-```azurecli
- az monitor metrics list --resource <resource-ID> --metric "UsedCapacity" --interval PT1H
-```
-
-#### Reading metric values with dimensions
-
-When a metric supports dimensions, you can read metric values and filter them by using dimension values. Use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command.
-
-```azurecli
-az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetBlob' " --aggregation "Total"
-```
-
-### [Template](#tab/template)
-
-N/A.
-
-### [Azure Policy](#tab/policy)
-
-N/A.
--- ## Analyzing logs You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries.
No. Azure Compute supports the metrics on disks. For more information, see [Per
## Next steps -- For a reference of the logs and metrics created by Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md).-- For details on monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md).-- For more information on metrics migration, see [Azure Storage metrics migration](../common/storage-metrics-migration.md).-- For commons scenarios and best practices, see [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md).
+Get started with any of these guides.
+
+| Guide | Description |
+|--|--|
+| [Gather metrics from your Azure Blob Storage containers](/learn/modules/gather-metrics-blob-storage/) | Create charts that show metrics (contains step-by-step guidance). |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability. |
+| [Best practices for monitoring Azure Blob Storage](blob-storage-monitoring-scenarios.md) | Guidance for common monitoring and troubleshooting scenarios. |
+| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer. |
+| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
+| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions. |
+| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md) | The basics of logs and how to collect and analyze them. |
+| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
+| [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md) | A reference of the logs and metrics created by Azure Blob Storage. |
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
On the **Tags** tab, you can specify Resource Manager tags to help organize your
The following image shows a standard configuration of the index tag properties for a new storage account. ### Review + create tab
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
Previously updated : 11/10/2021 Last updated : 05/06/2022
To get the list of SMB and REST operations that are logged, see [Storage logged
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
+This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI, with steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+> [!TIP]
+> You can also create a diagnostic setting by using an Azure Resource Manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
+>
+> This section doesn't describe templates or policy definitions.
+>
+> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+>
+> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
### [Azure portal](#tab/azure-portal)
For general guidance, see [Create diagnostic setting to collect platform logs an
5. Click **Add diagnostic setting**.
- > [!div class="mx-imgBorder"]
- > ![portal - Resource logs - add diagnostic setting](media/storage-files-monitoring/diagnostic-logs-settings-pane-2.png)
- The **Diagnostic settings** page appears. > [!div class="mx-imgBorder"]
For general guidance, see [Create diagnostic setting to collect platform logs an
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
1. Select the **Archive to a storage account** checkbox, and then click the **Configure** button.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page archive storage](media/storage-files-monitoring/diagnostic-logs-settings-pane-archive-storage.png)
- 2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then click the **Save** button. [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
If you choose to archive your logs to a storage account, you'll pay for the volu
#### Stream logs to Azure Event Hubs
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
1. Select the **Stream to an event hub** checkbox, and then click the **Configure** button. 2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page event hub](media/storage-files-monitoring/diagnostic-logs-settings-pane-event-hub.png)
- 3. Click the **OK** button, and then click the **Save** button. #### Send logs to Azure Log Analytics
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then click the and then click the **Save** button.
-
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page log analytics](media/storage-files-monitoring/diagnostic-logs-settings-pane-log-analytics.png)
+1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then click the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
[!INCLUDE [no retention policy log analytics](../../../includes/azure-storage-logs-retention-policy-log-analytics.md)]
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
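
If you'd rather not paste resource IDs by hand, one possible approach is to build the file service ID from the storage account object, as in this sketch. The account and destination names are assumptions; the destination must be a different, existing storage account.

```powershell
# Sketch: derive the file service resource ID from the monitored account, then archive its logs
# to a different, existing storage account (placeholder ID).
$account       = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<monitored-account>"
$fileServiceId = "$($account.Id)/fileServices/default"
$logsAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<logs-account>"

Set-AzDiagnosticSetting -ResourceId $fileServiceId -StorageAccountId $logsAccountId -Enabled $true -Category StorageRead,StorageWrite,StorageDelete
```
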
For a description of each parameter, see the [Archive Azure Resource logs via Az
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
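
A single diagnostic setting can also target more than one destination. The sketch below, with placeholder IDs, sends the same log categories to an event hub and a Log Analytics workspace at once; treat it as an illustration rather than a required pattern.

```powershell
# Sketch: one setting with two destinations (event hub + Log Analytics workspace).
# All IDs below are placeholders.
Set-AzDiagnosticSetting -ResourceId "<file-service-resource-id>" `
    -EventHubAuthorizationRuleId "<event-hubs-namespace-authorization-rule-id>" `
    -WorkspaceId "<log-analytics-workspace-resource-id>" `
    -Enabled $true `
    -Category StorageRead,StorageWrite,StorageDelete
```
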
For a description of each parameter, see the [Stream Data to Event Hubs via Powe
#### Send logs to Log Analytics
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter.
+Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```powershell Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [Azure CLI](#tab/azure-cli) 1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
For more information, see [Stream Azure Resource Logs to Log Analytics workspace
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see the [Archive Resource logs via the Azur
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see the [Stream data to Event Hubs via Azur
#### Send logs to Log Analytics
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
+Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```azurecli-interactive az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-### [Template](#tab/template)
-
-To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+#### Send to a partner solution
-### [Azure Policy](#tab/policy)
-
-You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
## Analyzing metrics
+For a list of all Azure Monitor supported metrics, which includes Azure Files, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices).
+
+### [Azure portal](#tab/azure-portal)
+ You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md). For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](storage-files-monitoring-reference.md#metrics-dimensions). Metrics for Azure Files are in these namespaces:
For metrics that support dimensions, you can filter the metric with the desired
- Microsoft.Storage/storageAccounts - Microsoft.Storage/storageAccounts/fileServices
-For a list of all Azure Monitor support metrics, which includes Azure Files, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices).
-
-### Accessing metrics
-
-> [!TIP]
-> To view Azure CLI or .NET examples, choose the corresponding tabs listed here.
- ### [PowerShell](#tab/azure-powershell) #### List the metric definition
When a metric supports dimensions, you can read metric values and filter them by
```azurecli az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetFile' " --aggregation "Total" ```
-### [.NET SDK](#tab/azure-portal)
+++
+## Analyze metrics by using code
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definitions and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
In these examples, replace the `<resource-ID>` placeholder with the resource ID
Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
-#### List the account-level metric definition
+### List the account-level metric definition
The following example shows how to list a metric definition at the account level:
The following example shows how to list a metric definition at the account level
```
-#### Reading account-level metric values
+### Reading account-level metric values
The following example shows how to read `UsedCapacity` data at the account level:
The following example shows how to read `UsedCapacity` data at the account level
```
-#### Reading multidimensional metric values
+### Reading multidimensional metric values
For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific dimension values.
The following example shows how to read metric data on the metric supporting mul
```
-# [Template](#tab/template)
-
-N/A.
-
-### [Azure Policy](#tab/policy)
-
-N/A.
--- ## Analyzing logs You can access resource logs either as a blob in a storage account, as event data, or through Log Analytic queries.
The following table lists some example scenarios to monitor and the proper metri
- [How to deploy Azure Files](./storage-how-to-create-file-share.md) - [Troubleshoot Azure Files on Windows](./storage-troubleshoot-windows-file-connection-problems.md) - [Troubleshoot Azure Files on Linux](./storage-troubleshoot-linux-file-connection-problems.md)+
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Previously updated : 11/10/2021 Last updated : 05/06/2022
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
+This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI, with steps specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+> [!TIP]
+> You can also create a diagnostic setting by using an Azure Resource Manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
+>
+> This section doesn't describe templates or policy definitions.
+>
+> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+>
+> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
### [Azure portal](#tab/azure-portal)
For general guidance, see [Create diagnostic setting to collect platform logs an
5. Click **Add diagnostic setting**.
- > [!div class="mx-imgBorder"]
- > ![portal - Resource logs - add diagnostic setting](media/monitor-queue-storage/diagnostic-logs-settings-pane-2.png)
- The **Diagnostic settings** page appears. > [!div class="mx-imgBorder"]
For general guidance, see [Create diagnostic setting to collect platform logs an
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
1. Select the **Archive to a storage account** check box, and then select the **Configure** button.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page archive storage](media/monitor-queue-storage/diagnostic-logs-settings-pane-archive-storage.png)
- 2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then select the **Save** button. [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
If you choose to archive your logs to a storage account, you'll pay for the volu
#### Stream logs to Azure Event Hubs
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
1. Select the **Stream to an event hub** check box, and then select the **Configure** button. 2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page event hub](media/monitor-queue-storage/diagnostic-logs-settings-pane-event-hub.png)
- 3. Click the **OK** button, and then select the **Save** button. #### Send logs to Azure Log Analytics
-1. Select the **Send to Log Analytics** check box, select a Log Analytics workspace, and then select the **Save** button.
-
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page log analytics](media/monitor-queue-storage/diagnostic-logs-settings-pane-log-analytics.png)
+1. Select the **Send to Log Analytics** check box, select a Log Analytics workspace, and then select the **Save** button. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
[!INCLUDE [no retention policy log analytics](../../../includes/azure-storage-logs-retention-policy-log-analytics.md)]
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
If you choose to stream your logs to an event hub, you'll pay for the volume of
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
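
After the cmdlet runs, you can confirm what was configured by reading the settings back, as in this sketch. The resource IDs are placeholders for your queue service and for a different, existing storage account that receives the logs.

```powershell
# Sketch: create the setting for the queue service, then read it back to verify the destinations.
$queueServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<monitored-account>/queueServices/default"
$logsAccountId  = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<logs-account>"

Set-AzDiagnosticSetting -ResourceId $queueServiceId -StorageAccountId $logsAccountId -Enabled $true -Category StorageRead,StorageWrite,StorageDelete

# List the diagnostic settings currently applied to the queue service.
Get-AzDiagnosticSetting -ResourceId $queueServiceId
```
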
For a description of each parameter, see [Archive Azure resource logs via Azure
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
For a description of each parameter, see [Stream data to Event Hubs via PowerShe
#### Send logs to Log Analytics
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter.
+Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```powershell Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
Here's an example:
For more information, see [Stream Azure resource logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [Azure CLI](#tab/azure-cli) 1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed the Azure CLI](/cli/azure/install-azure-cli) locally, open a command console application such as PowerShell.
For more information, see [Stream Azure resource logs to Log Analytics workspace
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You can't send logs to the same storage account that you are monitoring with this setting. This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see [Archive resource logs via the Azure CL
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
For a description of each parameter, see [Stream data to Event Hubs via Azure CL
#### Send logs to Log Analytics
-Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
+Enable logs by using the [`az monitor diagnostic-settings create`](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing log analytics workspace, or you'll need to create one before you complete this step.
```azurecli-interactive az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
Here's an example:
For more information, see [Stream Azure resource logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-# [Template](#tab/template)
-
-To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+#### Send to a partner solution
-### [Azure Policy](#tab/policy)
-
-You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
## Analyzing metrics
+For a list of all Azure Monitor supported metrics, which includes Azure Queue Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+
+### [Azure portal](#tab/azure-portal)
+ You can analyze metrics for Azure Storage with metrics from other Azure services by using Azure Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md). This example shows how to view **Transactions** at the account level.
Metrics for Azure Queue Storage are in these namespaces:
- Microsoft.Storage/storageAccounts - Microsoft.Storage/storageAccounts/queueServices
-For a list of all Azure Monitor support metrics, which includes Azure Queue Storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
-
-### Accessing metrics
-
-> [!TIP]
-> To view Azure CLI or .NET examples, choose the corresponding tabs listed here.
- ### [PowerShell](#tab/azure-powershell) #### List the metric definition
When a metric supports dimensions, you can read metric values and filter them by
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'GetMessages' " --aggregation "Total" ```
-### [.NET SDK](#tab/azure-portal)
++
+## Analyzing metrics by using code
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/microsoft.azure.management.monitor/) to read metric definitions and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use `0.18.0-preview` or a later version for storage metrics.
In these examples, replace the `<resource-ID>` placeholder with the resource ID
Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create an Azure AD application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
-#### List the account-level metric definition
+### List the account-level metric definition
The following example shows how to list a metric definition at the account level:
The following example shows how to list a metric definition at the account level
```
-#### Reading account-level metric values
+### Reading account-level metric values
The following example shows how to read `UsedCapacity` data at the account level:
The following example shows how to read `UsedCapacity` data at the account level
```
-#### Reading multidimensional metric values
+### Reading multidimensional metric values
For multidimensional metrics, you need to define metadata filters if you want to read metric data on specific dimension values.
The following example shows how to read metric data on the metric supporting mul
```
-### [Template](#tab/template)
-
-N/A.
-
-### [Azure Policy](#tab/policy)
-
-N/A.
--- ## Analyzing logs You can access resource logs either as a queue in a storage account, as event data, or through Log Analytics queries.
No. Compute instances support the metrics on disks. For more information, see [P
## Next steps -- For a reference of the logs and metrics created by Azure Queue Storage, see [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md).-- For details on monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md).-- For more information on metrics migration, see [Azure Storage metrics migration](../common/storage-metrics-migration.md).
+Get started with any of these guides.
+
+| Guide | Description |
+| --- | --- |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
+| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer. |
+| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
+| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions |
+| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
+| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
+| [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) | A reference of the logs and metrics created by Azure Queue Storage |
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Previously updated : 11/10/2021 Last updated : 05/06/2022 ms.devlang: csharp
To collect resource logs, you must create a diagnostic setting. When you create
## Creating a diagnostic setting
-You can create a diagnostic setting by using the Azure portal, PowerShell, the Azure CLI, an Azure Resource Manager template, or Azure Policy.
+This section shows you how to create a diagnostic setting by using the Azure portal, PowerShell, and the Azure CLI, with steps that are specific to Azure Storage. For general guidance about how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
+> [!TIP]
+> You can also create a diagnostic setting by using an Azure Resource Manager template or by using a policy definition. A policy definition can ensure that a diagnostic setting is created for every account that is created or updated.
+>
+> This section doesn't describe templates or policy definitions.
+>
+> - To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+>
+> - To learn how to create a diagnostic setting by using a policy definition, see [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
### [Azure portal](#tab/azure-portal)
For general guidance, see [Create diagnostic setting to collect platform logs an
5. Click **Add diagnostic setting**.
- > [!div class="mx-imgBorder"]
- > ![portal - Resource logs - add diagnostic setting](media/monitor-table-storage/diagnostic-logs-settings-pane-2.png)
- The **Diagnostic settings** page appears. > [!div class="mx-imgBorder"]
For general guidance, see [Create diagnostic setting to collect platform logs an
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. Don't send logs to the same storage account that you're monitoring with this setting, because that would lead to recursive logs in which a log entry describes the writing of another log entry. Create a new account, or use another existing account, to store log information.
1. Select the **Archive to a storage account** checkbox, and then click the **Configure** button.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page archive storage](media/monitor-table-storage/diagnostic-logs-settings-pane-archive-storage.png)
- 2. In the **Storage account** drop-down list, select the storage account that you want to archive your logs to, click the **OK** button, and then click the **Save** button. [!INCLUDE [no retention policy](../../../includes/azure-storage-logs-retention-policy.md)]
If you choose to archive your logs to a storage account, you'll pay for the volu
#### Stream logs to Azure Event Hubs
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
1. Select the **Stream to an event hub** checkbox, and then click the **Configure** button. 2. In the **Select an event hub** pane, choose the namespace, name, and policy name of the event hub that you want to stream your logs to.
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page event hub](media/monitor-table-storage/diagnostic-logs-settings-pane-event-hub.png)
- 3. Click the **OK** button, and then click the **Save** button. #### Send logs to Azure Log Analytics
-1. Select the **Send to Log Analytics** checkbox, select a log analytics workspace, and then click the and then click the **Save** button.
-
- > [!div class="mx-imgBorder"]
- > ![Diagnostic settings page log analytics](media/monitor-table-storage/diagnostic-logs-settings-pane-log-analytics.png)
+1. Select the **Send to Log Analytics** checkbox, select a Log Analytics workspace, and then click the **Save** button. You'll need access to an existing Log Analytics workspace, or you'll need to create one before you complete this step.
[!INCLUDE [no retention policy log analytics](../../../includes/azure-storage-logs-retention-policy-log-analytics.md)]
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [PowerShell](#tab/azure-powershell) 1. Open a Windows PowerShell command window, and sign in to your Azure subscription by using the `Connect-AzAccount` command. Then, follow the on-screen directions.
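   For example, the sign-in step might look like the following sketch (the subscription ID is a placeholder):

   ```powershell
   # Sign in interactively, then select the subscription that contains the storage account.
   Connect-AzAccount
   Set-AzContext -Subscription "<subscription-id>"
   ```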
If you choose to stream your logs to an event hub, you'll pay for the volume of
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. Don't send logs to the same storage account that you're monitoring with this setting, because that would lead to recursive logs in which a log entry describes the writing of another log entry. Create a new account, or use another existing account, to store log information.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet along with the `StorageAccountId` parameter.
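A minimal sketch of that command, in the same placeholder style as the other examples in this article (the destination-account placeholder name is illustrative):

```powershell
# Sketch only: archive resource logs to a different storage account.
Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> `
    -StorageAccountId <destination-storage-account-resource-id> `
    -Enabled $true `
    -Category <operations-to-log>
```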
For more information about archiving resource logs to Azure Storage, see [Azure
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `EventHubAuthorizationRuleId` parameter.
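A minimal sketch, again with illustrative placeholder names:

```powershell
# Sketch only: stream resource logs to an event hub.
Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> `
    -EventHubAuthorizationRuleId <event-hub-namespace-auth-rule-resource-id> `
    -Enabled $true `
    -Category <operations-to-log>
```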
For more information about sending resource logs to event hubs, see [Azure Resou
#### Send logs to Log Analytics
-Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter.
+Enable logs by using the [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) PowerShell cmdlet with the `WorkspaceId` parameter. You'll need access to an existing Log Analytics workspace, or you'll need to create one before you complete this step.
```powershell
Set-AzDiagnosticSetting -ResourceId <storage-service-resource-id> -WorkspaceId <log-analytics-workspace-resource-id> -Enabled $true -Category <operations-to-log>
```
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
+#### Send to a partner solution
+
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+ ### [Azure CLI](#tab/azure-cli) 1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
For more information, see [Stream Azure Resource Logs to Log Analytics workspace
#### Archive logs to a storage account
-If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to archive your logs to a storage account, you'll pay for the volume of logs that are sent to the storage account. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. Don't send logs to the same storage account that you're monitoring with this setting, because that would lead to recursive logs in which a log entry describes the writing of another log entry. Create a new account, or use another existing account, to store log information.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
Here's an example:
#### Stream logs to an event hub
-If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page.
+If you choose to stream your logs to an event hub, you'll pay for the volume of logs that are sent to the event hub. For specific pricing, see the **Platform Logs** section of the [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs) page. You'll need access to an existing event hub, or you'll need to create one before you complete this step.
Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
Here's an example:
#### Send logs to Log Analytics
-Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command.
+Enable logs by using the [az monitor diagnostic-settings create](/cli/azure/monitor/diagnostic-settings#az-monitor-diagnostic-settings-create) command. You'll need access to an existing Log Analytics workspace, or you'll need to create one before you complete this step.
```azurecli-interactive
az monitor diagnostic-settings create --name <setting-name> --workspace <log-analytics-workspace-resource-id> --resource <storage-account-resource-id> --logs '[{"category": <category name>, "enabled": true}]'
```
Here's an example:
For more information, see [Stream Azure Resource Logs to Log Analytics workspace in Azure Monitor](../../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace).
-### [Template](#tab/template)
-
-To view an Azure Resource Manager template that creates a diagnostic setting, see [Diagnostic setting for Azure Storage](../../azure-monitor/essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-azure-storage).
+#### Send to a partner solution
-### [Azure Policy](#tab/policy)
-
-You can create a diagnostic setting by using a policy definition. That way, you can make sure that a diagnostic setting is created for every account that is created or updated. See [Azure Policy built-in definitions for Azure Storage](../common/policy-reference.md).
+You can also send platform metrics and logs to certain Azure Monitor partners. You must first install a partner integration into your subscription. Configuration options will vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
## Analyzing metrics
+For a list of all Azure Monitor supported metrics, which includes Azure Table storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
+
+### [Azure portal](#tab/azure-portal)
+ You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md). This example shows how to view **Transactions** at the account level.
Metrics for Azure Table storage are in these namespaces:
- Microsoft.Storage/storageAccounts
- Microsoft.Storage/storageAccounts/tableServices
-For a list of all Azure Monitor support metrics, which includes Azure Table storage, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md).
--
-### Accessing metrics
-
-> [!TIP]
-> To view Azure CLI or .NET examples, choose the corresponding tabs listed here.
- ### [PowerShell](#tab/azure-powershell) #### List the metric definition
When a metric supports dimensions, you can read metric values and filter them by dimension values, as the following Azure CLI example shows:
```azurecli
az monitor metrics list --resource <resource-ID> --metric "Transactions" --interval PT1H --filter "ApiName eq 'QueryEntities' " --aggregation "Total"
```
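A rough PowerShell counterpart of this query, as a sketch that assumes the Az.Monitor module and the same `<resource-ID>` placeholder for the table service resource:

```powershell
# Sketch only: read hourly Transactions totals filtered to the QueryEntities API dimension value.
Get-AzMetric -ResourceId "<resource-ID>" `
    -MetricName "Transactions" `
    -TimeGrain 01:00:00 `
    -MetricFilter "ApiName eq 'QueryEntities'" `
    -AggregationType Total
```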
-### [.NET SDK](#tab/azure-portal)
++
+## Analyzing metrics by using code
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Management.Monitor/) to read metric definitions and values. The [sample code](https://azure.microsoft.com/resources/samples/monitor-dotnet-metrics-api/) shows how to use the SDK with different parameters. You need to use version `0.18.0-preview` or later for storage metrics.
The following example shows how to read metric data on the metric supporting mul
```
-### [Template](#tab/template)
-
-N/A.
-
-### [Azure Policy](#tab/policy)
-
-N/A.
--- ## Analyzing logs You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries.
No. Azure Compute supports the metrics on disks. For more information, see [Per
## Next steps -- For a reference of the logs and metrics created by Azure Table storage, see [Azure Table storage monitoring data reference](monitor-table-storage-reference.md).-- For details on monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md).-- For more information on metrics migration, see [Azure Storage metrics migration](../common/storage-metrics-migration.md).
+| Guide | Description |
+| --- | --- |
+| [Monitor, diagnose, and troubleshoot your Azure Storage](/learn/modules/monitor-diagnose-and-troubleshoot-azure-storage/) | Troubleshoot storage account issues (contains step-by-step guidance). |
+| [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md) | A unified view of storage performance, capacity, and availability |
+| [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md) | A tour of Metrics Explorer. |
+| [Overview of Log Analytics in Azure Monitor](../../azure-monitor/logs/log-analytics-overview.md) | A tour of Log Analytics. |
+| [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md) | The basics of metrics and metric dimensions |
+| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them |
+| [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. |
+| [Azure Table storage monitoring data reference](monitor-table-storage-reference.md)| A reference of the logs and metrics created by Azure Table Storage |
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 04/22/2022 Last updated : 05/06/2022
You can view the gateway public IP address on the **Overview** page for your gat
:::image type="content" source="./media/tutorial-create-gateway-portal/address.png" alt-text="Screenshot of Overview page.":::
-To see additional information about the public IP address object, click the name/IP address link next to **Public IP address**.
+To see additional information about the public IP address object, select the name/IP address link next to **Public IP address**.
## <a name="resize"></a>Resize a gateway SKU