Updates from: 03/08/2021 04:05:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
Title: "Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app | Azure"
-description: In this quickstart, you learn how an app implements Microsoft sign-in on an ASP.NET Core web app using OpenID Connect
+description: In this quickstart, you learn how an app implements Microsoft sign-in on an ASP.NET Core web app by using OpenID Connect
Last updated 09/11/2020
-#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts from any Azure Active Directory instance.
+#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
# Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app

In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization.
-See [How the sample works](#how-the-sample-works) for an illustration.
- > [!div renderon="docs"]
+> The following diagram shows how the sample app works:
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
+>
> ## Prerequisites
>
> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
> * [.NET Core SDK 3.1+](https://dotnet.microsoft.com/download)
>
-> ## Register and download your quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
+> ## Register and download the app
+> You have two options to start building your application: automatic or manual configuration.
>
-> ### Option 1: Register and auto configure your app and then download your code sample
+> ### Automatic configuration
+> If you want to automatically configure your app and then download the code sample, follow these steps:
>
-> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> 1. Go to the <a href="https://aka.ms/aspnetcore2-1-aad-quickstart-v2/" target="_blank">Azure portal page for app registration</a>.
> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application for you in one click.
->
-> ### Option 2: Register and manually configure your application and code sample
+> 1. Follow the instructions to download and automatically configure your new application in one click.
>
+> ### Manual configuration
+> If you want to manually configure your application and code sample, use the following procedures.
> #### Step 1: Register your application
-> To register your application and add the app's registration information to your solution manually, follow these steps:
->
> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: on the top menu to select the tenant in which you want to register the application.
> 1. Search for and select **Azure Active Directory**.
> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `AspNetCore-Quickstart`. Users of your app might see this name, and you can change it later.
-> 1. Enter a **Redirect URI** of `https://localhost:44321/`.
+> 1. For **Name**, enter a name for your application. For example, enter **AspNetCore-Quickstart**. Users of your app will see this name, and you can change it later.
+> 1. For **Redirect URI**, enter **https://localhost:44321/signin-oidc**.
> 1. Select **Register**.
> 1. Under **Manage**, select **Authentication**.
-> 1. Under **Redirect URIs**, select **Add URI**, and then enter `https://localhost:44321/signin-oidc`.
-> 1. Enter a **Front-channel logout URL** of `https://localhost:44321/signout-oidc`.
+> 1. For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
> 1. Under **Implicit grant and hybrid flows**, select **ID tokens**.
> 1. Select **Save**.
> [!div class="sxs-lookup" renderon="portal"]
> #### Step 1: Configure your application in the Azure portal
-> For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/` and `https://localhost:44321/signin-oidc` and a **Front-channel logout URL** of `https://localhost:44321/signout-oidc`. Request ID tokens will be issued by the authorization endpoint.
+> For the code sample in this quickstart to work:
+> - For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
+> - For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
+>
+> The authorization endpoint will issue request ID tokens.
> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
> > [Make this change for me]()
> >
> > [!div id="appconfigured" class="alert alert-info"]
> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download your ASP.NET Core project
+#### Step 2: Download the ASP.NET Core project
> [!div renderon="docs"]
> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip)
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div class="sxs-lookup" renderon="portal"]
> #### Step 3: Your app is configured and ready to run
-> We have configured your project with values of your app's properties and it's ready to run.
+> We've configured your project with values of your app's properties, and it's ready to run.
> [!div class="sxs-lookup" renderon="portal"]
> > [!NOTE]
> > `Enter_the_Supported_Account_Info_Here`
> [!div renderon="docs"]
> #### Step 3: Configure your ASP.NET Core project
-> 1. Extract the .zip archive into a local folder near the root of your drive. For example, into *C:\Azure-Samples*.
+> 1. Extract the .zip archive into a local folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
> 1. Open the solution in Visual Studio 2019.
-> 1. Open the *appsettings.json* file and modify the following:
+> 1. Open the *appsettings.json* file and modify the following code:
>
> ```json
> "ClientId": "Enter_the_Application_Id_here",
> "TenantId": "common",
> ```
>
-> - Replace `Enter_the_Application_Id_here` with the **Application (client) ID** of the application you registered in the Azure portal. You can find **Application (client) ID** in the app's **Overview** page.
+> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the **Application (client) ID** value on the app's **Overview** page.
> - Replace `common` with one of the following:
-> - If your application supports **Accounts in this organizational directory only**, replace this value with the **Directory (tenant) ID** (a GUID) or **tenant name** (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** on the app's **Overview** page.
-> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`
-> - If your application supports **All Microsoft account users**, leave this value as `common`
+> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or the tenant name (for example, `contoso.onmicrosoft.com`). You can find the **Directory (tenant) ID** value on the app's **Overview** page.
+> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+> - If your application supports **All Microsoft account users**, leave this value as `common`.
>
> For this quickstart, don't change any other values in the *appsettings.json* file.
>
> #### Step 4: Build and run the application
>
-> Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the `F5` key.
+> Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the F5 key.
>
-> You're prompted for your credentials, and then asked to consent to the permissions your app requires. Select **Accept** on the consent prompt.
+> You're prompted for your credentials, and then asked to consent to the permissions that your app requires. Select **Accept** on the consent prompt.
>
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-01-consent.png" alt-text="Consent dialog showing the permissions the app is requesting from the > user":::
+> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-01-consent.png" alt-text="Screenshot of the consent dialog box, showing the permissions that the app is requesting from the user.":::
>
-> After consenting to the requested permissions, the app displays that you've successfully logged in using your Azure Active Directory credentials.
+> After you consent to the requested permissions, the app displays that you've successfully signed in with your Azure Active Directory credentials.
>
-> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-02-signed-in.png" alt-text="Web browser displaying the running web app and the user signed in":::
+> :::image type="content" source="media/quickstart-v2-aspnet-core-webapp/webapp-02-signed-in.png" alt-text="Screenshot of a web browser that shows the running web app and the signed-in user.":::
## More information
-This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET Core application.
+This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET Core application.
-### How the sample works
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
+> [!div class="sxs-lookup" renderon="portal"]
+> ### How the sample works
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
### Startup class
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes:
+The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's run when the hosting process starts:
```csharp
public void ConfigureServices(IServiceCollection services)
The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that
}
```
-The `AddAuthentication()` method configures the service to add cookie-based authentication, which is used in browser scenarios and to set the challenge to OpenID Connect.
+The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
-The line containing `.AddMicrosoftIdentityWebApp` adds the Microsoft identity platform authentication to your application. It's then configured to sign in using the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
+The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to your application. The application is then configured to sign in users based on the following information in the `AzureAD` section of the *appsettings.json* configuration file:
| *appsettings.json* key | Description |
|---|---|
-| `ClientId` | **Application (client) ID** of the application registered in the Azure portal. |
+| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or its tenant ID (a GUID), or *common* to sign in users with work or school accounts or Microsoft personal accounts. |
+| `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
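For orientation, here's a minimal sketch of how this wiring typically looks, assuming the Microsoft.Identity.Web NuGet package and an `AzureAd` configuration section with the keys from the table above; it's an illustrative sketch, not the sample's exact file:

```csharp
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        // Cookie-based authentication with the challenge set to OpenID Connect,
        // configured from the "AzureAd" section (ClientId, Instance, TenantId).
        services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
                .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));

        services.AddControllersWithViews();
    }
}
```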
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web's routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`.
+The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
```csharp
app.UseAuthentication();
app.UseEndpoints(endpoints =>
// endpoints.MapControllers(); // REQUIRED if MapControllerRoute() isn't called.
```
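Continuing the same hypothetical `Startup` sketch (the file would also need `using Microsoft.AspNetCore.Builder;` and `using Microsoft.AspNetCore.Hosting;`), the `Configure()` method might be laid out as follows. The route pattern is only an assumption; the point is that at least one `MapControllerRoute()` or `MapControllers()` call registers the routes mentioned above:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    // Enable the authentication and authorization middleware.
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        // At least one of these calls is needed so the controller routes are registered.
        endpoints.MapControllerRoute(
            name: "default",
            pattern: "{controller=Home}/{action=Index}/{id?}");
        // endpoints.MapControllers(); // Required if MapControllerRoute() isn't called.
    });
}
```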
-### Protect a controller or a controller's method
+### Attribute for protecting a controller or methods
-You can protect a controller or controller methods using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by only allowing authenticated users, which means that authentication challenge can be started to access the controller if the user isn't authenticated.
+You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can then be started to access the controller if the user isn't authenticated.
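As a small illustration (the controller name here is made up, not taken from the sample), the attribute can go on the class or on individual actions:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize] // Every action on this controller requires an authenticated user.
public class ProfileController : Controller
{
    public IActionResult Index() => View();

    [AllowAnonymous] // Opts a single action back out of the requirement.
    public IActionResult Public() => View();
}
```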
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
You can protect a controller or controller methods using the `[Authorize]` attri
The GitHub repo that contains this ASP.NET Core tutorial includes instructions and more code samples that show you how to:
-- Add authentication to a new ASP.NET Core Web application
-- Call Microsoft Graph, other Microsoft APIs, or your own web APIs
-- Add authorization
-- Sign in users in national clouds or with social identities
+- Add authentication to a new ASP.NET Core web application.
+- Call Microsoft Graph, other Microsoft APIs, or your own web APIs.
+- Add authorization.
+- Sign in users in national clouds or with social identities.
> [!div class="nextstepaction"]
> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
active-directory Sample V2 Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/sample-v2-code.md
The following samples illustrate web applications that sign in users. Some sampl
| ![This image shows the ASP.NET Core logo](media/sample-v2-code/logo_NETcore.png)</p>ASP.NET Core | [ASP.NET Core WebApp signs-in users tutorial](https://aka.ms/aspnetcore-webapp-sign-in) | Same sample in the [ASP.NET Core web app calls Microsoft Graph](https://aka.ms/aspnetcore-webapp-call-msgraph) phase</p>Advanced sample [Accessing the logged-in user's token cache from background apps, APIs and services](https://github.com/Azure-Samples/ms-identity-dotnet-advanced-token-cache) | | ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo_NETframework.png)</p>ASP.NET Core | [AD FS to Azure AD application migration playbook for developers](https://github.com/Azure-Samples/ms-identity-dotnet-adfs-to-aad) to learn how to safely and securely migrate your applications integrated with Active Directory Federation Services (AD FS) to Azure Active Directory (Azure AD) | | | ![This image shows the ASP.NET Framework logo](media/sample-v2-code/logo_NETframework.png)</p> ASP.NET | [ASP.NET Quickstart](https://github.com/AzureAdQuickstarts/AppModelv2-WebApp-OpenIDConnect-DotNet) </p> [dotnet-webapp-openidconnect-v2](https://github.com/azure-samples/active-directory-dotnet-webapp-openidconnect-v2) | [dotnet-admin-restricted-scopes-v2](https://github.com/azure-samples/active-directory-dotnet-admin-restricted-scopes-v2) </p> |[msgraph-training-aspnetmvcapp](https://github.com/microsoftgraph/msgraph-training-aspnetmvcapp)
-| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) |[Java Servlet web app chapter wise tutorial - Chapter 1](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication)| [Java Servlet web app chapter wise tutorial - Chapter 2](https://github.com/Azure-Samples/ms-identity-java-servlet-webapp-authentication) |
-| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | | [ms-identity-java-webapp](https://github.com/Azure-Samples/ms-identity-java-webapp) |
+| ![This image shows the Java logo](medi) Sign in with AAD| |
+| ![This image shows the Java logo](medi) Sign in with B2C |
+| ![This image shows the Java logo](medi) Sign in with AAD and call Graph|
+| ![This image shows the Java logo](medi) Sign in with AAD and control access with Roles claim| |
+| ![This image shows the Java logo](medi) Sign in with AAD and control access with Groups claim|
+| ![This image shows the Java logo](medi) Deploy to Azure App Service|
+| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | | [ms-identity-java-webapp](https://github.com/Azure-Samples/ms-identity-java-webapp) |
| ![This image shows the Java logo](media/sample-v2-code/logo_java.png) | [ms-identity-b2c-java-servlet-webapp-authentication](https://github.com/Azure-Samples/ms-identity-b2c-java-servlet-webapp-authentication)| | | ![This image shows the Node.js logo](media/sample-v2-code/logo_nodejs.png)</p>Node.js (MSAL Node) | [Express web app signs-in users tutorial](https://github.com/Azure-Samples/ms-identity-node) | |
-| ![This image shows the Python logo](media/sample-v2-code/logo_python.png) | [ms-identity-python-flask-webapp-authentication](https://github.com/Azure-Samples/ms-identity-python-flask-webapp-authentication) | [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp) |
-| ![This image shows the Python logo](medi) signs-in users and calls Graph tutorial |
-| ![This image shows the Python logo](medi) signs-in users with B2C | |
+| ![This image shows the Python logo](medi) Sign in with AAD | |
+| ![This image shows the Python logo](medi) Sign in with B2C | |
+| ![This image shows the Python logo](medi) Sign in with AAD and Call Graph |
+| ![This image shows the Python logo](medi) Deploy to Azure App Service |
+| ![This image shows the Python logo](medi) Sign in with AAD | |
+| ![This image shows the Python logo](medi) Sign in with B2C | |
+| ![This image shows the Python logo](medi) Sign in with AAD and Call Graph|
+| ![This image shows the Python logo](medi) Deploy to Azure App Service |
+| ![This image shows the Python logo](media/sample-v2-code/logo_python.png) | | [Python Flask web app](https://github.com/Azure-Samples/ms-identity-python-webapp) |
| ![This image shows the Ruby logo](media/sample-v2-code/logo_ruby.png) | | [msgraph-training-rubyrailsapp](https://github.com/microsoftgraph/msgraph-training-rubyrailsapp) | | ![This image shows the Blazor logo](media/sample-v2-code/logo-blazor.png)</p>Blazor Server | [Blazor Server app signs-in users tutorial](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-OIDC) | [Blazor Server app calls Microsoft Graph](https://github.com/Azure-Samples/ms-identity-blazor-server/tree/main/WebApp-graph-user/Call-MSGraph)</p>Chapterwise Tutorial: [Blazor Server app to sign-in users and call APIs with Azure Active Directory](https://github.com/Azure-Samples/ms-identity-blazor-server) |
active-directory Box Userprovisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/box-userprovisioning-tutorial.md
To configure Azure AD integration with Box, you need the following items:
> [!NOTE]
> Apps need to be enabled in the Box application first.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ To test the steps in this tutorial, follow these recommendations: - Do not use your production environment, unless it is necessary.
active-directory Envoy Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/envoy-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Envoy tenant](https://envoy.com/pricing/). * A user account in Envoy with Admin permissions.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Workplacebyfacebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplacebyfacebook-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
> [!NOTE]
> To test the steps in this tutorial, we do not recommend using a production environment.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ To test the steps in this tutorial, you should follow these recommendations: - Do not use your production environment, unless it is necessary.
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
automation Automation Runbook Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-gallery.md
Title: Use Azure Automation runbooks and modules in PowerShell Gallery
description: This article tells how to use runbooks and modules from Microsoft and the community in PowerShell Gallery. Previously updated : 01/08/2021 Last updated : 03/04/2021 # Use runbooks and modules in PowerShell Gallery
You can only import directly from the PowerShell Gallery using the Azure portal.
## Modules in PowerShell Gallery
-PowerShell modules contain cmdlets that you can use in your runbooks, and existing modules that you can install in Azure Automation are available in the [PowerShell Gallery](https://www.powershellgallery.com). You can launch this gallery from the Azure portal and install them directly into Azure Automation. You can also download them and install them manually.
+PowerShell modules contain cmdlets that you can use in your runbooks. Existing modules that you can install in Azure Automation are available in the [PowerShell Gallery](https://www.powershellgallery.com). You can launch this gallery from the Azure portal and install the modules directly into Azure Automation, or you can manually download and install them.
## Common scenarios available in PowerShell Gallery
The list below contains a few runbooks that support common scenarios. For a full
## Import a PowerShell runbook from the runbook gallery with the Azure portal 1. In the Azure portal, open your Automation account.
-2. Select **Runbooks gallery** under **Process Automation**.
-3. Select **Source: PowerShell Gallery**.
-4. Locate the gallery item you want and select it to view its details. On the left, you can enter additional search parameters for the publisher and type.
+1. Select **Runbooks gallery** under **Process Automation**.
+1. Select **Source: PowerShell Gallery**. This shows a list of available runbooks that you can browse.
+1. You can use the search box above the list to narrow the list, or you can use the filters to narrow the display by publisher, type, and sort. Locate the gallery item you want and select it to view its details.
- ![Browse gallery](media/automation-runbook-gallery/browse-gallery.png)
+ :::image type="content" source="media/automation-runbook-gallery/browse-gallery-sm.png" alt-text="Browsing the runbook gallery" lightbox="media/automation-runbook-gallery/browse-gallery-lg.png":::
-5. Click on **View source project** to view the item in the [Azure Automation GitHub Organization](https://github.com/azureautomation).
-6. To import an item, click on it to view its details and then click **Import**.
+1. To import an item, click **Import** on the details blade.
- ![Import button](media/automation-runbook-gallery/gallery-item-detail.png)
+ :::image type="content" source="media/automation-runbook-gallery/gallery-item-detail-sm.png" alt-text="Show a runbook gallery item detail" lightbox="media/automation-runbook-gallery/gallery-item-detail-lg.png":::
-7. Optionally, change the name of the runbook and then click **OK** to import the runbook.
-8. The runbook appears on the **Runbooks** tab for the Automation account.
+1. Optionally, change the name of the runbook and then click **OK** to import the runbook.
+1. The runbook appears on the **Runbooks** tab for the Automation account.
+
+## Import a PowerShell runbook from GitHub with the Azure portal
+
+1. In the Azure portal, open your Automation account.
+1. Select **Runbooks gallery** under **Process Automation**.
+1. Select **Source: GitHub**.
+1. You can use the filters above the list to narrow the display by publisher, type, and sort. Locate the gallery item you want and select it to view its details.
+
+ :::image type="content" source="media/automation-runbook-gallery/browse-gallery-github-sm.png" alt-text="Browsing the GitHub gallery" lightbox="media/automation-runbook-gallery/browse-gallery-github-lg.png":::
+
+1. To import an item, click **Import** on the details blade.
+
+ :::image type="content" source="media/automation-runbook-gallery/gallery-item-details-blade-github-sm.png" alt-text="Detailed view of a runbook from the GitHub gallery" lightbox="media/automation-runbook-gallery/gallery-item-details-blade-github-lg.png":::
+
+1. Optionally, change the name of the runbook and then click **OK** to import the runbook.
+1. The runbook appears on the **Runbooks** tab for the Automation account.
## Add a PowerShell runbook to the gallery
Microsoft encourages you to add runbooks to the PowerShell Gallery that you thin
## Import a module from the module gallery with the Azure portal 1. In the Azure portal, open your Automation account.
-2. Select **Modules** under **Shared Resources** to open the list of modules.
-3. Click **Browse gallery** from the top of the page.
+1. Select **Modules** under **Shared Resources** to open the list of modules.
+1. Click **Browse gallery** from the top of the page.
- ![Module gallery](media/automation-runbook-gallery/modules-blade.png)
+ :::image type="content" source="media/automation-runbook-gallery/modules-blade-sm.png" alt-text="View of the module gallery" lightbox="media/automation-runbook-gallery/modules-blade-lg.png":::
-4. On the Browse gallery page, you can search by the following fields:
+1. On the Browse gallery page, you can use the search box to find matches in any of the following fields:
* Module Name
* Tags
* Author
* Cmdlet/DSC resource name
-5. Locate a module that you're interested in and select it to view its details.
+1. Locate a module that you're interested in and select it to view its details.
When you drill into a specific module, you can view more information. This information includes a link back to the PowerShell Gallery, any required dependencies, and all of the cmdlets or DSC resources that the module contains.
- ![PowerShell module details](media/automation-runbook-gallery/gallery-item-details-blade.png)
+ :::image type="content" source="media/automation-runbook-gallery/gallery-item-details-blade-sm.png" alt-text="Detailed view of a module from the gallery" lightbox="media/automation-runbook-gallery/gallery-item-details-blade-lg.png":::
-6. To install the module directly into Azure Automation, click **Import**.
-7. On the Import pane, you can see the name of the module to import. If all the dependencies are installed, the **OK** button is activated. If you're missing dependencies, you need to import those dependencies before you can import this module.
-8. On the Import pane, click **OK** to import the module. While Azure Automation imports a module to your account, it extracts metadata about the module and the cmdlets. This action might take a couple of minutes since each activity needs to be extracted.
-9. You receive an initial notification that the module is being deployed and another notification when it has completed.
-10. After the module is imported, you can see the available activities. You can use module resources in your runbooks and DSC resources.
+1. To install the module directly into Azure Automation, click **Import**.
+1. On the Import pane, you can see the name of the module to import. If all the dependencies are installed, the **OK** button is activated. If you're missing dependencies, you need to import those dependencies before you can import this module.
+1. On the Import pane, click **OK** to import the module. While Azure Automation imports a module to your account, it extracts metadata about the module and the cmdlets. This action might take a couple of minutes since each activity needs to be extracted.
+1. You receive an initial notification that the module is being deployed and another notification when it has completed.
+1. After the module is imported, you can see the available activities. You can use module resources in your runbooks and DSC resources.
> [!NOTE]
> Modules that only support PowerShell core are not supported in Azure Automation and are unable to be imported in the Azure portal, or deployed directly from the PowerShell Gallery.
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-arc Restore Adventureworks Sample Db Into Postgresql Hyperscale Server Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/restore-adventureworks-sample-db-into-postgresql-hyperscale-server-group.md
Title: Restore the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
+ Title: Import the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
description: Restore the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
Last updated 09/22/2020
-# Restore the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
+# Import the AdventureWorks sample database to Azure Arc enabled PostgreSQL Hyperscale
[AdventureWorks](/sql/samples/adventureworks-install-configure) is a sample OLTP database used in tutorials and examples. It's provided and maintained by Microsoft as part of the [SQL Server samples GitHub repository](https://github.com/microsoft/sql-server-samples/tree/master/samples/databases).
An open-source project has converted the AdventureWorks database to be compatibl
- [Original project](https://github.com/lorint/AdventureWorks-for-Postgres) - [Follow on project that pre-converts the CSV files to be compatible with PostgreSQL](https://github.com/NorfolkDataSci/adventure-works-postgres)
-This document describes a simple process to get the AdventureWorks sample database restored into your PostgreSQL Hyperscale server group.
+This document describes a simple process to get the AdventureWorks sample database imported into your PostgreSQL Hyperscale server group.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
Run a command like this to download the files replace the value of the pod name
> Your container will need to have Internet connectivity over 443 to download the file from GitHub.

> [!NOTE]
-> Use the pod name of the Coordinator node of the Postgres Hyperscale server group. Its name is <server group name>-0. If you are not sure of the pod name run the command `kubectl get pod`
+> Use the pod name of the Coordinator node of the Postgres Hyperscale server group. Its name is <server group name>c-0 (for example, postgres01c-0, where c stands for Coordinator node). If you are not sure of the pod name, run the command `kubectl get pod`.
```console kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql"
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- /bin/bash
#kubectl exec postgres02-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/aks/arm_template/postgres_hs/AdventureWorks.sql" ```
-## Step 2: Restore the AdventureWorks database
+## Step 2: Import the AdventureWorks database
Similarly, you can run a kubectl exec command to use the psql CLI tool that is included in the PostgreSQL Hyperscale server group containers to create and load the database.
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --use
#kubectl exec postgres02-0 -n arc -c postgres -- psql --username postgres -c 'CREATE DATABASE "adventureworks";' ```
-Then, run a command like this to restore the database substituting the value of the pod name and the namespace name before you run it.
+Then, run a command like this to import the database, substituting the values of the pod name and the namespace name before you run it.
```console kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -d adventureworks -f /tmp/AdventureWorks.sql
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
Controls the logging behaviors of the function app, including Application Insigh
|Property |Default | Description | |||| |fileLoggingMode|debugOnly|Defines what level of file logging is enabled. Options are `never`, `always`, `debugOnly`. |
-|logLevel|n/a|Object that defines the log category filtering for functions in the app. Versions 2.x and later follow the ASP.NET Core layout for log category filtering. This setting lets you filter logging for specific functions. For more information, see [Log filtering](/aspnet/core/fundamentals/logging/?view=aspnetcore-2.1&preserve-view=true#log-filtering) in the ASP.NET Core documentation. |
+|logLevel|n/a|Object that defines the log category filtering for functions in the app. This setting lets you filter logging for specific functions. For more information, see [Configure log levels](configure-monitoring.md#configure-log-levels). |
|console|n/a| The [console](#console) logging setting. | |applicationInsights|n/a| The [applicationInsights](#applicationinsights) setting. |
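As a hedged illustration of the `logLevel` object described in the table above, a host.json sketch might look like the following; the function name is a placeholder:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Warning",
      "Host.Results": "Error",
      "Function.MyHttpTrigger": "Information"
    }
  }
}
```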
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-overview.md
The main differences in monitoring a Windows Server cluster compared to a Linux
Check out the following video providing an intermediate level deep dive to help you learn about monitoring your AKS cluster with Container insights.
-> [!VIDEO https://youtu.be/XEdwGvS2AwA]
+> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA]
## How do I access this feature?
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-sql Connect Query Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-content-reference-guide.md
The following table lists connectivity libraries or *drivers* that client applic
| Language | Platform | Additional resources | Download | Get started | | :-- | :-- | :-- | :-- | :-- | | C# | Windows, Linux, macOS | [Microsoft ADO.NET for SQL Server](/sql/connect/ado-net/microsoft-ado-net-sql-server) | [Download](https://www.microsoft.com/net/download/) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/csharp/ubuntu)
-| Java | Windows, Linux, macOS | [Microsoft JDBC driver for SQL Server](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server/) | [Download](https://go.microsoft.com/fwlink/?linkid=852460) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/java/ubuntu)
+| Java | Windows, Linux, macOS | [Microsoft JDBC driver for SQL Server](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server/) | [Download](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/java/ubuntu)
| PHP | Windows, Linux, macOS| [PHP SQL driver for SQL Server](/sql/connect/php/microsoft-php-driver-for-sql-server) | [Download](/sql/connect/php/download-drivers-php-sql-server) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/php/ubuntu/) | Node.js | Windows, Linux, macOS | [Node.js driver for SQL Server](/sql/connect/node-js/node-js-driver-for-sql-server/) | [Install](/sql/connect/node-js/step-1-configure-development-environment-for-node-js-development/) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/node/ubuntu) | Python | Windows, Linux, macOS | [Python SQL driver](/sql/connect/python/python-driver-for-sql-server/) | Install choices: <br/> \* [pymssql](/sql/connect/python/pymssql/step-1-configure-development-environment-for-pymssql-python-development/) <br/> \* [pyodbc](/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development/) | [Get started](https://www.microsoft.com/sql-server/developer-get-started/python/ubuntu)
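As a rough sketch of what using one of these drivers looks like (the ADO.NET driver from the first row; the server, database, and credential values are placeholders you'd replace):

```csharp
using System;
using Microsoft.Data.SqlClient;

class QuickConnect
{
    static void Main()
    {
        // Placeholders: substitute your server, database, user, and password.
        var connectionString =
            "Server=tcp:<your-server>.database.windows.net,1433;" +
            "Database=<your-database>;User ID=<user>;Password=<password>;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand("SELECT @@VERSION;", connection);
        Console.WriteLine(command.ExecuteScalar());
    }
}
```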
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
Monitor the copying process by querying the [sys.databases](/sql/relational-data
> [!IMPORTANT]
> If you need to create a copy with a substantially smaller service objective than the source, the target database may not have sufficient resources to complete the seeding process, and it can cause the copy operation to fail. In this scenario, use a geo-restore request to create a copy in a different server and/or a different region. See [Recover an Azure SQL Database using database backups](recovery-using-backups.md#geo-restore) for more information.
-## Azure roles to manage database copy
+## Azure RBAC roles and permissions to manage database copy
To create a database copy, you will need to be in the following roles
azure-sql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SQL Database description: Lists Azure Policy Regulatory Compliance controls available for Azure SQL Database and SQL Managed Instance. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-vmware-solution-platform-updates.md
+
+ Title: Platform updates for Azure VMware Solution
+description: Learn about the platform updates to Azure VMware Solution.
+ Last updated : 03/05/2021++
+# Platform updates for Azure VMware Solution
++
+## March 4, 2021
+
+Important updates to Azure VMware Solution will be applied starting in March 2021. You'll receive notification through Azure Service Health that includes the timeline of the maintenance. In this article, you learn what to expect during this maintenance operation and the changes to your private cloud.
+
+- Azure VMware Solution will apply patches to ESXi in existing private clouds to [VMware ESXi 6.7, Patch Release ESXi670-202011002](https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202011002.html) through March 15, 2021.
+
+- Documented workarounds for the vSphere stack, as per [VMSA-2021-0002](https://www.vmware.com/security/advisories/VMSA-2021-0002.html), will also be applied through March 15, 2021.
+
+>[!NOTE]
+>This is non-disruptive and should not impact Azure VMware Solution services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter and clear automatically as the maintenance progresses.
++
+## Post update
+Once the maintenance is complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
+++
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-Azure Communication Services is committed to helping our customers meet their privacy and personal data requirements. As a developer using Communication Services with a direct relationship with humans using the application, you are potentially a controller of their data. Since Azure Communication Services is storing this data on your behalf, we are most likely a processor of this data. This page summarizes how the service retains data and how you can identify, export, and delete this data.
+Azure Communication Services is committed to helping our customers meet their privacy and personal data requirements. As a developer using Communication Services with a direct relationship with humans using the application, you are potentially a controller of their data. Since Azure Communication Services is storing and encrypting this data at rest on your behalf, we are most likely a processor of this data. This page summarizes how the service retains data and how you can identify, export, and delete this data.
## Data residency
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
cosmos-db Troubleshoot Dot Net Sdk Request Timeout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk-request-timeout.md
description: Learn how to diagnose and fix .NET SDK request timeout exceptions.
Previously updated : 08/06/2020 Last updated : 03/05/2021
The following list contains known causes and solutions for request timeout excep
### High CPU utilization
High CPU utilization is the most common case. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the SDK might open multiple connections for a single query.
+If the error contains `TransportException` information, it might also contain `CPU history`:
+
+```
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
+CPU count: 8)
+```
+
+* If the CPU measurements are over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, there are gaps, or the measurement times indicate longer intervals between measurements), the cause is thread starvation. In this case, the solution is to investigate the sources of the thread starvation (potentially locked threads) or to scale the machines to a larger resource size.
+ #### Solution: The client application that uses the SDK should be scaled up or out.
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk.md
Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK. Previously updated : 02/05/2021 Last updated : 03/05/2021
If your app is deployed on [Azure Virtual Machines without a public IP address](
### <a name="high-network-latency"></a>High network latency High network latency can be identified by using the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring) in the V2 SDK or [diagnostics](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics#Microsoft_Azure_Cosmos_ResponseMessage_Diagnostics) in V3 SDK.
-If no [timeouts](troubleshoot-dot-net-sdk-request-timeout.md) are present and the diagnostics show single requests where the high latency is evident on the difference between `ResponseTime` and `RequestStartTime`, like so (>300 milliseconds in this example):
+If no [timeouts](troubleshoot-dot-net-sdk-request-timeout.md) are present, the diagnostics can show single requests where the high latency is evident.
+
+# [V3 SDK](#tab/diagnostics-v3)
+
+Diagnostics can be obtained from any `ResponseMessage`, `ItemResponse`, `FeedResponse`, or `CosmosException` by the `Diagnostics` property:
+
+```csharp
+ItemResponse<MyItem> response = await container.CreateItemAsync<MyItem>(item);
+Console.WriteLine(response.Diagnostics.ToString());
+```
+
+Network interactions in the diagnostics will look like this example:
+
+```json
+{
+ "name": "Microsoft.Azure.Documents.ServerStoreModel Transport Request",
+ "id": "0e026cca-15d3-4cf6-bb07-48be02e1e82e",
+ "component": "Transport",
+ "start time": "12: 58: 20: 032",
+ "duration in milliseconds": 1638.5957
+}
+```
+
+The `duration in milliseconds` value shows the latency.
+
+# [V2 SDK](#tab/diagnostics-v2)
+
+The diagnostics are available when the client is configured in [direct mode](sql-sdk-connection-modes.md), through the `RequestDiagnosticsString` property:
+
+```csharp
+ResourceResponse<Document> response = await client.ReadDocumentAsync(documentLink, new RequestOptions() { PartitionKey = new PartitionKey(partitionKey) });
+Console.WriteLine(response.RequestDiagnosticsString);
+```
+
+The latency is the difference between `ResponseTime` and `RequestStartTime`:
```bash
RequestStartTime: 2020-03-09T22:44:49.5373624Z, RequestEndTime: 2020-03-09T22:44:49.9279906Z, Number of regions attempted:1
ResponseTime: 2020-03-09T22:44:49.9279906Z, StoreResult: StorePhysicalAddress: rntbd://..., ...
```
+
This latency can have multiple causes:
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 02/04/2021 Last updated : 03/04/2021 # Data transformation expressions in mapping data flow
___
## Conversion functions
-Conversion functions are used to convert data and data types
+Conversion functions are used to convert data and test for data types
+
+### <code>isBoolean</code>
+<code><b>isBoolean(<value1> : string) => boolean</b></code><br/><br/>
+Checks if the string value is a boolean value according to the rules of ``toBoolean()``
+* ``isBoolean('true') -> true``
+* ``isBoolean('no') -> true``
+* ``isBoolean('microsoft') -> false``
+
+### <code>isByte</code>
+<code><b>isByte(<value1> : string) => boolean</b></code><br/><br/>
+Checks if the string value is a byte value given an optional format according to the rules of ``toByte()``
+* ``isByte('123') -> true``
+* ``isByte('chocolate') -> false``
+
+### <code>isDate</code>
+<code><b>isDate (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, the default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``.
+* ``isDate('2012-8-18') -> true``
+* ``isDate('12/18--234234', 'MM/dd/yyyy') -> false``
+
+### <code>isShort</code>
+<code><b>isShort (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a short value given an optional format according to the rules of ``toShort()``.
+* ``isShort('123') -> true``
+* ``isShort('$123', '$###') -> true``
+* ``isShort('microsoft') -> false``
+
+### <code>isInteger</code>
+<code><b>isInteger (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()``.
+* ``isInteger('123') -> true``
+* ``isInteger('$123', '$###') -> true``
+* ``isInteger('microsoft') -> false``
+
+### <code>isLong</code>
+<code><b>isLong (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a long value given an optional format according to the rules of ``toLong()``.
+* ``isLong('123') -> true``
+* ``isLong('$123', '$###') -> true``
+* ``isLong('gunchus') -> false``
+
+### <code>isFloat</code>
+<code><b>isFloat (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a float value given an optional format according to the rules of ``toFloat()``.
+* ``isFloat('123') -> true``
+* ``isFloat('$123.45', '$###.00') -> true``
+* ``isFloat('icecream') -> false``
+
+### <code>isDouble</code>
+<code><b>isDouble (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a double value given an optional format according to the rules of ``toDouble()``.
+* ``isDouble('123') -> true``
+* ``isDouble('$123.45', '$###.00') -> true``
+* ``isDouble('icecream') -> false``
+
+### <code>isDecimal</code>
+<code><b>isDecimal (<i>&lt;value1&gt;</i> : string) => boolean</b></code><br/><br/>
+Checks if the string value is a decimal value according to the rules of ``toDecimal()``.
+* ``isDecimal('123.45') -> true``
+* ``isDecimal('12/12/2000') -> false``
+
+### <code>isTimestamp</code>
+<code><b>isTimestamp (<i>&lt;value1&gt;</i> : string, [<i>&lt;format&gt;</i>: string]) => boolean</b></code><br/><br/>
+Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the timestamp format is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional time zone in the form of 'GMT', 'PST', 'UTC', or 'America/Cayman'. Timestamps support up to millisecond accuracy with a value of 999.
+* ``isTimestamp('2016-12-31 00:12:00') -> true``
+* ``isTimestamp('2016-12-31T00:12:00', 'yyyy-MM-dd\\'T\\'HH:mm:ss', 'PST') -> true``
+* ``isTimestamp('2012-8222.18') -> false``
### <code>toBase64</code>
<code><b>toBase64(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-customer-managed-key.md
To learn more about user-assigned managed identity, see [Managed identity types]
1. Make sure that the User-assigned Managed Identity (UA-MI) has _Get_, _Unwrap Key_, and _Wrap Key_ permissions to Key Vault.
1. Under the __Advanced__ tab, check the box for _Enable encryption using a customer managed key_.
- :::image type="content" source="media/enable-customer-managed-key/06-uami-cmk.png" alt-text="Screenshot of Advanced tab for data factory creation experience in Azure portal.":::
+ :::image type="content" source="media/enable-customer-managed-key/06-user-assigned-managed-identity.png" alt-text="Screenshot of Advanced tab for data factory creation experience in Azure portal.":::
-1. Provide the url for Key Vault
+1. Provide the URL for the customer managed key stored in Key Vault
1. Select an appropriate user assigned managed identity to authenticate with Key Vault
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
databox-online Azure Stack Edge Contact Microsoft Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-contact-microsoft-support.md
Previously updated : 01/07/2021 Last updated : 03/05/2021 # Open a support ticket for Azure Stack Edge Pro and Azure Data Box Gateway + This article applies to Azure Stack Edge Pro and Azure Data Box Gateway both of which are managed by the Azure Stack Edge Pro / Azure Data Box Gateway service. If you encounter any issues with your service, you can create a service request for technical support. This article walks you through: * How to create a support request.
databox-online Azure Stack Edge Gpu 2008 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2008-release-notes.md
Previously updated : 09/07/2020 Last updated : 03/05/2021 # Azure Stack Edge Pro with GPU Preview release notes + The following release notes identify the critical open issues and the resolved issues for 2008 preview release for your Azure Stack Edge Pro devices with GPU. The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your Azure Stack Edge Pro device, carefully review the information contained in the release notes.
-This article applies to the following software release - **Azure Stack Edge Pro 2008**.
+This article applies to the following software release - **Azure Stack Edge Pro 2008**.
<!-- **2.1.1328.1904**-->
The following table provides a summary of known issues for the Azure Stack Edge
|**10.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Do not use reserved IPs.|
|**11.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
|**12.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules will not run on the Kubernetes cluster as the hosting platform for IoT Edge on Azure Stack Edge device.|The modules will need to be modified before these are deployed on the Azure Stack Edge device. For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
-|**13.**|Kubernetes |File-based bind mounts are not supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to hostpath directory or create and thus file-based bind mounts cannot be bound to paths in IoT Edge containers.|
+|**13.**|Kubernetes |File-based bind mounts are not supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers.|
|**14.**|Kubernetes |If you bring your own certificates for IoT Edge and add those on your Azure Stack Edge device, the new certificates are not picked up as part of the Helm charts update.|To work around this problem, [Connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md). Restart `iotedged` and `edgehub` pods.|
|**15.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
databox-online Azure Stack Edge Gpu 2010 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2010-release-notes.md
+
+ Title: Azure Stack Edge Pro GA release notes| Microsoft Docs
+description: Describes critical open issues and resolutions for the Azure Stack Edge Pro running general availability release.
+
+
+ Last updated : 03/05/2021
+# Azure Stack Edge Pro with GPU General Availability (GA) release notes
++
+The following release notes identify the critical open issues and the resolved issues for general availability (GA) release for your Azure Stack Edge Pro devices with GPU.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your Azure Stack Edge Pro device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge Pro 2010** release, which maps to software version number **2.1.1377.2170**.
+
+## What's new
+
+The following new features are available in the Azure Stack Edge 2010 release.
+
+- **Storage classes** - In this release, Storage classes are available that let you dynamically provision storage. For more information, see [Kubernetes storage management on your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-kubernetes-storage.md#dynamicprovisioning).
+- **Kubernetes dashboard with metrics server** - In this release, a Kubernetes Dashboard is added with a metrics server add-on. You can use the dashboard to get an overview of the applications running on your Azure Stack Edge Pro device, view status of Kubernetes cluster resources, and see any errors that have occurred on the device. The Metrics server aggregates the CPU and memory usage across Kubernetes resources on the device. For more information, see [Use Kubernetes dashboard to monitor your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md).
+- **Azure Arc enabled Kubernetes on Azure Stack Edge Pro** - Beginning this release, you can deploy application workloads on your Azure Stack Edge Pro device via Azure Arc enabled Kubernetes. Azure Arc is a hybrid management tool that allows you to deploy applications on your Kubernetes clusters. For more information, see [Deploy workloads via Azure Arc on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md).
+
+## Known issues
+
+The following table provides a summary of known issues for the Azure Stack Edge Pro device.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this GA release, the following features: Local Azure Resource Manager, VMs, Kubernetes, Azure Arc enabled Kubernetes, Multi-Process service (MPS) for GPU - are all available in preview for your Azure Stack Edge Pro device. |These features will be generally available in a later release. |
+| **2.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [https://docs.microsoft.com/azure/iot-edge/tutorial-store-data-sql-server#create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download `sqlcmd` on your client machine from https://docs.microsoft.com/sql/tools/sqlcmd-utility </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li>After this, steps 3-4 from the current documentation should be identical. </li></ul> |
+| **3.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<ul><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**4.**|Throttling|During throttling, if new writes are not allowed into the device, writes done by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied|
+|**5.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits are not provided for AzCopy, then it could potentially send a large number of requests to the device and result in issues with the service.|
+|**6.**|Tiered storage accounts|The following apply when using tiered storage accounts:<ul><li> Only block blobs are supported. Page blobs are not supported.</li><li>There is no snapshot or copy API support.</li><li> Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.</li></ul>||
+|**7.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute is not used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**8.**|Kubernetes cluster|When applying an update on your device that is running a kubernetes cluster, the kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside of a replication controller without specifying a replica set, then these pods will not be automatically restored after the device update. You will need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
+|**9.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).||
+|**10.**|Azure Arc enabled Kubernetes |For the GA release, Azure Arc enabled Kubernetes is updated from version 0.1.18 to 0.2.9. As the Azure Arc enabled Kubernetes update is not supported on Azure Stack Edge device, you will need to redeploy Azure Arc enabled Kubernetes.|Follow these steps:<ol><li>[Apply device software and Kubernetes updates](azure-stack-edge-gpu-install-update.md).</li><li>Connect to the [PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md).</li><li>Remove the existing Azure Arc agent. Type: `Remove-HcsKubernetesAzureArcAgent`.</li><li>Deploy [Azure Arc to a new resource](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md). Do not use an existing Azure Arc resource.</li></ol>|
+|**11.**|Azure Arc enabled Kubernetes|Azure Arc deployments are not supported if web proxy is configured on your Azure Stack Edge Pro device.||
+|**12.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Do not use reserved IPs.|
+|**13.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**14.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
+|**15.**|Kubernetes |File-based bind mounts are not supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**16.**|Kubernetes |If you bring your own certificates for IoT Edge and add those on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**17.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
+|**18.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
+|**19.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
+|**20.**|Compute + web proxy + update |If you have compute configured with web proxy, then compute update may fail. |We recommend that you disable compute before the update. |
+
+<!--|**18.**|Azure Private Edge Zone (Preview) |There is a known issue with Virtual Network Function VM if the VM was created on Azure Stack Edge device running earlier preview builds such as 2006/2007b and then the device was updated to 2009 GA release. The issue is that the VNF information can't be retrieved or any new VNFs can't be created unless the VNF VMs are deleted before the device is updated. |Before you update Azure Stack Edge device to 2009 release, use the PowerShell command `get-mecvnf` followed by `remove-mecvnf <VNF guid>` to remove all Virtual Network Function VMs one at a time. After the upgrade, you will need to redeploy the same VNFs.|-->
+
+## Next steps
+
+- [Prepare to deploy Azure Stack Edge Pro device with GPU](azure-stack-edge-gpu-deploy-prep.md)
databox-online Azure Stack Edge Gpu 2101 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2101-release-notes.md
Previously updated : 02/08/2021 Last updated : 02/22/2021 # Azure Stack Edge 2101 release notes + The following release notes identify the critical open issues and the resolved issues for the 2101 release for your Azure Stack Edge devices. These release notes are applicable for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. Features and issues that correspond to a specific model are called out wherever applicable. The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your device, carefully review the information contained in the release notes.
databox-online Azure Stack Edge Gpu Activation Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-activation-key-vault.md
Previously updated : 10/10/2020 Last updated : 02/22/2021 # Azure Key Vault integration with Azure Stack Edge + Azure Key Vault is integrated with Azure Stack Edge resource for secret management. This article provides details on how an Azure Key Vault is created for Azure Stack Edge resource during device activation and is then used for secret management.
databox-online Azure Stack Edge Gpu Certificate Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-certificate-requirements.md
Previously updated : 11/17/2020 Last updated : 02/22/2021 # Certificate requirements + This article describes the certificate requirements that must be met before certificates can be installed on your Azure Stack Edge Pro device. The requirements are related to PFX certificates, issuing authority, certificate subject name and subject alternative name, and supported certificate algorithms. ## Certificate issuing authority
databox-online Azure Stack Edge Gpu Certificate Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-certificate-troubleshooting.md
Previously updated : 11/20/2020 Last updated : 02/22/2021 # Troubleshooting certificate errors + The article provides troubleshooting common certificate errors when installing certificates to your Azure Stack Edge Pro device. ## Common certificate errors
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
Previously updated : 10/06/2020 Last updated : 02/22/2021 # Manage an Azure Stack Edge Pro GPU device via Windows PowerShell + Azure Stack Edge Pro solution lets you process data and send it over the network to Azure. This article describes some of the configuration and management tasks for your Azure Stack Edge Pro device. You can use the Azure portal, local web UI, or the Windows PowerShell interface to manage your device. This article focuses on how you can connect to the PowerShell interface of the device and the tasks you can do using this interface.
databox-online Azure Stack Edge Gpu Create Certificates Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-certificates-tool.md
Previously updated : 11/24/2020 Last updated : 02/22/2021 # Create certificates for your Azure Stack Edge Pro using Azure Stack Hub Readiness Checker tool
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to create certificates for your Azure Stack Edge Pro using the Azure Stack Hub Readiness Checker tool.
databox-online Azure Stack Edge Gpu Create Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-kubernetes-cluster.md
Previously updated : 01/27/2021 Last updated : 02/22/2021 # Connect to and manage a Kubernetes cluster via kubectl on your Azure Stack Edge Pro GPU device + On your Azure Stack Edge Pro device, a Kubernetes cluster is created when you configure compute role. Once the Kubernetes cluster is created, then you can connect to and manage the cluster locally from a client machine via a native tool such as *kubectl*. This article describes how to connect to a Kubernetes cluster on your Azure Stack Edge Pro device and then manage it using *kubectl*.
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 01/25/2021 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device. # Create custom VM images for your Azure Stack Edge Pro device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
To deploy VMs on your Azure Stack Edge Pro device, you need to be able to create custom VM images that you can use to create VMs. This article describes the steps that are required to create Linux or Windows VM custom images that you can use to deploy VMs on your Azure Stack Edge Pro device.
databox-online Azure Stack Edge Gpu Deploy Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-data-controller.md
Previously updated : 02/08/2021 Last updated : 03/05/2021 # Deploy Azure Data Services on your Azure Stack Edge Pro GPU device This article describes the process of creating an Azure Arc Data Controller and then deploying Azure Data Services on your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md
Previously updated : 11/12/2020 Last updated : 03/05/2021 # Enable Azure Arc on Kubernetes cluster on your Azure Stack Edge Pro GPU device + This article shows you how to enable Azure Arc on an existing Kubernetes cluster on your Azure Stack Edge Pro device. This procedure is intended for those who have reviewed the [Kubernetes workloads on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-workload-management.md) and are familiar with the concepts of [What is Azure Arc enabled Kubernetes (Preview)?](../azure-arc/kubernetes/overview.md).
You can also register resource providers via the `az cli`. For more information,
1. To create a service principal, use the following command via the `az cli`.
- `az ad sp create-for-rbac --skip assignment --name "<Informative name for service principal>"`
+ `az ad sp create-for-rbac --skip-assignment --name "<Informative name for service principal>"`
For information on how to log into the `az cli`, see [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell).
To remove the Azure Arc management, follow these steps:
## Next steps To understand how to run an Azure Arc deployment, see
-[Deploy a stateless PHP Guestbook application with Redis via GitOps on an Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md)
+[Deploy a stateless PHP `Guestbook` application with Redis via GitOps on an Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md)
databox-online Azure Stack Edge Gpu Deploy Compute Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-compute-acceleration.md
Previously updated : 11/05/2020 Last updated : 02/22/2021 # Use compute acceleration on Azure Stack Edge Pro GPU for Kubernetes deployment + This article describes how to use compute acceleration on Azure Stack Edge devices when using Kubernetes deployments. The article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge Gpu Deploy Compute Module Simple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-compute-module-simple.md
Previously updated : 02/03/2021 Last updated : 02/22/2021 Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure. # Tutorial: Run a compute workload with IoT Edge module on Azure Stack Edge Pro GPU
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This tutorial describes how to run a compute workload using an IoT Edge module on your Azure Stack Edge Pro GPU device. After you configure the compute, the device will transform the data before sending it to Azure.
For the simple deployment in this tutorial, you'll need two shares: one Edge sha
    `rsync <source file path> <destination file path>`
- For more information about the `rsync` command, go to [Rsync documentation](https://www.computerhope.com/unix/rsync.htm).
+ For more information about the `rsync` command, go to [`Rsync` documentation](https://www.computerhope.com/unix/rsync.htm).
3. Go to **Cloud storage gateway > Shares** to see the updated list of shares.
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Customer intent: As an IT admin, I need to understand how to configure compute o
# Tutorial: Configure compute on Azure Stack Edge Pro GPU device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
+<!--ALPA WILL VERIFY - [!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This tutorial describes how to configure a compute role and create a Kubernetes cluster on your Azure Stack Edge Pro device.
databox-online Azure Stack Edge Gpu Deploy Gpu Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
Previously updated : 12/21/2020 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs. # GPU VMs for your Azure Stack Edge Pro device + This article provides an overview of GPU virtual machines (VMs) on your Azure Stack Edge Pro device. The article describes how to create a GPU VM and then install GPU driver extension to install appropriate Nvidia drivers. Use the Azure Resource Manager templates to create the GPU VM and install the GPU driver extension. This article applies to Azure Stack Edge Pro GPU and Azure Stack Edge Pro R devices.
databox-online Azure Stack Edge Gpu Deploy Sample Module Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module-marketplace.md
Previously updated : 01/28/2021 Last updated : 02/22/2021 # Deploy a GPU enabled IoT module from Azure Marketplace on Azure Stack Edge Pro GPU device + This article describes how to deploy a Graphics Processing Unit (GPU) enabled IoT Edge module from Azure Marketplace on your Azure Stack Edge Pro device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Deploy Sample Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module.md
Previously updated : 08/31/2020 Last updated : 02/22/2021 # Deploy a GPU enabled IoT module on Azure Stack Edge Pro GPU device + This article describes how to deploy a GPU enabled IoT Edge module on your Azure Stack Edge Pro GPU device. In this article, you learn how to:
databox-online Azure Stack Edge Gpu Deploy Stateful Application Dynamic Provision Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateful-application-dynamic-provision-kubernetes.md
Previously updated : 01/25/2021 Last updated : 02/22/2021 # Use kubectl to run a Kubernetes stateful application with StorageClass on your Azure Stack Edge Pro GPU device + This article shows you how to deploy a single-instance stateful application in Kubernetes using a StorageClass to dynamically provision storage and a deployment. The deployment uses `kubectl` commands on an existing Kubernetes cluster and deploys the MySQL application. This procedure is intended for those who have reviewed the [Kubernetes storage on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-storage.md) and are familiar with the concepts of [Kubernetes storage](https://kubernetes.io/docs/concepts/storage/).
databox-online Azure Stack Edge Gpu Deploy Stateful Application Static Provision Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateful-application-static-provision-kubernetes.md
Previously updated : 01/25/2021 Last updated : 02/22/2021 # Use kubectl to run a Kubernetes stateful application with a PersistentVolume on your Azure Stack Edge Pro device + This article shows you how to deploy a single-instance stateful application in Kubernetes using a PersistentVolume (PV) and a deployment. The deployment uses `kubectl` commands on an existing Kubernetes cluster and deploys the MySQL application. This procedure is intended for those who have reviewed the [Kubernetes storage on Azure Stack Edge Pro device](azure-stack-edge-gpu-kubernetes-storage.md) and are familiar with the concepts of [Kubernetes storage](https://kubernetes.io/docs/concepts/storage/).
databox-online Azure Stack Edge Gpu Deploy Stateless Application Git Ops Guestbook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-git-ops-guestbook.md
Title: Deploy `PHP Guestbook` app on Arc enabled Kubernetes on Azure Stack Edge Pro GPU device| Microsoft Docs
+ Title: Deploy PHP `Guestbook` app on Arc enabled Kubernetes on Azure Stack Edge Pro GPU device| Microsoft Docs
description: Describes how to deploy a PHP `Guestbook` stateless application with Redis using GitOps on an Arc enabled Kubernetes cluster of your Azure Stack Edge Pro device.
Previously updated : 01/25/2021 Last updated : 02/22/2021 # Deploy a PHP `Guestbook` stateless application with Redis on Arc enabled Kubernetes cluster on Azure Stack Edge Pro GPU + This article shows you how to build and deploy a simple, multi-tier web application using Kubernetes and Azure Arc. This example consists of the following components: - A single-instance Redis master to store `guestbook` entries
databox-online Azure Stack Edge Gpu Deploy Stateless Application Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-stateless-application-iot-edge-module.md
Previously updated : 08/26/2020 Last updated : 02/22/2021 # Use IoT Edge module to run a Kubernetes stateless application on your Azure Stack Edge Pro GPU device + This article describes how you can use an IoT Edge module to deploy a stateless application on your Azure Stack Edge Pro device. To deploy the stateless application, you'll take the following steps:
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Custom Script Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md
Previously updated : 01/25/2021 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs. # Deploy Custom Script Extension on VMs running on your Azure Stack Edge Pro device + The Custom Script Extension downloads and runs scripts or commands on virtual machines running on your Azure Stack Edge Pro devices. This article details how to install and run the Custom Script Extension by using an Azure Resource Manager template. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 11/02/2020 Last updated : 02/22/2021 Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro device so I can use it to transform the data before sending it to Azure. # Deploy VMs on your Azure Stack Edge Pro GPU device via the Azure portal + You can create and manage virtual machines (VMs) on an Azure Stack Edge device using Azure portal, templates, Azure PowerShell cmdlets and via Azure CLI/Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge device using the Azure portal. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md
Previously updated : 12/22/2020 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs. # Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell script
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This tutorial describes how to create and manage a VM on your Azure Stack Edge Pro device using an Azure PowerShell script.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md
Previously updated : 01/22/2021 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device. I want to use APIs so that I can efficiently manage my VMs. # Deploy VMs on your Azure Stack Edge device via Azure PowerShell + This article describes how to create and manage a virtual machine (VM) on your Azure Stack Edge device by using Azure PowerShell. The information applies to Azure Stack Edge Pro with GPU (graphical processing unit), Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. ## VM deployment workflow
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
Previously updated : 01/25/2021 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs. # Deploy VMs on your Azure Stack Edge Pro GPU device via templates + This tutorial describes how to create and manage a VM on your Azure Stack Edge Pro device using templates. These templates are JavaScript Object Notation (JSON) files that define the infrastructure and configuration for your VM. In these templates, you specify the resources to deploy and the properties for those resources. Templates are flexible in different environments as they can take parameters as input at runtime from a file. The standard naming structure is `TemplateName.json` for the template and `TemplateName.parameters.json` for the parameters file. For more information on ARM templates, go to [What are Azure Resource Manager templates?](../azure-resource-manager/templates/overview.md).
databox-online Azure Stack Edge Gpu Edge Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-edge-container-registry.md
Previously updated : 11/11/2020 Last updated : 02/22/2021 # Enable Edge container registry on your Azure Stack Edge Pro GPU device + This article describes how to enable the Edge container registry and use it from within the Kubernetes cluster on your Azure Stack Edge Pro device. The example used in the article details how to push an image from a source registry, in this case, Microsoft Container registry, to the registry on the Azure Stack Edge device, the Edge container registry. ### About Edge container registry
databox-online Azure Stack Edge Gpu Enable Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-enable-azure-monitor.md
Previously updated : 02/02/2021 Last updated : 02/22/2021 # Enable Azure Monitor on your Azure Stack Edge Pro GPU device + Monitoring containers on your Azure Stack Edge Pro GPU device is critical, especially when you are running multiple compute applications. Azure Monitor lets you collect container logs and memory and processor metrics from the Kubernetes cluster running on your device. This article describes the steps required to enable Azure Monitor on your device and gather container logs in a Log Analytics workspace. The Azure Monitor metrics store is currently not supported with your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 01/19/2021 Last updated : 02/21/2021 # Update your Azure Stack Edge Pro GPU + This article describes the steps required to install update on your Azure Stack Edge Pro with GPU via the local web UI and via the Azure portal. You apply the software updates or hotfixes to keep your Azure Stack Edge Pro device and the associated Kubernetes cluster on the device up-to-date. The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.
databox-online Azure Stack Edge Gpu Kubernetes Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-networking.md
Previously updated : 08/21/2020 Last updated : 02/21/2021 # Kubernetes networking in your Azure Stack Edge Pro GPU device + On your Azure Stack Edge Pro device, a Kubernetes cluster is created when you configure compute role. Once the Kubernetes cluster is created, then containerized applications can be deployed on the Kubernetes cluster in Pods. There are distinct ways that networking is used for the Pods in your Kubernetes cluster. This article describes the networking in a Kubernetes cluster in general and specifically in the context of your Azure Stack Edge Pro device.
databox-online Azure Stack Edge Gpu Kubernetes Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-overview.md
# Kubernetes on your Azure Stack Edge Pro GPU device + Kubernetes is a popular open-source platform to orchestrate containerized applications. This article provides an overview of Kubernetes and then describes how Kubernetes works on your Azure Stack Edge Pro device. ## About Kubernetes
databox-online Azure Stack Edge Gpu Kubernetes Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-rbac.md
Previously updated : 09/22/2020 Last updated : 02/22/2021 # Kubernetes role-based access control on your Azure Stack Edge Pro GPU device On your Azure Stack Edge Pro device, when you configure compute role, a Kubernetes cluster is created. You can use Kubernetes role-based access control (Kubernetes RBAC) to limit access to the cluster resources on your device.
databox-online Azure Stack Edge Gpu Kubernetes Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-storage.md
Previously updated : 01/28/2021 Last updated : 02/22/2021 # Kubernetes storage management on your Azure Stack Edge Pro GPU device + On your Azure Stack Edge Pro device, a Kubernetes cluster is created when you configure compute role. Once the Kubernetes cluster is created, then containerized applications can be deployed on the Kubernetes cluster in pods. There are distinct ways to provide storage to pods in your Kubernetes cluster. This article describes the methods to provision storage on a Kubernetes cluster in general and specifically in the context of your Azure Stack Edge Pro device.
databox-online Azure Stack Edge Gpu Kubernetes Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-kubernetes-workload-management.md
# Kubernetes workload management on your Azure Stack Edge Pro device + On your Azure Stack Edge Pro device, a Kubernetes cluster is created when you configure compute role. Once the Kubernetes cluster is created, then containerized applications can be deployed on the Kubernetes cluster in Pods. There are distinct ways to deploy workloads in your Kubernetes cluster. This article describes the various methods that can be used to deploy workloads on your Azure Stack Edge Pro device.
databox-online Azure Stack Edge Gpu Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-access-power-connectivity-mode.md
Previously updated : 01/28/2021 Last updated : 02/22/2021 # Manage access, power, and connectivity mode for your Azure Stack Edge Pro GPU + This article describes how to manage the access, power, and connectivity mode for your Azure Stack Edge Pro with GPU device. These operations are performed via the local web UI or the Azure portal. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge Gpu Manage Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-certificates.md
Previously updated : 09/29/2020 Last updated : 02/22/2021 # Use certificates with Azure Stack Edge Pro GPU device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes the types of certificates that can be installed on your Azure Stack Edge Pro device. The article also includes the details for each certificate type along with the procedure to install and identify the expiration date.
databox-online Azure Stack Edge Gpu Manage Device Event Alert Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-manage-device-event-alert-notifications.md
Previously updated : 02/02/2021 Last updated : 02/22/2021 # Manage device event alert notifications on Azure Stack Edge Pro resources + This article describes how to create action rules in the Azure portal to trigger or suppress alert notifications for device events that occur within a resource group, an Azure subscription, or an individual Azure Stack Edge resource. This article applies to all models of Azure Stack Edge. ## About action rules
databox-online Azure Stack Edge Gpu Modify Fpga Modules Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-modify-fpga-modules-gpu.md
Previously updated : 02/03/2021 Last updated : 02/22/2021 # Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device + This article details the changes needed for a docker-based IoT Edge module that runs on Azure Stack Edge Pro FPGA so it can run on a Kubernetes-based IoT Edge platform on Azure Stack Edge Pro GPU device. ## About IoT Edge implementation
databox-online Azure Stack Edge Gpu Monitor Kubernetes Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-monitor-kubernetes-dashboard.md
Previously updated : 09/22/2020 Last updated : 02/22/2021 # Use Kubernetes dashboard to monitor your Azure Stack Edge Pro GPU device + This article describes how to access and use the Kubernetes dashboard to monitor your Azure Stack Edge Pro GPU device. To monitor your device, you can use charts in Azure portal, view the Kubernetes dashboard, or run `kubectl` commands via the PowerShell interface of the device. This article focuses only on the monitoring tasks that can be performed on the Kubernetes dashboard.
databox-online Azure Stack Edge Gpu Prepare Device Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-prepare-device-failure.md
Previously updated : 12/11/2020 Last updated : 02/22/2021 # Prepare for an Azure Stack Edge Pro GPU device failure + This article helps you prepare for a device failure by detailing how to save and back up the device configuration and data on your Azure Stack Edge Pro GPU device. The article does not include steps to back up Kubernetes and IoT containers deployed on your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Proactive Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-proactive-log-collection.md
# Proactive log collection on your Azure Stack Edge device + Proactive log collection gathers system health indicators on your Azure Stack Edge device to help you efficiently troubleshoot any device issues. Proactive log collection is enabled by default. This article describes what is logged, how Microsoft handles the data, and how to disable or enable proactive log collection. The information in this article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge Gpu Recover Device Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-recover-device-failure.md
Previously updated : 12/11/2020 Last updated : 02/22/2021 # Recover from a failed Azure Stack Edge Pro GPU device + This article describes how to recover from a non-tolerable failure on your Azure Stack Edge Pro GPU device. A non-tolerable failure on Azure Stack Edge Pro GPU device requires a device replacement. ## Before you begin
databox-online Azure Stack Edge Gpu Troubleshoot Activation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-activation.md
Previously updated : 10/08/2020 Last updated : 02/22/2021 # Troubleshoot activation issues on your Azure Stack Edge Pro GPU device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to troubleshoot activation issues on your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot.md
Previously updated : 02/04/2021 Last updated : 02/22/2021 # Troubleshoot issues on your Azure Stack Edge Pro GPU device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to troubleshoot issues on your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Virtual Machine Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md
Previously updated : 12/21/2020 Last updated : 02/22/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device by using APIs, so that I can efficiently manage my VMs. # VM sizes and types for Azure Stack Edge Pro + This article describes the supported sizes for the virtual machines running on your Azure Stack Edge Pro devices. Use this article before you deploy virtual machines on your Azure Stack Edge Pro devices. ## Supported VM sizes
databox-online Azure Stack Edge J Series Configure Gpu Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-configure-gpu-modules.md
Previously updated : 01/04/2021 Last updated : 02/22/2021 # Configure and run a module on GPU on Azure Stack Edge Pro device + Your Azure Stack Edge Pro device contains one or more Graphics Processing Unit (GPU). GPUs are a popular choice for AI computations as they offer parallel processing capabilities and are faster at image rendering than Central Processing Units (CPUs). For more information on the GPU contained in your Azure Stack Edge Pro device, go to [Azure Stack Edge Pro device technical specifications](azure-stack-edge-gpu-technical-specifications-compliance.md). This article describes how to configure and run a module on the GPU on your Azure Stack Edge Pro device. In this article, you will use a publicly available container module **Digits** written for Nvidia T4 GPUs. This procedure can be used to configure any other modules published by Nvidia for these GPUs.
databox-online Azure Stack Edge J Series Configure Tls Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-configure-tls-settings.md
Previously updated : 08/28/2020 Last updated : 02/22/2021 # Configure TLS 1.2 on Windows clients accessing Azure Stack Edge Pro device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
If you are using a Windows client to access your Azure Stack Edge Pro device, you are required to configure TLS 1.2 on your client. This article provides resources and guidelines to configure TLS 1.2 on your Windows client.
databox-online Azure Stack Edge J Series Connect Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-connect-resource-manager.md
# Connect to Azure Resource Manager on your Azure Stack Edge Pro device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
Azure Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. The Azure Stack Edge Pro device supports the same Azure Resource Manager APIs to create, update, and delete VMs in a local subscription. This support lets you manage the device in a manner consistent with the cloud.
databox-online Azure Stack Edge J Series Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-create-iot-edge-module.md
Previously updated : 08/28/2020 Last updated : 03/05/2021 # Develop a C# IoT Edge module to move files on Azure Stack Edge Pro
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article steps you through how to create an IoT Edge module for deployment with your Azure Stack Edge Pro device. Azure Stack Edge Pro is a storage solution that allows you to process data and send it over network to Azure.
databox-online Azure Stack Edge J Series Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-add-shares.md
Previously updated : 01/04/2021 Last updated : 02/22/2021 Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro so I can use it to transfer data to Azure. # Tutorial: Transfer data via shares with Azure Stack Edge Pro GPU
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This tutorial describes how to add and connect to shares on your Azure Stack Edge Pro device. After you've added the shares, Azure Stack Edge Pro can transfer data to Azure.
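For example, connecting to an SMB share from a Windows client usually comes down to mapping the share and copying data into it; the device address, share name, and credentials below are placeholders.

```powershell
# Map the SMB share exposed by the device to a local drive letter, then copy data into it.
net use Z: \\<device-ip>\<share-name> /user:<device-ip>\<share-user>
Copy-Item -Path 'C:\data\*' -Destination 'Z:\' -Recurse
```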
databox-online Azure Stack Edge J Series Deploy Add Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-add-storage-accounts.md
Previously updated : 08/31/2020 Last updated : 02/22/2021 Customer intent: As an IT admin, I need to understand how to add and connect to storage accounts on Azure Stack Edge Pro so I can use it to transfer data to Azure. # Tutorial: Transfer data via storage accounts with Azure Stack Edge Pro GPU
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This tutorial describes how to add and connect to storage accounts on your Azure Stack Edge Pro device. After you've added the storage accounts, Azure Stack Edge Pro can transfer data to Azure.
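As an illustrative sketch only, data is typically pushed into an Edge storage account with a tool such as AzCopy against the device's local blob endpoint; the endpoint format, container, and SAS token below are placeholders rather than values from the article.

```powershell
# Recursively copy a local folder into a container on the Edge storage account.
azcopy copy 'C:\data' 'https://<edge-storage-account>.blob.<device-dns-name>/<container>?<sas-token>' --recursive
```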
databox-online Azure Stack Edge J Series Deploy Stateless Application Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-stateless-application-kubernetes.md
Previously updated : 01/22/2021 Last updated : 03/05/2021 # Deploy a Kubernetes stateless application via kubectl on your Azure Stack Edge Pro GPU device + This article describes how to deploy a stateless application using kubectl commands on an existing Kubernetes cluster. This article also walks you through the process of creating and setting up pods in your stateless application. ## Prerequisites
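A minimal sketch of a stateless deployment, assuming kubectl is already pointed at the device's Kubernetes cluster; the deployment name and image are illustrative.

```powershell
# Create a stateless deployment, verify its pods, and expose it on a node port.
kubectl create deployment web-app --image=nginx
kubectl get pods -l app=web-app
kubectl expose deployment web-app --type=NodePort --port=80
```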
databox-online Azure Stack Edge J Series Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-cli-python.md
Previously updated : 01/22/2021 Last updated : 03/04/2021 #Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs. # Deploy VMs on your Azure Stack Edge Pro GPU device using Azure CLI and Python
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
[!INCLUDE [azure-stack-edge-gateway-deploy-virtual-machine-overview](../../includes/azure-stack-edge-gateway-deploy-virtual-machine-overview.md)]
databox-online Azure Stack Edge J Series Manage Bandwidth Schedules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-bandwidth-schedules.md
Previously updated : 01/05/2021 Last updated : 02/22/2021 # Use the Azure portal to manage bandwidth schedules on your Azure Stack Edge Pro GPU
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to manage bandwidth schedules on your Azure Stack Edge Pro. Bandwidth schedules allow you to configure network bandwidth usage across multiple time-of-day schedules. These schedules can be applied to the upload and download operations from your device to the cloud.
databox-online Azure Stack Edge J Series Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-compute.md
Previously updated : 01/27/2021 Last updated : 03/04/2021 # Manage compute on your Azure Stack Edge Pro GPU
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to manage compute via the IoT Edge service on your Azure Stack Edge Pro GPU device. You can manage the compute via the Azure portal or via the local web UI. Use the Azure portal to manage modules, triggers, and IoT Edge configuration, and the local web UI to manage compute network settings.
Take the following steps in the Azure portal to create a trigger.
1. In the Azure portal, go to your Azure Stack Edge resource and then go to **IoT Edge**. Go to **Triggers** and select **+ Add trigger** on the command bar.
- ![Select add trigger](media/azure-stack-edge-j-series-manage-compute/add-trigger-1m.png)
+ ![Select add trigger](media/azure-stack-edge-j-series-manage-compute/add-trigger-1-m.png)
2. In the **Add trigger** blade, provide a unique name for your trigger.
databox-online Azure Stack Edge J Series Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-shares.md
Previously updated : 01/04/2021 Last updated : 02/22/2021 # Use Azure portal to manage shares on your Azure Stack Edge Pro
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to manage shares on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, delete, or refresh shares, or to sync the storage key for the storage account associated with the shares. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
databox-online Azure Stack Edge J Series Manage Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-storage-accounts.md
Previously updated : 08/28/2020 Last updated : 02/22/2021 # Use the Azure portal to manage Edge storage accounts on your Azure Stack Edge Pro
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to manage Edge storage accounts on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add or delete Edge storage accounts on your device.
databox-online Azure Stack Edge J Series Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-manage-users.md
Previously updated : 01/05/2021 Last updated : 02/21/2021 # Use the Azure portal to manage users on your Azure Stack Edge Pro
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to manage users on your Azure Stack Edge Pro. You can manage the Azure Stack Edge Pro via the Azure portal or via the local web UI. Use the Azure portal to add, modify, or delete users.
databox-online Azure Stack Edge J Series Set Azure Resource Manager Password https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-set-azure-resource-manager-password.md
Previously updated : 01/27/2021 Last updated : 02/21/2021 #Customer intent: As an IT admin, I need to understand how to connect to Azure Resource Manager on my Azure Stack Edge Pro device so that I can manage resources. # Set Azure Resource Manager password on Azure Stack Edge Pro GPU device
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
This article describes how to set your Azure Resource Manager password. You need to set this password when you connect to the device's local APIs via Azure Resource Manager.
databox-online Azure Stack Edge Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-monitor.md
Previously updated : 02/02/2021 Last updated : 03/04/2021 # Monitor your Azure Stack Edge Pro + This article describes how to monitor your Azure Stack Edge Pro. To monitor your device, you can use the Azure portal or the local web UI. Use the Azure portal to view device events, configure and manage alerts, and view metrics. Use the local web UI on your physical device to view the hardware status of the various device components. In this article, you learn how to:
databox-online Azure Stack Edge Replace Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-replace-device.md
Previously updated : 01/22/2021 Last updated : 02/22/2021 # Replace your Azure Stack Edge Pro device + This article describes how to replace your Azure Stack Edge Pro device. A replacement device is needed when the existing device has a hardware failure or needs an upgrade.
databox-online Azure Stack Edge Return Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-return-device.md
# Return your Azure Stack Edge Pro device + This article describes how to wipe the data and then return your Azure Stack Edge Pro device. After you've returned the device, you can also delete the resource associated with the device. In this article, you learn how to:
You can initiate the device return even before the device is reset.
You can reset your device in the local web UI or in PowerShell. For PowerShell instructions, see [Reset your device](./azure-stack-edge-connect-powershell-interface.md#reset-your-device). - [!INCLUDE [Reset data from the device](../../includes/azure-stack-edge-device-reset.md)] > [!NOTE]
databox-online Azure Stack Edge Troubleshoot Ordering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-troubleshoot-ordering.md
Previously updated : 08/11/2020 Last updated : 02/22/2021 # Troubleshoot your Azure Stack Edge Pro ordering issues + This article describes how to troubleshoot Azure Stack Edge Pro ordering issues. In this tutorial, you learn how to:
databox Data Box Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-quickstart-portal.md
Previously updated : 09/03/2019 Last updated : 03/05/2021 ms.localizationpriority: high #Customer intent: As an IT admin, I need to quickly deploy Data Box so as to import data into Azure.
This guide describes how to deploy the Azure Data Box for import using the Azure
Before you begin: - Make sure that the subscription you use for Data Box service is one of the following types:
- - Microsoft Enterprise Agreement (EA). Read more about [EA subscriptions](https://azure.microsoft.com/pricing/enterprise-agreement/).
+ - Microsoft Customer Agreement (MCA) for new subscriptions or Microsoft Enterprise Agreement (EA) for existing subscriptions. Read more about [MCA for new subscriptions](https://www.microsoft.com/licensing/how-to-buy/microsoft-customer-agreement) and [EA subscriptions](https://azure.microsoft.com/pricing/enterprise-agreement/).
- Cloud Solution Provider (CSP). Learn more about [Azure CSP program](/azure/cloud-solution-provider/overview/azure-csp-overview). - Microsoft Azure Sponsorship. Learn more about [Azure sponsorship program](https://azure.microsoft.com/offers/ms-azr-0036p/).
In this quickstart, you've deployed an Azure Data Box to help import your data
> [!div class="nextstepaction"] > [Use the Azure portal to administer Data Box](data-box-portal-admin.md)
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
event-grid Event Schema App Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-app-service.md
Title: Azure App Service as Event Grid source
description: This article describes how to use Azure App Service as an Event Grid event source. It provides the schema and links to tutorial and how-to articles. Previously updated : 02/12/2021 Last updated : 03/06/2021
This section contains an example of what that data would look like for each even
"appEventTypeDetail": { "action": "Started" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "None", "correlationRequestId": "None", "requestId": "292f499d-04ee-4066-994d-c2df57b99198",
This section contains an example of what that data would look like for each even
"appEventTypeDetail": { "action": "Started" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "None", "correlationRequestId": "None", "requestId": "292f499d-04ee-4066-994d-c2df57b99198",
The data object contains the following properties:
"appEventTypeDetail": { "action": "Started" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "None", "correlationRequestId": "None", "requestId": "292f499d-04ee-4066-994d-c2df57b99198",
The data object contains the following properties:
"appEventTypeDetail": { "action": "Started" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "None", "correlationRequestId": "None", "requestId": "292f499d-04ee-4066-994d-c2df57b99198",
The data object contains the following properties:
"eventTime": "2020-01-28T18:26:51.7194887Z", "data": { "appEventTypeDetail": null,
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "922f4841-20d9-4dd6-8c5b-23f0d85e5592", "correlationRequestId": "9ac46505-2b8a-4e06-834c-05ffbe2e8c3a", "requestId": "765117aa-eaf8-4bd2-a644-1dbf69c7b0fd",
The data object contains the following properties:
"time": "2020-01-28T18:26:51.7194887Z", "data": { "appEventTypeDetail": null,
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "922f4841-20d9-4dd6-8c5b-23f0d85e5592", "correlationRequestId": "9ac46505-2b8a-4e06-834c-05ffbe2e8c3a", "requestId": "765117aa-eaf8-4bd2-a644-1dbf69c7b0fd",
The data object contains the following properties:
"eventTime": "2020-01-28T18:26:51.7194887Z", "data": { "appEventTypeDetail": null,
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "922f4841-20d9-4dd6-8c5b-23f0d85e5592", "correlationRequestId": "9ac46505-2b8a-4e06-834c-05ffbe2e8c3a", "requestId": "765117aa-eaf8-4bd2-a644-1dbf69c7b0fd",
The data object contains the following properties:
"time": "2020-01-28T18:26:51.7194887Z", "data": { "appEventTypeDetail": null,
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "922f4841-20d9-4dd6-8c5b-23f0d85e5592", "correlationRequestId": "9ac46505-2b8a-4e06-834c-05ffbe2e8c3a", "requestId": "765117aa-eaf8-4bd2-a644-1dbf69c7b0fd",
The data object contains the following properties:
"appEventTypeDetail": { "action": "Stopped" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "64a5e0aa-7cee-4ff1-9093-b9197b820014", "correlationRequestId": "25bb36a5-8f6c-4f04-b615-e9a0ee045756", "requestId": "f2e8eb3f-b190-42de-b99e-6acefe587374",
The data object contains the following properties:
"appEventTypeDetail": { "action": "Stopped" },
- "siteName": "<site-name>",
+ "name": "<site-name>",
"clientRequestId": "64a5e0aa-7cee-4ff1-9093-b9197b820014", "correlationRequestId": "25bb36a5-8f6c-4f04-b615-e9a0ee045756", "requestId": "f2e8eb3f-b190-42de-b99e-6acefe587374",
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-coexist-resource-manager.md
Previously updated : 12/11/2019 Last updated : 03/06/2021
The steps to configure both scenarios are covered in this article. This article
> ## Limits and limitations
-* **Transit routing is not supported.** You cannot route (via Azure) between your local network connected via Site-to-Site VPN and your local network connected via ExpressRoute.
-* **Basic SKU gateway is not supported.** You must use a non-Basic SKU gateway for both the [ExpressRoute gateway](expressroute-about-virtual-network-gateways.md) and the [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
* **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).
-* **Static route should be configured for your VPN gateway.** If your local network is connected to both ExpressRoute and a Site-to-Site VPN, you must have a static route configured in your local network to route the Site-to-Site VPN connection to the public Internet.
-* **VPN Gateway defaults to ASN 65515 if not specified.** Azure VPN Gateway supports the BGP routing protocol. You can specify ASN (AS Number) for a virtual network by adding the -Asn switch. If you don't specify this parameter, the default AS number is 65515. You can use any ASN for the configuration, but if you select something other than 65515, you must reset the gateway for the setting to take effect.
+* **The ASN of Azure VPN Gateway must be set to 65515.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect.
* **The gateway subnet must be /27 or a shorter prefix** (such as /26 or /25), or you will receive an error message when you add the ExpressRoute virtual network gateway. * **Coexistence in a dual-stack vnet is not supported.** If you are using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway will not be possible.
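To verify the ASN requirement called out in the limitations above, a minimal Az PowerShell sketch (gateway and resource group names are placeholders) is:

```azurepowershell-interactive
# Check that the VPN gateway is using the default ASN of 65515 before adding the ExpressRoute gateway.
$gw = Get-AzVirtualNetworkGateway -Name '<vpn-gateway-name>' -ResourceGroupName '<resource-group>'
$gw.BgpSettings.Asn
```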
The steps to configure both scenarios are covered in this article. This article
### Configure a Site-to-Site VPN as a failover path for ExpressRoute You can configure a Site-to-Site VPN connection as a backup for ExpressRoute. This connection applies only to virtual networks linked to the Azure private peering path. There is no VPN-based failover solution for services accessible through Azure Microsoft peering. The ExpressRoute circuit is always the primary link. Data flows through the Site-to-Site VPN path only if the ExpressRoute circuit fails. To avoid asymmetrical routing, your local network configuration should also prefer the ExpressRoute circuit over the Site-to-Site VPN. You can prefer the ExpressRoute path by setting a higher local preference for the routes received over ExpressRoute.
+>[!NOTE]
+> If you have ExpressRoute Microsoft Peering enabled, you can receive the public IP address of your Azure VPN gateway on the ExpressRoute connection. To set up your site-to-site VPN connection as a backup, you must configure your on-premises network so that the VPN connection is routed to the Internet.
+>
+ > [!NOTE] > While the ExpressRoute circuit is preferred over Site-to-Site VPN when both routes are the same, Azure will use the longest prefix match to choose the route towards the packet's destination. >
You can follow the steps below to add Point-to-Site configuration to your VPN ga
Add-AzVpnClientRootCertificate -VpnClientRootCertificateName $p2sCertFullName -VirtualNetworkGatewayname $azureVpn.Name -ResourceGroupName $resgrp.ResourceGroupName -PublicCertData $p2sCertData ```
+## To enable transit routing between ExpressRoute and Azure VPN
+If you want to enable connectivity between one of your local networks that is connected to ExpressRoute and another of your local networks that is connected to a site-to-site VPN, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
+ For more information on Point-to-Site VPN, see [Configure a Point-to-Site connection](../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md). ## Next steps
germany Germany Migration Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-storage.md
To begin, install [Azure Storage Explorer](https://azure.microsoft.com/features/
You use Storage Explorer to copy tables from the source Azure Storage account.
-Connect Storage Explorer to the your source table resources in Microsoft Azure Germany. You can [sign in to access resources in your subscription](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) or you can [attach to specific Storage resources](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#attach-a-specific-resource).
+Connect Storage Explorer to your source table resources in Microsoft Azure Germany. You can [sign in to access resources in your subscription](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) or you can [attach to specific Storage resources](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#attach-to-an-individual-resource).
### Connect to target You use Storage Explorer to paste tables to the target Azure Storage account.
-Connect Storage Explorer to your target Microsoft Azure subscription or Azure Storage. You can [sign in to access resources in your subscription](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) or you can [attach to specific Storage resources](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#attach-a-specific-resource).
+Connect Storage Explorer to your target Microsoft Azure subscription or Azure Storage. You can [sign in to access resources in your subscription](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) or you can [attach to specific Storage resources](../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#attach-to-an-individual-resource).
### Migrate tables
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/determine-non-compliance.md
setting.
:::image type="content" source="../media/determine-non-compliance/guestconfig-compliance-details.png" alt-text="Screenshot of the Guest Assignment compliance details." border="false":::
-### Azure PowerShell
-
-You can also view compliance details from Azure PowerShell. First, make sure you have the Guest
-Configuration module installed.
-
-```azurepowershell-interactive
-Install-Module Az.GuestConfiguration
-```
-
-You can view the current status of all Guest Assignments for a VM using the following command:
-
-```azurepowershell-interactive
-Get-AzVMGuestPolicyStatus -ResourceGroupName <resourcegroupname> -VMName <vmname>
-```
-
-```output
-PolicyDisplayName                                               ComplianceReasons
------------------                                               -----------------
-Audit that an application is installed inside Windows VMs {[InstalledApplication]bwhitelistedapp}
-Audit that an application is not installed inside Windows VMs. {[InstalledApplication]NotInstalledApplica...
-```
-
-To view only the _reason_ phrase that describes why the VM is _Non-compliant_, return only the
-Reason child property.
-
-```azurepowershell-interactive
-Get-AzVMGuestPolicyStatus -ResourceGroupName <resourcegroupname> -VMName <vmname> | % ComplianceReasons | % Reasons | % Reason
-```
-
-```output
-The following applications are not installed: '<name>'.
-```
-
-You can also output a compliance history for Guest Assignments in scope for the machine. The output
-from this command includes the details of each report for the VM.
-
-> [!NOTE]
-> The output may return a large volume of data. It's recommended to store the output in a variable.
-
-```azurepowershell-interactive
-$guestHistory = Get-AzVMGuestPolicyStatusHistory -ResourceGroupName <resourcegroupname> -VMName <vmname>
-$guestHistory
-```
-
-```output
-PolicyDisplayName                                                     ComplianceStatus ComplianceReasons StartTime              EndTime                VMName LatestReportId
------------------                                                     ---------------- ----------------- ---------              -------                ------ --------------
-[Preview]: Audit that an application is installed inside Windows VMs NonCompliant 02/10/2019 12:00:38 PM 02/10/2019 12:00:41 PM VM01 ../17fg0...
-<truncated>
-```
-
-To simplify this view, use the **ShowChanged** parameter. The output from this command only includes
-the reports that followed a change in compliance status.
-
-```azurepowershell-interactive
-$guestHistory = Get-AzVMGuestPolicyStatusHistory -ResourceGroupName <resourcegroupname> -VMName <vmname> -ShowChanged
-$guestHistory
-```
-
-```output
-PolicyDisplayName                                                     ComplianceStatus ComplianceReasons StartTime              EndTime                VMName LatestReportId
------------------                                                     ---------------- ----------------- ---------              -------                ------ --------------
-Audit that an application is installed inside Windows VMs NonCompliant 02/10/2019 10:00:38 PM 02/10/2019 10:00:41 PM VM01 ../12ab0...
-Audit that an application is installed inside Windows VMs. Compliant 02/09/2019 11:00:38 AM 02/09/2019 11:00:39 AM VM01 ../e3665...
-Audit that an application is installed inside Windows VMs NonCompliant 02/09/2019 09:00:20 AM 02/09/2019 09:00:23 AM VM01 ../15ze1...
-```
- ## <a name="change-history"></a>Change history (Preview) As part of a new **public preview**, the last 14 days of change history are available for all Azure
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
initiative definition.
||||| |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | |[Azure Cache for Redis should reside within a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d092e0a-7acd-40d2-a975-dca21cae48c4) |Azure Virtual Network deployment provides enhanced security and isolation for your Azure Cache for Redis, as well as subnets, access control policies, and other features to further restrict access.When an Azure Cache for Redis instance is configured with a virtual network, it is not publicly addressable and can only be accessed from virtual machines and applications within the virtual network. |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_CacheInVnet_Audit.json) |
-|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your Event Grid domains instead of the entire service, you'll also be protected against data leakage risks.Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridDomains_EnablePrivateEndpoint_Audit.json) |
-|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your topics instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridTopics_EnablePrivateEndpoint_Audit.json) |
+|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
+|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Machine Learning workspaces instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/azureml-workspaces-privatelink](https://aka.ms/azureml-workspaces-privatelink). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateLinkEnabled_Audit.json) | |[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your SignalR resources instead of the entire service, you'll also be protected against data leakage risks .Learn more at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |
-|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your Event Grid domains instead of the entire service, you'll also be protected against data leakage risks.Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridDomains_EnablePrivateEndpoint_Audit.json) |
-|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your topics instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/EventGridTopics_EnablePrivateEndpoint_Audit.json) |
+|[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) |
+|[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure Machine Learning workspaces instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/azureml-workspaces-privatelink](https://aka.ms/azureml-workspaces-privatelink). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateLinkEnabled_Audit.json) | |[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F53503636-bcc9-4748-9663-5348217f160f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your SignalR resources instead of the entire service, you'll also be protected against data leakage risks .Learn more at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network.By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
-|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
-|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ### Manage application identities securely and automatically
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
-|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
-|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
|[Service principals should be used to protect your subscriptions instead of management certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6646a0bd-e110-40ca-bb97-84fcee63c414) |Management certificates allow anyone who authenticates with them to manage the subscription(s) they are associated with. To manage subscriptions more securely, use of service principals with Resource Manager is recommended to limit the impact of a certificate compromise. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UseServicePrincipalToProtectSubscriptions.json) | ### Use strong authentication controls for all Azure Active Directory based access
initiative definition.
|[Azure Machine Learning workspaces should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba769a63-b8cc-4b2d-abf6-ac33c7204be8) |Manage encryption at rest of Azure Machine Learning workspace data with customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/azureml-workspaces-cmk](https://aka.ms/azureml-workspaces-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_CMKEnabled_Audit.json) | |[Bring your own key data protection should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) | |[Bring your own key data protection should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed key encryption at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Cognitive Services accounts should use customer owned storage or enable data encryption.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11566b39-f7f7-4b82-ab06-68d8700eb0a4) |This policy audits any Cognitive Services account not using customer owned storage nor data encryption. For each Cognitive Services account with storage, use either customer owned storage or enable data encryption. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_BYOX_Audit.json) | |[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | |[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |

### Detection and analysis - create incidents based on high quality alerts
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
-|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
-|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
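The three managed identity definitions above audit App Service apps that run without a managed identity. As a hedged sketch of the setting they check for, the snippet below turns on a system-assigned identity for an app with an ARM `PATCH`; the resource names and the `api-version` value are placeholders or assumptions.

```python
# Sketch: enable a system-assigned managed identity on an App Service app via ARM.
# Resource names and api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, APP = "<subscription-id>", "<resource-group>", "<app-name>"  # placeholders
API_VERSION = "2020-12-01"                                            # assumed valid for Microsoft.Web

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.Web/sites/{APP}?api-version={API_VERSION}"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Adding the identity block is roughly the state the AuditIfNotExists definitions above look for.
resp = requests.patch(
    url,
    json={"identity": {"type": "SystemAssigned"}},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json().get("identity", {}).get("principalId"))
```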
## Malware Defense
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark
-description: Details of the CIS Microsoft Azure Foundations Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 02/09/2021
+ Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0
+description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated : 03/05/2021
-# Details of the CIS Microsoft Azure Foundations Benchmark Regulatory Compliance built-in initiative
+# Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative
The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark.
+definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark 1.1.0.
For more information about this compliance standard, see
-[CIS Microsoft Azure Foundations Benchmark](https://www.cisecurity.org/benchmark/azure/). To understand
+[CIS Microsoft Azure Foundations Benchmark 1.1.0](https://www.cisecurity.org/benchmark/azure/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **CIS Microsoft Azure Foundations Benchmark** controls. Use the
+The following mappings are to the **CIS Microsoft Azure Foundations Benchmark 1.1.0** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
Then, find and select the **CIS Microsoft Azure Foundations Benchmark 1.1.0** Regulatory Compliance built-in
initiative definition. This built-in initiative is deployed as part of the
-[CIS Microsoft Azure Foundations Benchmark blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
+[CIS Microsoft Azure Foundations Benchmark 1.1.0 blueprint sample](../../blueprints/samples/cis-azure-1-1-0.md).
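If you prefer not to browse the portal, the following sketch lists the built-in policy set definitions through the ARM REST API and picks out the CIS Microsoft Azure Foundations Benchmark initiative by display name. The `api-version` is an assumption, and the snippet ignores paging for brevity.

```python
# Sketch: find the CIS Microsoft Azure Foundations Benchmark built-in initiative
# (policy set definition) without the portal. api-version is an assumption.
import requests
from azure.identity import DefaultAzureCredential

API_VERSION = "2020-09-01"  # assumed supported api-version
url = ("https://management.azure.com/providers/Microsoft.Authorization/"
       f"policySetDefinitions?api-version={API_VERSION}")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# First page of built-in policy set definitions; paging (nextLink) is ignored for brevity.
definitions = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()["value"]
cis = [d for d in definitions
       if "CIS Microsoft Azure Foundations Benchmark" in d["properties"].get("displayName", "")]
for d in cis:
    # Each initiative bundles many policy definition references, one or more per mapped control.
    print(d["properties"]["displayName"], "-", len(d["properties"]["policyDefinitions"]), "definitions")
```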
> [!IMPORTANT]
> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
## Storage Accounts
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](https://docs.microsoft.com/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |

### Ensure that Activity Log Alert exists for Create Policy Assignment
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |

### Enable role-based access control (RBAC) within Azure Kubernetes Services
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
-|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
-|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Managed identity should be used in your API App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4d441f8-f9d9-4a9e-9cef-e82117cb3eef) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_ApiApp_Audit.json) |
+|[Managed identity should be used in your Function App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Managed identity should be used in your Web App](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
### Ensure that 'PHP version' is the latest, if used to run the web app
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
initiative definition.
|[The Log Analytics agent should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) | |[The Log Analytics agent should be installed on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics agent is not installed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
-### Provide a system capability that compares and synchronizes internal system clocks with an authoritative source to generate time stamps for audit records.
-
-**ID**: CMMC L3 AU.2.043
-**Ownership**: Shared
-
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
-|||||
-|[Audit Windows machines that are not set to the specified time zone](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc633f6a2-7f8b-4d9e-9456-02f0f04f5505) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the value of the property StandardName in WMI class Win32_TimeZone does not match the selected time zone for the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsTimeZone_AINE.json) |
- ### Alert in the event of an audit logging process failure. **ID**: CMMC L3 AU.3.046
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
|[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) |
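A hedged sketch of satisfying the Security Center notification policies above: it writes a `Microsoft.Security/securityContacts` resource with a contact email, high-severity alert notifications, and owner-role notifications. The `api-version` and the exact property shape are assumptions and may differ between API versions; the email address is a placeholder.

```python
# Sketch: set a Security Center security contact so the email-notification policies above
# report compliant. api-version and the property shape shown here are assumptions.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"              # placeholder
API_VERSION = "2020-01-01-preview"     # assumed api-version
url = (f"https://management.azure.com/subscriptions/{SUB}/providers/"
       f"Microsoft.Security/securityContacts/default?api-version={API_VERSION}")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "properties": {
        "emails": "secops@contoso.com",                                   # placeholder contact address
        "alertNotifications": {"state": "On", "minimalSeverity": "High"}, # email for high severity alerts
        "notificationsByRole": {"state": "On", "roles": ["Owner"]},       # notify subscription owners
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```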
initiative definition.
|[Both operating systems and data disks in Azure Kubernetes Service clusters should be encrypted by customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7be79c-23ba-4033-84dd-45e2a5ccdd67) |Encrypting OS and data disks using customer-managed keys provides more control and greater flexibility in key management. This is a common requirement in many regulatory and industry compliance standards. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_CMK_Deny.json) | |[Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) |Manage your organizational compliance requirements by specifying a minimum key size for RSA certificates stored in your key vault. |audit, deny, disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_RSA_MinimumKeySize.json) | |[Cognitive Services accounts should enable data encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2bdd0062-9d75-436e-89df-487dd8e4b3c7) |This policy audits any Cognitive Services account not using data encryption. For each Cognitive Services account with storage, should enable data encryption with either customer managed or Microsoft managed key. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_Encryption_Audit.json) |
-|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed key encryption at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
+|[Cognitive Services accounts should enable data encryption with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67121cc7-ff39-4ab8-b7e3-95b84dab487d) |Customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data stored in Cognitive Services to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about customer-managed keys at [https://go.microsoft.com/fwlink/?linkid=2121321](https://go.microsoft.com/fwlink/?linkid=2121321). |Audit, Deny, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_CustomerManagedKey_Audit.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) | |[Disk encryption should be applied on virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |Virtual machines without an enabled disk encryption will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | |[Disk encryption should be enabled on Azure Data Explorer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff4b53539-8df9-40e4-86c6-6b607703bd4e) |Enabling disk encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Data%20Explorer/ADX_disk_encrypted.json) |
initiative definition.
||||| |[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Audit Windows machines that have extra accounts in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d2a3320-2a72-4c67-ac5f-caa40fbee2b2) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group contains members that are not listed in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembers_AINE.json) |
|[Audit Windows machines that have the specified members in the Administrators group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F69bf4abd-ca1e-4cf6-8b5a-762d42e61d4f) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the local Administrators group contains one or more of the members listed in the policy parameter. |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AdministratorsGroupMembersToExclude_AINE.json) |
|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
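To illustrate the "maximum of 3 owners" recommendation above, the sketch below counts Owner role assignments visible at subscription scope. The `api-version` is an assumption; the GUID is the well-known built-in Owner role definition ID, and paging is ignored for brevity.

```python
# Sketch: count Owner role assignments at subscription scope (includes inherited assignments).
# api-version is an assumption; the GUID is the built-in Owner role definition ID.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"                                  # placeholder
OWNER_ROLE_DEF = "8e3af657-a8ff-443c-a75c-2fe8c4bcb635"    # built-in Owner role definition ID
API_VERSION = "2020-04-01-preview"                         # assumed api-version

url = (f"https://management.azure.com/subscriptions/{SUB}/providers/"
       f"Microsoft.Authorization/roleAssignments?api-version={API_VERSION}")
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

assignments = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()["value"]
owners = [a for a in assignments
          if a["properties"]["roleDefinitionId"].lower().endswith(OWNER_ROLE_DEF)]
print(f"Owner role assignments: {len(owners)} (recommendation: 3 or fewer)")
```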
initiative definition.
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | |[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | |[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
|[Flow log should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow log resource is configured. Flow log allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) | |[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights to your network in Azure. |auditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | |[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
initiative definition.
|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) | |[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | |[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights to your network in Azure. |auditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |

> [!NOTE]
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](https://docs.microsoft.com/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[4.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |

## Monitoring System Use
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
+|[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |

### Business impact analysis are used to evaluate the consequences of disasters, security failures, loss of service, and service availability.
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM
-description: Details of the New Zealand ISM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 02/09/2021
+ Title: Regulatory Compliance details for New Zealand ISM Restricted
+description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated : 03/05/2021
-# Details of the New Zealand ISM Regulatory Compliance built-in initiative
+# Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative
The following article details how the Azure Policy Regulatory Compliance built-in initiative
-definition maps to **compliance domains** and **controls** in New Zealand ISM.
+definition maps to **compliance domains** and **controls** in New Zealand ISM Restricted.
For more information about this compliance standard, see
-[New Zealand ISM](https://www.nzism.gcsb.govt.nz/). To understand
+[New Zealand ISM Restricted](https://www.nzism.gcsb.govt.nz/). To understand
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **New Zealand ISM** controls. Use the
+The following mappings are to the **New Zealand ISM Restricted** controls. Use the
navigation on the right to jump directly to a specific **compliance domain**. Many of the controls are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
initiative definition.
## Access Control and Passwords
-### 16.1.32 System User Identitfication
+### 16.1.32 System User Identification
**ID**: NZISM Security Benchmark AC-2 **Ownership**: Customer
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
initiative definition.
||||| |[Advanced data security should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) | |[Advanced data security should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
-|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. Network diagnostic and visualization tools available with Network Watcher help you understand, diagnose, and gain insights to your network in Azure. |auditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | |[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 R4 description: Details of the NIST SP 800-53 R4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/09/2021 Last updated : 03/05/2021
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
iot-hub-device-update Device Update Raspberry Pi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-raspberry-pi.md
Title: Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Yocto Image | Microsoft Docs description: Get started with Device Update for Azure IoT Hub using the Raspberry Pi 3 B+ Reference Yocto Image. Last updated 2/11/2021
Use that version number in the Import Update step below.
## Import update
-1. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
-
-2. Select the Updates tab.
-
-3. Select "+ Import New Update".
-
-4. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the apt manifest update file you downloaded previously.
+1. Create an Import Manifest following these [instructions](import-update.md).
+2. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+3. Select the Updates tab.
+4. Select "+ Import New Update".
+5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you created above. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the update file that you wish to deploy to your IoT devices.
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu (18.04 x64) Simulator Reference Agent | Microsoft Docs description: Get started with Device Update for Azure IoT Hub using the Ubuntu (18.04 x64) Simulator Reference Agent. Last updated 2/11/2021
Agent running. [main]
## Import update
-1. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
+1. Create an Import Manifest following these [instructions](import-update.md).
+2. Select the Device Updates option under Automatic Device Management from the left-hand navigation bar.
-2. Select the Updates tab.
+3. Select the Updates tab.
-3. Select "+ Import New Update".
+4. Select "+ Import New Update".
+
+5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you created above. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the Ubuntu update image that you downloaded earlier.
-4. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the apt manifest update file you downloaded previously.
-
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
-5. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
+6. Select the folder icon or text box under "Select a storage container". Then select the appropriate storage account.
-6. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.). Select the container you wish to use and click "Select".
+7. If you've already created a container, you can reuse it. (Otherwise, select "+ Container" to create a new storage container for updates.). Select the container you wish to use and click "Select".
:::image type="content" source="media/import-update/container.png" alt-text="Screenshot showing container selection." lightbox="media/import-update/container.png":::
-7. Select "Submit" to start the import process.
+8. Select "Submit" to start the import process.
-8. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
+9. The import process begins, and the screen changes to the "Import History" section. Select "Refresh" to view progress until the import process completes. Depending on the size of the update, this may complete in a few minutes but could take longer.
:::image type="content" source="media/import-update/update-publishing-sequence-2.png" alt-text="Screenshot showing update import sequence." lightbox="media/import-update/update-publishing-sequence-2.png":::
-9. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
+10. When the Status column indicates the import has succeeded, select the "Ready to Deploy" header. You should see your imported update in the list now.
[Learn more](import-update.md) about importing updates.
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
# Import New Update
-Learn how to import a new update into Device Update for IoT Hub.
+Learn how to import a new update into Device Update for IoT Hub. If you haven't already done so, be sure to familiarize yourself with the basic [import concepts](import-concepts.md).
## Prerequisites
a location accessible from PowerShell (once the zip file is downloaded, right cl
| | -- |
| deviceManufacturer | Manufacturer of the device the update is compatible with, for example, Contoso
| deviceModel | Model of the device the update is compatible with, for example, Toaster
- | updateProvider | Provider part of update identity, for example, Fabrikam
- | updateName | Name part of update identity, for example, ImageUpdate
- | updateVersion | Update version, for example, 2.0
+ | updateProvider | Entity who is creating or directly responsible for the update. It will often be a company name.
+ | updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name.
+ | updateVersion | Version number distinguishing this update from others that have the same Provider and Name. May or may not match a version of an individual software component on the device.
| updateType | <ul><li>Specify `microsoft/swupdate:1` for image update</li><li>Specify `microsoft/apt:1` for package update</li></ul>
| installedCriteria | <ul><li>Specify value of SWVersion for `microsoft/swupdate:1` update type</li><li>Specify recommended value for `microsoft/apt:1` update type.</li></ul>
| updateFilePath(s) | Path to the update file(s) on your computer
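As a rough illustration of how the fields in this table fit together, the following Python sketch assembles those values into a dictionary and writes it out as JSON. The grouping, file name, and values shown here are assumptions for illustration only; the authoritative import manifest schema is described in the import instructions linked above.

```python
import json

# Illustrative values only; replace them with details for your own device and update.
# The grouping below mirrors the table above, not necessarily the exact manifest schema.
manifest = {
    "updateId": {
        "provider": "Fabrikam",    # updateProvider: entity responsible for the update
        "name": "ImageUpdate",     # updateName: identifier for a class of updates
        "version": "2.0",          # updateVersion: distinguishes updates with the same provider and name
    },
    "compatibility": [
        {"deviceManufacturer": "Contoso", "deviceModel": "Toaster"}
    ],
    "updateType": "microsoft/swupdate:1",  # or "microsoft/apt:1" for a package update
    "installedCriteria": "1.0",            # for example, the SWVersion for swupdate
    "files": ["./my-update.swu"],          # updateFilePath(s)
}

with open("importManifest.json", "w") as manifest_file:
    json.dump(manifest, manifest_file, indent=2)
```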
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
key-vault Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
logic-apps Create Stateful Stateless Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-stateful-stateless-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 03/02/2021 Last updated : 03/05/2021 # Create stateful and stateless workflows in Visual Studio Code with the Azure Logic Apps (Preview) extension
Before you can create your logic app, create a local project so that you can man
1. Replace the `AzureWebJobsStorage` property value with the storage account's connection string that you saved earlier, for example: Before:+ ```json { "IsEncrypted": false,
Before you can create your logic app, create a local project so that you can man
``` After:+ ```json { "IsEncrypted": false,
Before you can create your logic app, create a local project so that you can man
1. When you're done, make sure that you save your changes.
+<a name="enable-built-in-connector-authoring"></a>
+
+## Enable built-in connector authoring
+
+You can create your own built-in connectors for any service you need by using the [preview release's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Like built-in connectors such as Azure Service Bus and SQL Server, these connectors provide higher throughput, low latency, and local connectivity, and they run natively in the same process as the preview runtime.
+
+The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, you need to first convert your project from extension bundle-based (Node.js) to NuGet package-based (.NET).
+
+1. In the Explorer pane, at your project's root, move your mouse pointer over any blank area below all the other files and folders, open the shortcut menu, and select **Convert to Nuget-based Logic App project**.
+
+ ![Screenshot that shows the Explorer pane with the project's shortcut menu opened from a blank area in the project window.](./media/create-stateful-stateless-workflows-visual-studio-code/convert-logic-app-project.png)
+
+1. When the prompt appears, confirm the project conversion.
+
+1. To continue, review and follow the steps in the article, [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+ <a name="open-workflow-definition-designer"></a> ## Open the workflow definition file in the designer
When you try to start a debugging session, you get the error, **"Error exists af
1. In the following task, delete the line, `"dependsOn: "generateDebugSymbols"`, along with the comma that ends the preceding line, for example: Before:+ ```json { "type": "func",
When you try to start a debugging session, you get the error, **"Error exists af
``` After:+ ```json { "type": "func",
logic-apps Logic Apps Overview Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-overview-preview.md
ms.suite: integration Previously updated : 03/02/2021 Last updated : 03/05/2021 # Overview: Azure Logic Apps Preview
This table specifies the child workflow's behavior based on whether the parent a
Azure Logic Apps Preview includes many current and additional capabilities, for example:
-* Create logic apps and their workflows from [390+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
+* Create logic apps and their workflows from [400+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) apps and services plus connectors for on-premises systems.
- * Some managed connectors such as Azure Service Bus, Azure Event Hubs, and SQL Server run similarly to the built-in triggers and actions that are native to the Azure Logic Apps Preview runtime, for example, the Request trigger and HTTP action. For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+ * Some managed connectors, such as Azure Service Bus, Azure Event Hubs, SQL Server, and MQ, run similarly to the built-in triggers and actions that are native to the Azure Logic Apps Preview runtime, for example, the Request trigger and HTTP action.
+
+ * Create your own built-in connectors for any service you need by using the [preview release's extensibility framework](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272). Like built-in connectors such as Azure Service Bus and SQL Server, but unlike [custom connectors](../connectors/apis-list.md#custom-apis-and-connectors), which aren't currently supported for preview, these connectors provide higher throughput, low latency, and local connectivity, and they run natively in the same process as the preview runtime.
+
+ The authoring capability is currently available only in Visual Studio Code, but isn't enabled by default. To create these connectors, [switch your project from extension bundle-based (Node.js) to NuGet package-based (.NET)](create-stateful-stateless-workflows-visual-studio-code.md#enable-built-in-connector-authoring). For more information, see [Azure Logic Apps Running Anywhere - Built-in connector extensibility](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
* You can use the B2B actions for Liquid Operations and XML Operations without an integration account. To use these actions, you need to have Liquid maps, XML maps, or XML schemas that you can upload through the respective actions in the Azure portal or add to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
Azure Logic Apps Preview includes many current and additional capabilities, for
* Regenerate access keys for managed connections used by individual workflows in a **Logic App (Preview)** resource. For this task, [follow the same steps for the **Logic Apps** resource but at the individual workflow level](logic-apps-securing-a-logic-app.md#regenerate-access-keys), not the logic app resource level. * Add parallel branches in the new designer by following the same steps as the non-preview designer.
-
+ For more information, see [Changed, limited, unavailable, and unsupported capabilities](#limited-unavailable-unsupported) and the [Logic Apps Public Preview Known Issues page in GitHub](https://github.com/Azure/logicapps/blob/master/articles/logic-apps-public-preview-known-issues.md). <a name="pricing-model"></a>
In Azure Logic Apps Preview, these capabilities have changed, or they are curren
* [On-premises data gateway *triggers*](../connectors/apis-list.md#on-premises-connectors) are unavailable, but gateway actions *are* available.
- * [Custom connectors](../connectors/apis-list.md#custom-apis-and-connectors) are unavailable.
- * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Function Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template. In the Azure portal, you can select an HTTP trigger function where you have access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your project's **connections.json** file, which is available after you create a connection.
In Azure Logic Apps Preview, these capabilities have changed, or they are curren
* The built-in action, [Azure Logic Apps - Choose a Logic App workflow](logic-apps-http-endpoint.md) is now **Workflow Operations - Invoke a workflow in this workflow app**.
+* [Custom connectors](../connectors/apis-list.md#custom-apis-and-connectors) aren't currently supported for preview.
+ * **Hosting plan availability**: Whether you create a new **Logic App (Preview)** resource type in the Azure portal or deploy from Visual Studio Code, you can only use the Premium or App Service hosting plan in Azure. Consumption hosting plans are unavailable and unsupported for deploying this resource type. You can deploy from Visual Studio Code to a Docker container, but not to an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md). * **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create stateful and stateless workflows in Visual Studio Code](create-stateful-stateless-workflows-visual-studio-code.md#manage-breakpoints).
logic-apps Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
machine-learning How To Monitor View Training Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-view-training-logs.md
When you use **ScriptRunConfig**, you can use ```run.wait_for_completion(show_ou
<a id="queryrunmetrics"></a>
-### Logging Run Metrics
+## View run metrics
-Use the following methods in the logging APIs to influence the metrics visualizations. Note the [service limits](https://docs.microsoft.com/azure/machine-learning/resource-limits-quotas-capacity#metrics) for these logged metrics.
+### Via the SDK
+You can view the metrics of a trained model using ```run.get_metrics()```. See the example below.
-|Logged Value|Example code| Format in portal|
-|-|-|-|
-|Log an array of numeric values| `run.log_list(name='Fibonacci', value=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89])`|single-variable line chart|
-|Log a single numeric value with the same metric name repeatedly used (like from within a for loop)| `for i in tqdm(range(-10, 10)): run.log(name='Sigmoid', value=1 / (1 + np.exp(-i))) angle = i / 2.0`| Single-variable line chart|
-|Log a row with 2 numerical columns repeatedly|`run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle)) sines['angle'].append(angle) sines['sine'].append(np.sin(angle))`|Two-variable line chart|
-|Log table with 2 numerical columns|`run.log_table(name='Sine Wave', value=sines)`|Two-variable line chart|
+```python
+from azureml.core import Run
+run = Run.get_context()
+run.log('metric-name', metric_value)
-## Query run metrics
+metrics = run.get_metrics()
+# metrics is of type Dict[str, List[float]] mapping metric names
+# to a list of the values for that metric in the given run.
-You can view the metrics of a trained model using ```run.get_metrics()```. For example, you could use this with the example above to determine the best model by looking for the model with the lowest mean square error (mse) value.
+metrics.get('metric-name')
+# list of values logged for 'metric-name', in the order they were recorded
+```
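For example, assuming each run in an experiment logged a metric named `mse` (a hypothetical name; substitute whatever your training script logs), a sketch like the following could compare completed runs and pick the one with the lowest value:

```python
from azureml.core import Workspace, Experiment

# Assumes a workspace config file is present and an experiment whose runs logged 'mse'.
ws = Workspace.from_config()
experiment = Experiment(workspace=ws, name="my-experiment")  # hypothetical experiment name

best_run, best_mse = None, float("inf")
for run in experiment.get_runs():
    mse = run.get_metrics().get("mse")
    if mse is None:
        continue
    # A metric logged once comes back as a scalar; a repeatedly logged metric comes back as a list.
    latest = mse[-1] if isinstance(mse, list) else mse
    if latest < best_mse:
        best_run, best_mse = run, latest

if best_run:
    print(best_run.id, best_mse)
```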
<a name="view-the-experiment-in-the-web-portal"></a>
machine-learning How To Track Experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-track-experiments.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
You can log multiple data types including scalar values, lists, tables, images, directories, and more. For more information, and Python code examples for different data types, see the [Run class reference page](/python/api/azureml-core/azureml.core.run%28class%29?preserve-view=true&view=azure-ml-py).
+### Logging Run Metrics
+
+Use the following methods in the logging APIs to influence the metrics visualizations. Note the [service limits](https://docs.microsoft.com/azure/machine-learning/resource-limits-quotas-capacity#metrics) for these logged metrics.
+
+|Logged Value|Example code| Format in portal|
+|-|-|-|
+|Log an array of numeric values| `run.log_list(name='Fibonacci', value=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89])`|single-variable line chart|
+|Log a single numeric value with the same metric name repeatedly used (like from within a for loop)| `for i in tqdm(range(-10, 10)): run.log(name='Sigmoid', value=1 / (1 + np.exp(-i))) angle = i / 2.0`| Single-variable line chart|
+|Log a row with 2 numerical columns repeatedly|`run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle)) sines['angle'].append(angle) sines['sine'].append(np.sin(angle))`|Two-variable line chart|
+|Log table with 2 numerical columns|`run.log_table(name='Sine Wave', value=sines)`|Two-variable line chart|
+|Log image|`run.log_image(name='food', path='./breadpudding.jpg', plot=None, description='desert')`|Use this method to log an image file or a matplotlib plot to the run. These images will be visible and comparable in the run record|
+
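A minimal script that combines several of the calls from the preceding table might look like the following; the metric names and values are illustrative, and the image call is shown commented out because it needs a real file path.

```python
import numpy as np
from azureml.core import Run

run = Run.get_context()

# Array of numeric values -> single-variable line chart
run.log_list(name='Fibonacci', value=[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89])

# Scalar logged repeatedly under the same name -> single-variable line chart
for i in range(-10, 10):
    run.log(name='Sigmoid', value=1 / (1 + np.exp(-i)))

# Rows with two numeric columns -> two-variable line chart
sines = {'angle': [], 'sine': []}
for i in range(-10, 10):
    angle = i / 2.0
    run.log_row(name='Cosine Wave', angle=angle, cos=np.cos(angle))
    sines['angle'].append(angle)
    sines['sine'].append(np.sin(angle))

# Table with two numeric columns -> two-variable line chart
run.log_table(name='Sine Wave', value=sines)

# Image file -> image in the run record (uncomment and point at a real file)
# run.log_image(name='food', path='./breadpudding.jpg', description='desert')
```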
+### Logging with MLflow
+Use MLFlowLogger to log metrics.
+
+```python
+from azureml.core import Run
+# connect to the workspace from within your running code
+run = Run.get_context()
+ws = run.experiment.workspace
+
+# workspace has associated ml-flow-tracking-uri
+mlflow_url = ws.get_mlflow_tracking_uri()
+
+#Example: PyTorch Lightning
+from pytorch_lightning.loggers import MLFlowLogger
+
+mlf_logger = MLFlowLogger(experiment_name=run.experiment.name, tracking_uri=mlflow_url)
+mlf_logger._run_id = run.id
+```
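Continuing the snippet above, the logger is then handed to the training framework; with PyTorch Lightning this typically means passing it to the `Trainer`. The model is a placeholder here, so this is only a sketch of the wiring:

```python
from pytorch_lightning import Trainer

# Pass the logger to the Trainer so metrics logged during training are sent
# to the workspace's MLflow tracking URI configured above.
trainer = Trainer(logger=mlf_logger, max_epochs=1)
# trainer.fit(model)  # 'model' would be your own LightningModule (placeholder)
```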
+ ## Interactive logging session Interactive logging sessions are typically used in notebook environments. The method [Experiment.start_logging()](/python/api/azureml-core/azureml.core.experiment%28class%29?preserve-view=true&view=azure-ml-py#&preserve-view=truestart-logging--args-kwargs-) starts an interactive logging session. Any metrics logged during the session are added to the run record in the experiment. The method [run.complete()](/python/api/azureml-core/azureml.core.run%28class%29?preserve-view=true&view=azure-ml-py#&preserve-view=truecomplete--set-status-true-) ends the sessions and marks the run as completed.
machine-learning Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
mariadb Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mariadb/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
media-services Migrate V 2 V 3 Migration Scenario Based Encoding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-encoding.md
Take a few minutes to look at the flowcharts below for a visual comparison of th
Click on the image below to see a larger version.
-[ ![Encoding workflow for V2](./media/migration-guide/V2-pretty.svg) ](./media/migration-guide/V2-pretty.svg#lightbox)
+[![Encoding workflow for V2](./media/migration-guide/V2-pretty.svg) ](./media/migration-guide/V2-pretty.svg#lightbox)
1. Setup 1. Create an asset or use an existing asset. If using a new asset, upload content to that asset. If using an existing asset, you should be encoding files that already exist in the asset.
Click on the image below to see a larger version.
### V3 encoding workflow
-[ ![Encoding workflow for V3](./media/migration-guide/V3-pretty.svg) ](./media/migration-guide/V3-pretty.svg#lightbox)
+<Token>
+<object data="./media/migration-guide/v3-pretty2.svg" width="80%"></object>
+</Token>
1. Set up
- 1. Create an asset or use and existing asset. If using a new asset, upload content to that asset. If using an existing asset, you should be encoding files that already exist in the asset. You *shouldn't upload more content to that asset.*
+ 1. Create an asset or use an existing asset. If using a new asset, upload content to that asset. If using an existing asset, you should be encoding files that already exist in the asset. You *shouldn't upload more content to that asset.*
1. Create an output asset. The output asset is where the encoded files and input and output metadata will be stored. 1. Get values for the transform: - Standard Encoder preset
If your V2 code called the Standard Encoder with a custom preset, you first need
Custom presets are now JSON and no longer XML based. Recreate your preset in JSON following the custom preset schema as defined in the [Transform Open API (Swagger)](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2020-05-01/examples/transforms-create.json) documentation. -
-<!-- removed because this is covered in the tutorials
-Common custom [encoding](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2020-05-01/Encoding.json) scenarios:
- 1. Create a custom Single Bitrate MP4 encode
- 1. Create a custom [Adaptive Bitrate Encoding Ladder](autogen-bitrate-ladder.md)
- 1. Creating Sprite Thumbnails
- 1. Creating Thumbnails (see below for your preferred method)
- 1. [Sub Clipping](subclip-video-rest-howto.md)
- 1. Cropping
## Input and output metadata files from an encoding job In v2, XML input and output metadata files get generated as the result of an encoding job. In v3, the metadata format changed from XML to JSON. For more information about metadata, see [Input metadata](input-metadata-schema.md) and [Output metadata](output-metadata-schema.md).
media-services Stream Live Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/stream-live-tutorial-with-api.md
The tutorial shows you how to:
The following items are required to complete the tutorial: - Install Visual Studio Code or Visual Studio.-- [Create a Media Services account](./create-account-howto.md).<br/>Make sure to remember the values you use for the resource group name and Media Services account name.-- Follow the steps in [Access Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need to use them to access the API.
+- [Create a Media Services account](./create-account-howto.md).<br/>Make sure to copy the API Access details in JSON format or store the values needed to connect to the Media Services account in the .env file format used in this sample.
+- Follow the steps in [Access Azure Media Services API with the Azure CLI](./access-api-howto.md) and save the credentials. You'll need to use them to access the API in this sample, or enter them into the .env file format.
- A camera or a device (like a laptop) that's used to broadcast an event.-- An on-premises live encoder that converts signals from the camera to streams that are sent to the Media Services live streaming service, see [recommended on-premises live encoders](recommended-on-premises-live-encoders.md). The stream has to be in **RTMP** or **Smooth Streaming** format. -- For this sample, it is recommended to start with a software encoder like the OBS Studio live streaming software to get started.
+- An on-premises software encoder that encodes your camera stream and sends it to the Media Services live streaming service by using the RTMP protocol. See [recommended on-premises live encoders](recommended-on-premises-live-encoders.md). The stream has to be in **RTMP** or **Smooth Streaming** format.
+- For this sample, we recommend starting with a free software encoder like [Open Broadcaster Software (OBS) Studio](https://obsproject.com/download) to make it easy to get started.
> [!TIP] > Make sure to review [Live streaming with Media Services v3](live-streaming-overview.md) before proceeding. ## Download and configure the sample
-Clone a GitHub repository that contains the streaming .NET sample to your machine using the following command:
+Clone the GitHub repository that contains the live streaming .NET sample to your machine by using the following command:
```bash git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
The live streaming sample is located in the [Live](https://github.com/Azure-Samp
Open [appsettings.json](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Live/LiveEventWithDVR/appsettings.json) in your downloaded project. Replace the values with the credentials you got from [accessing APIs](./access-api-howto.md).
+Note that you can also use the .env file format at the root of the project to set your environment variables only once for all projects in the .NET samples repository. Just copy the sample.env file, fill in the information that you obtain from the Media Services API Access page in the Azure portal or from the Azure CLI, and rename the sample.env file to ".env" to use it across all projects.
+The .gitignore file is already configured to avoid publishing the contents of this file to your forked repository.
+ > [!IMPORTANT] > This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple Live Events in your account. <br/>Make sure to stop the running Live Events. Otherwise, you'll be **billed**!
This section examines functions defined in the [Program.cs](https://github.com/A
The sample creates a unique suffix for each resource so that you don't have name collisions if you run the sample multiple times without cleaning up.
-> [!IMPORTANT]
-> This sample uses a unique suffix for each resource. If you cancel the debugging or terminate the app without running it through, you'll end up with multiple Live Events in your account. <br/>
-> Make sure to stop the running Live Events. Otherwise, you'll be **billed**!
### Start using Media Services APIs with .NET SDK
-To start using Media Services APIs with .NET, you need to create an **AzureMediaServicesClient** object. To create the object, you need to supply credentials needed for the client to connect to Azure using Azure AD. In the code you cloned at the beginning of the article, the **GetCredentialsAsync** function creates the ServiceClientCredentials object based on the credentials supplied in local configuration file.
+To start using Media Services APIs with .NET, you need to create an **AzureMediaServicesClient** object. To create the object, you need to supply credentials needed for the client to connect to Azure using Azure AD. In the code you cloned at the beginning of the article, the **GetCredentialsAsync** function creates the ServiceClientCredentials object based on the credentials supplied in the local configuration file (appsettings.json) or through the .env environment variables file located at the root of the repository.
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateMediaServicesClient)] ### Create a live event
-This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the available types of Live Events, see [Live Event types](live-events-outputs-concept.md#live-event-types).
+This section shows how to create a **pass-through** type of Live Event (LiveEventEncodingType set to None). For more information about the other available types of Live Events, see [Live Event types](live-events-outputs-concept.md#live-event-types). In addition to pass-through, you can use a live transcoding Live Event for 720P or 1080P adaptive bitrate cloud encoding.
Some things that you might want to specify when creating the live event are:
-* Media Services location.
-* The streaming protocol for the Live Event (currently, the RTMP and Smooth Streaming protocols are supported).<br/>You can't change the protocol option while the Live Event or its associated Live Outputs are running. If you require different protocols, create separate Live Event for each streaming protocol.
+* The ingest protocol for the Live Event (currently, the RTMP(S) and Smooth Streaming protocols are supported).<br/>You can't change the protocol option while the Live Event or its associated Live Outputs are running. If you require different protocols, create a separate Live Event for each streaming protocol.
* IP restrictions on the ingest and preview. You can define the IP addresses that are allowed to ingest a video to this Live Event. Allowed IP addresses can be specified as either a single IP address (for example '10.0.0.1'), an IP range using an IP address and a CIDR subnet mask (for example, '10.0.0.1/22'), or an IP range using an IP address and a dotted decimal subnet mask (for example, '10.0.0.1(255.255.252.0)').<br/>If no IP addresses are specified and there's no rule definition, then no IP address will be allowed. To allow any IP address, create a rule and set 0.0.0.0/0.<br/>The IP addresses have to be in one of the following formats: IpV4 address with four numbers or CIDR address range. * When creating the event, you can specify to autostart it. <br/>When autostart is set to true, the Live Event will be started after creation. That means the billing starts as soon as the Live Event starts running. You must explicitly call Stop on the Live Event resource to halt further billing. For more information, see [Live Event states and billing](live-event-states-billing.md).
-* For an ingest URL to be predictive, set the "vanity" mode. For detailed information, see [Live Event ingest URLs](live-events-outputs-concept.md#live-event-ingest-urls).
+There are also standby modes available to start the Live Event in a lower-cost 'allocated' state that makes it faster to move to a 'Running' state. This is useful for situations like hot pools that need to hand out channels quickly to streamers.
+* For an ingest URL to be predictable and easier to maintain in a hardware-based live encoder, set the "useStaticHostname" property to true. For detailed information, see [Live Event ingest URLs](live-events-outputs-concept.md#live-event-ingest-urls).
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateLiveEvent)]
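The sample above is .NET. Purely to illustrate how the settings described in this section map to code, here is a rough sketch of the same options using the Python `azure-mgmt-media` management package. The client, model, and parameter names are assumptions based on that SDK and can differ between versions; the .NET sample remains the reference implementation for this tutorial.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    IPAccessControl, IPRange, LiveEvent, LiveEventInput, LiveEventInputAccessControl)

# Placeholders: use your own subscription, resource group, account, and region.
client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

live_event = LiveEvent(
    location="westus2",
    description="Sample pass-through live event",
    input=LiveEventInput(
        # Ingest protocol; can't be changed while the Live Event is running.
        streaming_protocol="RTMP",
        # Allow ingest from any IP address (0.0.0.0/0) for this sketch only.
        access_control=LiveEventInputAccessControl(
            ip=IPAccessControl(
                allow=[IPRange(name="AllowAll", address="0.0.0.0", subnet_prefix_length=0)])),
    ),
    # Predictable ingest URLs, as described above.
    use_static_hostname=True,
)

# auto_start=False avoids starting (and billing for) the event until you explicitly start it.
poller = client.live_events.begin_create(
    "<resource-group>", "<account-name>", "<live-event-name>",
    parameters=live_event, auto_start=False)
created_event = poller.result()
```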
Use the previewEndpoint to preview and verify that the input from the encoder is
Once you have the stream flowing into the Live Event, you can begin the streaming event by creating an Asset, Live Output, and Streaming Locator. This will archive the stream and make it available to viewers through the Streaming Endpoint.
+When learning these concepts, it is best to think of the "Asset" object as the tape that you would insert into a video tape recorder in the old days. The "Live Output" is the tape recorder machine. The "Live Event" is just the video signal coming into the back of the machine.
+
+You first create the signal by creating the "Live Event". The signal is not flowing until you start that Live Event and connect your encoder to the input.
+
+The tape can be created at any time. It is just an empty "Asset" that you will hand to the Live Output object, the tape recorder in this analogy.
+
+The tape recorder can be created at any time, meaning you can create a Live Output before or after you start the signal flow. If you need to speed things up, it is sometimes helpful to create it before you start the signal flow.
+
+To stop the tape recorder, you call delete on the LiveOutput. This does not delete the contents on the tape "Asset". The Asset is always kept with the archived video content until you call delete explicitly on the Asset itself.
+
+The next section will walk through the creation of the Asset ("tape") and the Live Output ("tape recorder").
+ #### Create an Asset
-Create an Asset for the Live Output to use.
+Create an Asset for the Live Output to use. In the analogy above, this will be our tape that we record the live video signal onto. Viewers will be able to see the contents live or on-demand from this virtual tape.
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateAsset)] #### Create a Live Output
-Live Outputs start on creation and stop when deleted. When you delete the Live Output, you're not deleting the underlying Asset and content in the asset.
+Live Outputs start on creation and stop when deleted. This is going to be the "tape recorder" for our event. When you delete the Live Output, you're not deleting the underlying Asset or content in the asset. Think of it as ejecting the tape. The Asset with the recording will last as long as you like, and when it is ejected (meaning, when the Live Output is deleted) it will be available for on-demand viewing immediately.
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateLiveOutput)]
Live Outputs start on creation and stop when deleted. When you delete the Live O
> [!NOTE] > When your Media Services account is created, a **default** streaming endpoint is added to your account in the **Stopped** state. To start streaming your content and take advantage of [dynamic packaging](dynamic-packaging-overview.md) and dynamic encryption, the streaming endpoint from which you want to stream content has to be in the **Running** state.
-When you publish the Live Output asset using a Streaming Locator, the Live Event (up to the DVR window length) will continue to be viewable until the Streaming Locator's expiry or deletion, whichever comes first.
+When you publish the Asset using a Streaming Locator, the Live Event (up to the DVR window length) will continue to be viewable until the Streaming Locator's expiry or deletion, whichever comes first. This is how you make the virtual "tape" recording available for your viewing audience to see live and on-demand. The same URL can be used to watch the live event, DVR window, or the on-demand asset when the recording is complete (when the Live Output is deleted).
[!code-csharp[Main](../../../media-services-v3-dotnet/Live/LiveEventWithDVR/Program.cs#CreateStreamingLocator)]
mysql Howto Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-configure-sign-in-azure-ad-authentication.md
Only an Azure AD Admin user can create/enable users for Azure AD-based authentic
Only one Azure AD admin can be created per MySQL server; selecting another one overwrites the existing Azure AD admin configured for the server.
-In a future release we will support specifying an Azure AD group instead of an individual user to have multiple administrators, however this is currently not supported yet.
- After configuring the administrator, you can now sign in: ## Connecting to Azure Database for MySQL using Azure AD
mysql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for MySQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MySQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-extensions.md
Previously updated : 09/23/2020 Last updated : 03/05/2021 # PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[ltree](https://www.postgresql.org/docs/12/ltree.html) | 1.1 | data type for hierarchical tree-like structures| > |[pageinspect](https://www.postgresql.org/docs/12/pageinspect.html) | 1.7 | inspect the contents of database pages at a low level| > |[pg_buffercache](https://www.postgresql.org/docs/12/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.2 | Job scheduler for PostgreSQL|
> |[pg_freespacemap](https://www.postgresql.org/docs/12/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_prewarm](https://www.postgresql.org/docs/12/pgprewarm.html) | 1.2 | prewarm relation data| > |[pg_stat_statements](https://www.postgresql.org/docs/12/pgstatstatements.html) | 1.7 | track execution statistics of all SQL statements executed|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_visibility](https://www.postgresql.org/docs/12/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info| > |[pgaudit](https://www.pgaudit.org/) | 1.4 | provides auditing functionality| > |[pgcrypto](https://www.postgresql.org/docs/12/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pglogical](https://github.com/2ndQuadrant/pglogical) | 2.3.2 | PostgreSQL logical replication|
> |[pgrowlocks](https://www.postgresql.org/docs/12/pgrowlocks.html) | 1.2 | show row-level locking information| > |[pgstattuple](https://www.postgresql.org/docs/12/pgstattuple.html) | 1.5 | show tuple-level statistics| > |[plpgsql](https://www.postgresql.org/docs/12/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[ltree](https://www.postgresql.org/docs/11/ltree.html) | 1.1 | data type for hierarchical tree-like structures| > |[pageinspect](https://www.postgresql.org/docs/11/pageinspect.html) | 1.7 | inspect the contents of database pages at a low level| > |[pg_buffercache](https://www.postgresql.org/docs/11/pgbuffercache.html) | 1.3 | examine the shared buffer cache|
+> |[pg_cron](https://github.com/citusdata/pg_cron) | 1.2 | Job scheduler for PostgreSQL|
> |[pg_freespacemap](https://www.postgresql.org/docs/11/pgfreespacemap.html) | 1.2 | examine the free space map (FSM)| > |[pg_prewarm](https://www.postgresql.org/docs/11/pgprewarm.html) | 1.2 | prewarm relation data| > |[pg_stat_statements](https://www.postgresql.org/docs/11/pgstatstatements.html) | 1.6 | track execution statistics of all SQL statements executed|
The following extensions are available in Azure Database for PostgreSQL - Flexib
> |[pg_visibility](https://www.postgresql.org/docs/11/pgvisibility.html) | 1.2 | examine the visibility map (VM) and page-level visibility info| > |[pgaudit](https://www.pgaudit.org/) | 1.3.1 | provides auditing functionality| > |[pgcrypto](https://www.postgresql.org/docs/11/pgcrypto.html) | 1.3 | cryptographic functions|
+> |[pglogical](https://github.com/2ndQuadrant/pglogical) | 2.3.2 | PostgreSQL logical replication|
> |[pgrowlocks](https://www.postgresql.org/docs/11/pgrowlocks.html) | 1.2 | show row-level locking information| > |[pgstattuple](https://www.postgresql.org/docs/11/pgstattuple.html) | 1.5 | show tuple-level statistics| > |[plpgsql](https://www.postgresql.org/docs/11/plpgsql.html) | 1.0 | PL/pgSQL procedural language|
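For example, after `pg_cron` has been enabled on the server (on Flexible Server this typically requires adding it to the relevant server parameters, such as the shared preload and allow-list settings, before you create the extension; check the server-parameter documentation for the exact names), a job can be scheduled with plain SQL. The sketch below uses the `psycopg2` driver; the connection details and the scheduled command are placeholders.

```python
import psycopg2

# Placeholder connection details for an Azure Database for PostgreSQL flexible server.
conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="postgres",   # pg_cron metadata is usually kept in the 'postgres' database
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_cron;")
    # Schedule a nightly cleanup at 03:00 UTC (two-argument form available in pg_cron 1.2).
    cur.execute(
        "SELECT cron.schedule('0 3 * * *', "
        "$$DELETE FROM events WHERE event_time < now() - interval '7 days'$$);"
    )
    # List the registered jobs.
    cur.execute("SELECT jobid, schedule, command FROM cron.job;")
    for row in cur.fetchall():
        print(row)

conn.close()
```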
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/overview.md
The flexible server service is equipped with built-in performance monitoring and
One of the advantages of running your workload in Azure is its global reach. The flexible server is available today in the following Azure regions:
-| Region | Availability | Zone redundant HA |
+| Region | Availability | Zone-redundant HA |
| | | | | West Europe | :heavy_check_mark: | :heavy_check_mark: | | North Europe | :heavy_check_mark: | :heavy_check_mark: |
One of the advantages of running your workload in Azure is its global reach. The | Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | | Japan East | :heavy_check_mark: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | | Japan East | :heavy_check_mark: | :heavy_check_mark: |
-We continue to add new regions.
+We continue to add more regions for flexible server.
## Migration
The service runs the community version of PostgreSQL. This allows full applicati
- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like pg_dump and pg_restore can provide the fastest way to migrate. See [Migrate using dump and restore](../howto-migrate-using-dump-and-restore.md) for details.
- **Azure Database Migration Service** – For seamless and simplified migrations to flexible server with minimal downtime, Azure Database Migration Service can be leveraged. See [DMS via portal](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../../dms/tutorial-postgresql-azure-postgresql-online.md). You can migrate from your Azure Database for PostgreSQL - Single Server to Flexible Server. See this [DMS article](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for details.
+## Contacts
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address is not a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597976-azure-database-for-postgresql).
+
+ ## Next steps Now that you've read an introduction to Azure Database for PostgreSQL flexible server deployment mode, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)
postgresql Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Database for PostgreSQL description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for PostgreSQL. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
Previously updated : 01/19/2021 Last updated : 03/07/2021 # Customer intent: As a security officer, I need to understand how to use the Azure Purview connector for Amazon S3 service to set up, configure, and scan my Amazon S3 buckets.
The following table maps the regions where your data is stored to the region wher
| Asia Pacific (Sydney) | Europe (Frankfurt) | | Asia Pacific (Tokyo) | Europe (Frankfurt) | | Canada (Central) | US East (Ohio) |
-| China (Beijing) | Europe (Frankfurt) |
-| China (Ningxia) | Europe (Frankfurt) |
+| China (Beijing) | Not supported |
+| China (Ningxia) | Not supported |
| Europe (Frankfurt) | Europe (Frankfurt) | | Europe (Ireland) | Europe (Frankfurt) | | Europe (London) | Europe (Frankfurt) |
role-based-access-control Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/overview.md
Azure Route Server simplifies configuration, management, and deployment of your
* You no longer need to update [User-Defined Routes](../virtual-network/virtual-networks-udr-overview.md) manually whenever your NVA announces new routes or withdraw old ones.
-* You no longer need to configure a load balancer in front of your NVA for resiliency or performance purposes. When you peer multiple instances of your NVA with Azure Route Server, you can configure the BGP attributes in your NVA. These BGP attributes will let Azure Route Server which NVA instance should be active or passive.
+* You can peer multiple instances of your NVA with Azure Route Server. You can configure the BGP attributes in your NVA and, depending on your design (e.g., active-active for performance or active-passive for resiliency), let Azure Route Server know which NVA instance is active or which one is passive.
* The interface between NVA and Azure Route Server is based on a common standard protocol. As long as your NVA supports BGP, you can peer it with Azure Route Server. For more information, see [Route Server supported routing protocols](route-server-faq.md#protocol).
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-sharepoint-online.md
Last updated 03/01/2021
> > The [REST API version 2020-06-30-Preview](search-api-preview.md) provides this feature. There is currently no portal or SDK support.
+> [!NOTE]
+> SharePoint Online supports a granular authorization model that determines per-user access at the document level. The SharePoint Online indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint Online into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate security filters to trim results of unauthorized content. For more information, see [Security trimming using Active Directory identities](search-security-trimming-for-azure-search-with-aad.md).
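As a sketch of what query-time security trimming can look like, the following uses the `azure-search-documents` Python client to filter on a hypothetical `group_ids` field. That field is not produced by the SharePoint Online indexer; your own indexing pipeline would have to populate it, and the service, index, and group values here are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder service, index, and key values.
client = SearchClient(
    endpoint="https://<service-name>.search.windows.net",
    index_name="<index-name>",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Return only documents whose 'group_ids' field (populated by your own pipeline)
# contains one of the groups that the signed-in user belongs to.
user_groups = "group-a,group-b"  # illustrative group identifiers
results = client.search(
    search_text="quarterly report",
    filter=f"group_ids/any(g: search.in(g, '{user_groups}', ','))",
)
for doc in results:
    print(doc)
```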
+ This article describes how to use Azure Cognitive Search to index documents (such as PDFs, Microsoft Office documents, and several other common formats) stored in SharePoint Online document libraries into an Azure Cognitive Search index. First, it explains the basics of setting up and configuring the indexer. Then, it offers a deeper exploration of behaviors and scenarios you are likely to encounter. ## Functionality
search Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
security-center Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/custom-security-policies.md
Title: Create custom security policies in Azure Security Center | Microsoft Docs description: Azure custom policy definitions monitored by Azure Security Center.-
With this feature, you can add your own *custom* initiatives. You'll then receiv
As discussed in [the Azure Policy documentation](../governance/policy/concepts/definition-structure.md#definition-location), when you specify a location for your custom initiative, it must be a management group or a subscription.
+> [!TIP]
+> For an overview of the key concepts on this page, see [What are security policies, initiatives, and recommendations?](security-policy-concept.md).
+ ::: zone pivot="azure-portal" ## To add a custom initiative to your subscription
security-center Prevent Misconfigurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/prevent-misconfigurations.md
Title: How to prevent misconfigurations with Azure Security Center description: Learn how to use Security Center's 'Enforce' and 'Deny' options on the recommendations details pages-
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Learn more about this dashboard in [Azure Security Center's overview page](overv
### SQL vulnerability assessment now includes the "Disable rule" experience (preview)
-Security Center includes a built-in vulnerability scanner to help you discover, track, and remediate potential database vulnerabilities. The findings from your assessment scans provide an overview of your SQL machines' security state, and details of any security findings.
+Security Center includes a built-in vulnerability scanner to help you discover, track, and remediate potential database vulnerabilities. The results from your assessment scans provide an overview of your SQL machines' security state, and details of any security findings.
If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
security-center Security Center Adaptive Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-adaptive-application.md
Title: Adaptive application controls in Azure Security Center description: This document helps you use adaptive application control in Azure Security Center to allow list applications running in Azure machines.- - - Last updated 02/07/2021
security-center Security Center Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-adaptive-network-hardening.md
Title: Adaptive network hardening in Azure Security Center | Microsoft Docs description: Learn how to use actual traffic patterns to harden your network security groups (NSG) rules and further improve your security posture.- - - Last updated 03/11/2020
security-center Security Center Network Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-network-recommendations.md
Title: Protecting your network resources in Azure Security Center description: This document addresses recommendations in Azure Security Center that help you protect your Azure network resources and stay in compliance with security policies.- - Last updated 04/05/2019
security-center Security Center Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-os-coverage.md
Title: Platforms supported by Azure Security Center | Microsoft Docs description: This document provides a list of platforms supported by Azure Security Center.- - Last updated 03/31/2020
security-center Security Center Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-permissions.md
Title: Permissions in Azure Security Center | Microsoft Docs description: This article explains how Azure Security Center uses role-based access control to assign permissions to users and identifies the allowed actions for each role.-
-cloud: na
- Previously updated : 12/01/2020 Last updated : 01/03/2021
In addition to these roles, there are two specific Security Center roles:
The following table displays roles and allowed actions in Security Center.
-|Action|Security Reader / <br> Reader |Security Admin |Resource Group Contributor / <br> Resource Group Owner |Subscription Contributor |Subscription Owner |
-|: |::|::|::|::|::|
-|Edit security policy|-|✔|-|-|✔|
-|Add/assign initiatives (including) regulatory compliance standards)|-|-|-|-|✔|
-|Enable / disable Azure Defender|-|✔|-|-|✔|
-|Enable / disable auto-provisioning|-|✔|-|✔|✔|
-|Apply security recommendations for a resource</br> (and use [Quick Fix!](security-center-remediate-recommendations.md#quick-fix-remediation))|-|-|✔|✔|✔|
-|Dismiss alerts|-|✔|-|✔|✔|
-|View alerts and recommendations|✔|✔|✔|✔|✔|
+| Action | Security Reader / <br> Reader | Security Admin | Resource Group Contributor / <br> Resource Group Owner | Subscription Contributor | Subscription Owner |
+|:-|:--:|:--:|::|::|::|
+| Edit security policy | - | ✔ | - | - | ✔ |
+| Add/assign initiatives (including regulatory compliance standards) | - | - | - | - | ✔ |
+| Enable / disable Azure Defender | - | ✔ | - | - | ✔ |
+| Enable / disable auto-provisioning | - | ✔ | - | ✔ | ✔ |
+| Apply security recommendations for a resource</br> (and use [Quick Fix!](security-center-remediate-recommendations.md#quick-fix-remediation)) | - | - | ✔ | ✔ | ✔ |
+| Dismiss alerts | - | ✔ | - | ✔ | ✔ |
+| View alerts and recommendations | ✔ | ✔ | ✔ | ✔ | ✔ |
> [!NOTE] > We recommend that you assign the least permissive role needed for users to complete their tasks. For example, assign the Reader role to users who only need to view information about the security health of a resource but not take action, such as applying recommendations or editing policies.
security-center Security Center Powershell Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-powershell-onboarding.md
Title: Onboard to Azure Security Center with PowerShell description: This document walks you through the process of enabling Azure Security Center with PowerShell cmdlets.- - Last updated 01/24/2021
security-center Security Center Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-pricing.md
Last updated 02/14/2021
# Azure Security Center free vs Azure Defender enabled Azure Defender is free for the first 30 days. At the end of 30 days, should you choose to continue using the service, we'll automatically start charging for usage.
+You can upgrade from the **Pricing & settings** page, as described in [Quickstart: Enable Azure Defender](enable-azure-defender.md). For pricing details in your currency of choice and according to your region, see [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/).
+ ## What are the benefits of enabling Azure Defender? Security Center is offered in two modes:
Security Center has two offerings:
### How do I enable Azure Defender for my subscription?
You can use any of the following ways to enable Azure Defender for your subscription (a brief Azure CLI sketch follows the table):
-|Method |Instructions |
-|||
-|Azure Security Center pages of the Azure portal|[Enable Azure Defender](enable-azure-defender.md)|
-|REST API|[Pricings API](/rest/api/securitycenter/pricings)|
-|Azure CLI|[az security pricing](/cli/azure/security/pricing)|
-|PowerShell|[Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing)|
-|Azure Policy|[Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json)|
-|||
+| Method | Instructions |
+|-|-|
+| Azure Security Center pages of the Azure portal | [Enable Azure Defender](enable-azure-defender.md) |
+| REST API | [Pricings API](/rest/api/securitycenter/pricings) |
+| Azure CLI | [az security pricing](/cli/azure/security/pricing) |
+| PowerShell | [Set-AzSecurityPricing](/powershell/module/az.security/set-azsecuritypricing) |
+| Azure Policy | [Bundle Pricings](https://github.com/Azure/Azure-Security-Center/blob/master/Pricing%20%26%20Settings/ARM%20Templates/Set-ASC-Bundle-Pricing.json) |
+| | |
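For instance, the Azure CLI option in the table can be a one-liner. This is a minimal sketch that assumes the servers plan, named `VirtualMachines`:

```bash
# Enable the Azure Defender (standard tier) plan for servers on the current subscription.
az security pricing create --name VirtualMachines --tier 'standard'

# Confirm the tier that is now in effect for that plan.
az security pricing show --name VirtualMachines
```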
### Can I enable Azure Defender for servers on a subset of servers in my subscription?
No. When you enable [Azure Defender for servers](defender-for-servers-introduction.md) on a subscription, all the servers in the subscription will be protected by Azure Defender.
security-center Security Center Provide Security Contact Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-provide-security-contact-details.md
Title: Configure email notifications for Azure Security Center alerts description: Learn how to fine-tune the types of emails sent out by Azure Security Center for security alerts. - - Last updated 02/09/2021
security-center Security Center Remediate Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-remediate-recommendations.md
Title: Remediate recommendations in Azure Security Center | Microsoft Docs description: This article explains how to respond to recommendations in Azure Security Center to protect your resources and satisfy security policies.- - Previously updated : 09/08/2020 Last updated : 03/04/2021
security-center Security Center Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
Title: Azure Security Center's features according to OS, machine type, and cloud description: Learn about which Azure Security Center features are available according to their OS, type, and cloud deployment.- - Last updated 02/16/2021
security-center Security Center Threat Report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-threat-report.md
Title: Azure Security Center threat intelligence report | Microsoft Docs description: This page helps you to use Azure Security Center threat intelligence reports during an investigation to find more information about security alerts- - Last updated 06/15/2020
security-center Security Center Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-troubleshooting-guide.md
Title: Azure Security Center Troubleshooting Guide | Microsoft Docs description: This guide is for IT professionals, security analysts, and cloud admins who need to troubleshoot Azure Security Center related issues.----++ Last updated 09/10/2019
service-bus-messaging Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
service-fabric Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Service Fabric description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Fabric. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
spring-cloud Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Spring Cloud description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Cloud. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
storage Data Lake Storage Access Control Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-access-control-model.md
This table shows a column that represents each level of a fictitious directory h
> [!NOTE]
-> To view the contents of a container in Azure Storage Explorer, security principals must [sign into Storage Explorer by using Azure AD](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#add-a-resource-via-azure-ad), and (at a minimum) have read access (R--) to the root folder (`\`) of a container. This level of permission does give them the ability to list the contents of the root folder. If you don't want the contents of the root folder to be visible, you can assign them [Reader](../../role-based-access-control/built-in-roles.md#reader) role. With that role, they'll be able to list the containers in the account, but not container contents. You can then grant access to specific directories and files by using ACLs.
+> To view the contents of a container in Azure Storage Explorer, security principals must [sign in to Storage Explorer by using Azure AD](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#attach-to-an-individual-resource), and (at a minimum) have read access (R--) to the root folder (`\`) of a container. This level of permission does give them the ability to list the contents of the root folder. If you don't want the contents of the root folder to be visible, you can assign them the [Reader](../../role-based-access-control/built-in-roles.md#reader) role. With that role, they'll be able to list the containers in the account, but not container contents. You can then grant access to specific directories and files by using ACLs.
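As a hedged Azure CLI sketch of that pattern (the object ID, account, container, and directory names are placeholders; `az storage fs access set` replaces the ACL on the target path, so the base entries shown stand in for whatever is already there):

```bash
# Let the principal list the containers in the account, but not their contents.
az role assignment create \
  --assignee "<user-object-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>"

# Then grant read and execute on a specific directory through its ACL.
az storage fs access set \
  --acl "user::rwx,group::r-x,other::---,user:<user-object-id>:r-x" \
  --path "sales" \
  --file-system "<container>" \
  --account-name "<account>" \
  --auth-mode login
```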
## Security groups
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
storage Storage Explorer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-troubleshooting.md
If you don't have a role that grants any management layer permissions, Storage
### What if I can't get the management layer permissions I need from my administrator?
-If you want to access blob containers or queues, you can attach to those resources using your Azure credentials.
+If you want to access blob containers, ADLS Gen2 containers or directories, or queues, you can attach to those resources using your Azure credentials.
1. Open the Connect dialog.
-2. Select "Add a resource via Azure Active Directory (Azure AD)". Select Next.
-3. Select the user account and tenant associated with the resource you're attaching to. Select Next.
-4. Select the resource type, enter the URL to the resource, and enter a unique display name for the connection. Select Next then Connect.
+1. Select the resource type you want to connect to.
+1. Select **Sign in using Azure Active Directory (Azure AD)**. Select **Next**.
+1. Select the user account and tenant associated with the resource you're attaching to. Select **Next**.
+1. Enter the URL to the resource, and enter a unique display name for the connection. Select **Next**, and then select **Connect**.
-For other resource types, we don't currently have an Azure RBAC-related solution. As a workaround, you can request a SAS URI to [attach to your resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=linux#use-a-shared-access-signature-uri).
+For other resource types, we don't currently have an Azure RBAC-related solution. As a workaround, you can request a SAS URL and then attach to your resource by following these steps:
+
+1. Open the Connect dialog.
+1. Select the resource type you want to connect to.
+1. Select **Shared access signature (SAS)**. Select **Next**.
+1. Enter the SAS URL that you received, and enter a unique display name for the connection. Select **Next**, and then select **Connect**.
+
+For more information on attaching to resources, see [Attach to an Individual Resource](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=linux#attach-to-an-individual-resource).
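If you need to produce that SAS URL yourself, a minimal Azure CLI sketch for a blob container looks like the following (the account, container, and expiry values are placeholders; other resource types have analogous `generate-sas` commands):

```bash
# Generate a read/list SAS token for a blob container and compose the URL to paste
# into Storage Explorer's "Shared access signature (SAS)" connection dialog.
sas=$(az storage container generate-sas \
  --account-name "<account>" \
  --name "<container>" \
  --permissions rl \
  --expiry 2021-12-31T00:00:00Z \
  --https-only \
  --output tsv)

echo "https://<account>.blob.core.windows.net/<container>?${sas}"
```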
### Recommended Azure built-in roles
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-scale-targets.md
To help you plan your deployment for each of the stages, below are the results o
| Namespace Download Throughput | 400 objects per second |

### Initial one-time provisioning

**Initial cloud change enumeration**: When a new sync group is created, initial cloud change enumeration is the first step that will execute. In this process, the system will enumerate all the items in the Azure file share. During this process, there will be no sync activity; that is, no items will be downloaded from the cloud endpoint to the server endpoint and no items will be uploaded from the server endpoint to the cloud endpoint. Sync activity will resume once initial cloud change enumeration completes. The rate of performance is 20 objects per second. Customers can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formula to get the time in days.

**Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(20 * 60 * 60 * 24)**
+**Initial sync of data from Windows Server to Azure file share**: Many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast, and the majority of time will be spent syncing changes from the Windows Server into the Azure file share(s).
+
+While sync uploads data to the Azure file share, there is no downtime on the local file server, and administrators can [set up network limits](https://docs.microsoft.com/azure/storage/files/storage-sync-files-server-registration#set-azure-file-sync-network-limits) to restrict the amount of bandwidth used for background data upload.
+
+Initial sync is typically limited by the initial upload rate of 20 files per second per sync group. Customers can estimate the time to upload all their data to Azure by using the following formula to get the time in days:
+
+ **Time (in days) for uploading files to a sync group = (Number of objects in cloud endpoint)/(20 * 60 * 60 * 24)**
+
+Splitting your data into multiple server endpoints and sync groups can speed up this initial data upload, because the upload can be done in parallel for multiple sync groups at a rate of 20 items per second each. For example, two sync groups would run at a combined rate of 40 items per second. The total time to complete is the time estimate for the sync group that has the most files to sync.
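To make that estimate concrete, here is a minimal sketch; the object count is an assumed example, and the 20 objects-per-second figure is the documented per-sync-group rate:

```bash
# Estimate the initial upload time, in days, for one sync group that contains
# 5,000,000 objects, at the documented rate of 20 objects per second.
objects=5000000
awk -v n="$objects" 'BEGIN { printf "Estimated time: %.1f days\n", n / (20 * 60 * 60 * 24) }'
```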
**Namespace download throughput**: When a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.

| Ongoing sync | Details |
As a general guide for your deployment, you should keep a few things in mind:
## See also
- [Planning for an Azure Files deployment](storage-files-planning.md)
-- [Planning for an Azure File Sync deployment](storage-sync-files-planning.md)
+- [Planning for an Azure File Sync deployment](storage-sync-files-planning.md)
storage Storage Sync Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-planning.md
If cloud tiering is enabled on a server endpoint, files that are tiered are skip
### Other Hierarchical Storage Management (HSM) solutions No other HSM solutions should be used with Azure File Sync.
+## Performance and Scalability
+
+Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends on a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Because Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution are better measured in the number of objects (files and directories) processed per second.
+
+Changes made to the Azure file share by using the Azure portal or SMB are not immediately detected and replicated like changes to the server endpoint. Azure Files does not yet have change notifications or journaling, so there's no way to automatically initiate a sync session when files are changed. On Windows Server, Azure File Sync uses [Windows USN journaling](https://docs.microsoft.com/windows/win32/fileio/change-journals) to automatically initiate a sync session when files change.
+
+To detect changes to the Azure file share, Azure File Sync has a scheduled job called a change detection job. A change detection job enumerates every file in the file share, and then compares it to the sync version for that file. When the change detection job determines that files have changed, Azure File Sync initiates a sync session. The change detection job is initiated every 24 hours. Because the change detection job works by enumerating every file in the Azure file share, change detection takes longer in larger namespaces than in smaller namespaces. For large namespaces, it might take longer than once every 24 hours to determine which files have changed.
+
+For more information, see [Azure File Sync performance metrics](storage-files-scale-targets.md#azure-file-sync-performance-metrics) and [Azure File Sync scale targets](storage-files-scale-targets.md#azure-file-sync-scale-targets).
## Identity
Azure File Sync works with your standard AD-based identity without any special setup beyond setting up sync. When you are using Azure File Sync, the general expectation is that most accesses go through the Azure File Sync caching servers, rather than through the Azure file share. Since the server endpoints are located on Windows Server, and Windows Server has supported AD and Windows-style ACLs for a long time, nothing is needed beyond ensuring the Windows file servers registered with the Storage Sync Service are domain joined. Azure File Sync will store ACLs on the files in the Azure file share, and will replicate them to all server endpoints.
stream-analytics Geospatial Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/geospatial-scenarios.md
Title: Geofencing and geospatial aggregation with Azure Stream Analytics description: This article describes how to use Azure Stream Analytics for geofencing and geospatial aggregation. --++ Last updated 04/02/2019
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/powerbi-output-managed-identity.md
Title: Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI output description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Power BI output. --++ Last updated 3/10/2020
stream-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
stream-analytics Stream Analytics Add Inputs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-add-inputs.md
Title: Understand inputs for Azure Stream Analytics description: This article describe the concept of inputs in an Azure Stream Analytics job, comparing streaming input to reference data input. --++ Last updated 10/29/2020
stream-analytics Stream Analytics Geospatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-geospatial-functions.md
Title: Introduction to Azure Stream Analytics geospatial functions description: This article describes geospatial functions that are used in Azure Stream Analytics jobs. --++ Last updated 12/06/2018
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
Title: Anomaly detection in Azure Stream Analytics description: This article describes how to use Azure Stream Analytics and Azure Machine Learning together to detect anomalies. --++ Last updated 06/21/2019
The following video demonstrates how to detect an anomaly in real time using mac
## Model behavior
-Generally, the model's accuracy improves with more data in the sliding window. The data in the specified sliding window is treated as part of its normal range of values for that time frame. The model only considers event history over the sliding window to check if the current event is anomalous. As the sliding window moves, old values are evicted from the modelΓÇÖs training.
+Generally, the model's accuracy improves with more data in the sliding window. The data in the specified sliding window is treated as part of its normal range of values for that time frame. The model only considers event history over the sliding window to check if the current event is anomalous. As the sliding window moves, old values are evicted from the model's training.
The functions operate by establishing a certain normal based on what they have seen so far. Outliers are identified by comparing against the established normal, within the confidence level. The window size should be based on the minimum events required to train the model for normal behavior so that when an anomaly occurs, it would be able to recognize it.
The history size, window duration, and total event load are related in the follo
windowDuration (in ms) = 1000 * historySize / (Total Input Events Per Sec / Input Partition Count)
-When partitioning the function by deviceId, add ΓÇ£PARTITION BY deviceIdΓÇ¥ to the anomaly detection function call.
+When partitioning the function by deviceId, add "PARTITION BY deviceId" to the anomaly detection function call.
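As a quick sanity check of that relationship, a small sketch follows; the history size, event rate, and partition count are assumed example values:

```bash
# windowDuration (in ms) = 1000 * historySize / (Total Input Events Per Sec / Input Partition Count)
# Assumed example: history of 1,000 events, 2,000 events/sec spread over 2 input partitions.
historySize=1000; eventsPerSec=2000; partitions=2
awk -v h="$historySize" -v e="$eventsPerSec" -v p="$partitions" \
    'BEGIN { printf "windowDuration = %.0f ms\n", 1000 * h / (e / p) }'
```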
### Observations
The following table includes the throughput observations for a single node (6 SU) for the non-partitioned case:
-| History size (events) | Window duration (ms) | Total input events per sec |
+| History size (events) | Window duration (ms) | Total input events per sec |
| --------------------- | -------------------- | -------------------------- |
| 60                    | 55                   | 2,200                      |
| 600                   | 728                  | 1,650                      |
stream-analytics Stream Analytics Machine Learning Integration Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-machine-learning-integration-tutorial.md
Title: Azure Stream Analytics integration with Azure Machine Learning Studio (classic) description: This article describes how to quickly set up a simple Azure Stream Analytics job that integrates Azure Machine Learning Studio (classic), using a user-defined function. --++ Last updated 08/12/2020
synapse-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
synapse-analytics Overview Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/overview-architecture.md
Synapse SQL uses a node-based architecture. Applications connect and issue T-SQL
The Azure Synapse SQL Control node utilizes a distributed query engine to optimize queries for parallel processing, and then passes operations to Compute nodes to do their work in parallel.
-The serverless SQL pool Control node utilizes Distributed Query Processing (DQP) engine to optimize and orchestrate distributed execution of user query by splitting it into smaller queries that will be executed on Compute nodes. Each small query is called task and represents distributed execution unit. It reads file(s) from storage, joins results from other tasks, groups or orders data retrieved from other tasks.
+The serverless SQL pool Control node utilizes a Distributed Query Processing (DQP) engine to optimize and orchestrate distributed execution of a user query by splitting it into smaller queries that are executed on Compute nodes. Each small query is called a task and represents a distributed execution unit. It reads files from storage, joins results from other tasks, groups, or orders data retrieved from other tasks.
The Compute nodes store all user data in Azure Storage and run the parallel queries. The Data Movement Service (DMS) is a system-level internal service that moves data across the nodes as necessary to run queries in parallel and return accurate results.
With decoupled storage and compute, when using Synapse SQL one can benefit from
Synapse SQL leverages Azure Storage to keep your user data safe. Since your data is stored and managed by Azure Storage, there is a separate charge for your storage consumption.
-Serverless SQL pool lets you query files in your data lake in read-only manner, while SQL pool lets you ingest data also. When data is ingested into dedicated SQL pool, the data is sharded into **distributions** to optimize the performance of the system. You can choose which sharding pattern to use to distribute the data when you define the table. These sharding patterns are supported:
+Serverless SQL pool allows you to query your data lake files, while dedicated SQL pool allows you to query and ingest data from your data lake files. When data is ingested into dedicated SQL pool, the data is sharded into **distributions** to optimize the performance of the system. You can choose which sharding pattern to use to distribute the data when you define the table. These sharding patterns are supported:
* Hash * Round Robin
A round-robin distributed table distributes data evenly across the table but wit
## Replicated tables A replicated table provides the fastest query performance for small tables.
-A table that is replicated caches a full copy of the table on each compute node. Consequently, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables. Extra storage is required and there is additional overhead that is incurred when writing data, which make large tables impractical.
+A table that is replicated caches a full copy of the table on each compute node. So, replicating a table removes the need to transfer data among compute nodes before a join or aggregation. Replicated tables are best utilized with small tables. Extra storage is required, and there is additional overhead incurred when writing data, which makes large tables impractical.
The diagram below shows a replicated table that is cached on the first distribution on each compute node.
The diagram below shows a replicated table that is cached on the first distribut
## Next steps
-Now that you know a bit about Synapse SQL, learn how to quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md) and [load sample data](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md) (./sql-data-warehouse-load-sample-databases.md). Or you start [using serverless SQL pool](../quickstart-sql-on-demand.md). If you are new to Azure, you may find the [Azure glossary](../../azure-glossary-cloud-terminology.md) helpful as you encounter new terminology.
+Now that you know a bit about Synapse SQL, learn how to quickly [create a dedicated SQL pool](../quickstart-create-sql-pool-portal.md) and [load sample data](../sql-data-warehouse/sql-data-warehouse-load-from-azure-blob-storage-with-polybase.md). Or start [using serverless SQL pool](../quickstart-sql-on-demand.md). If you are new to Azure, you may find the [Azure glossary](../../azure-glossary-cloud-terminology.md) helpful as you encounter new terminology.
virtual-machines Hc Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hc-series.md
HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
[Live Migration](maintenance-and-updates.md): Not Supported<br> [Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br>
-[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues) <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+ <br> | Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
virtual-machines Image Builder Permissions Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-permissions-powershell.md
Title: Configure Azure Image Builder Service permissions using PowerShell
description: Configure requirements for Azure VM Image Builder Service including permissions and privileges using PowerShell Previously updated : 03/02/2021 Last updated : 03/05/2021
# Configure Azure Image Builder Service permissions using PowerShell
-Azure Image Builder Service requires configuration of permissions and privileges prior to building an image. The following sections detail how to configure possible scenarios using PowerShell.
+When you register for Azure Image Builder (AIB), the AIB service is granted permission to create, manage, and delete a staging resource group (IT_*), and it has rights to add to that resource group the resources that are required for the image build. This is done through an AIB Service Principal Name (SPN) that is made available in your subscription during a successful registration.
+
+To allow Azure VM Image Builder to distribute images to either managed images or to a Shared Image Gallery, you need to create an Azure user-assigned identity that has permissions to read and write images. If you are accessing Azure Storage, the identity also needs permissions to read private or public containers.
+
+You must set up permissions and privileges prior to building an image. The following sections detail how to configure possible scenarios by using PowerShell.
> [!IMPORTANT]
> Azure Image Builder is currently in public preview.
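As a rough Azure CLI sketch of that identity setup (the article itself uses PowerShell; the resource group, identity name, and subscription ID here are placeholders, and the broad Contributor role is used only for brevity where a narrower custom role is preferable):

```bash
# Create a user-assigned identity for Image Builder to use when reading and writing images,
# then grant it a role on the resource group that holds the image resources.
az identity create --resource-group myImageRg --name aibBuildUserId
principalId=$(az identity show --resource-group myImageRg --name aibBuildUserId --query principalId --output tsv)

az role assignment create \
  --assignee "$principalId" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myImageRg"
```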
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/n-series-driver-setup.md
- Title: Azure N-series GPU driver setup for Linux
-description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in Azure
----- Previously updated : 01/09/2019---
-# Install NVIDIA GPU drivers on N-series VMs running Linux
-
-To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The [NVIDIA GPU Driver Extension](../extensions/hpccompute-gpu-linux.md) installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](../extensions/hpccompute-gpu-linux.md) for supported distributions and deployment steps.
-
-If you choose to install NVIDIA GPU drivers manually, this article provides supported distributions, drivers, and installation and verification steps. Manual driver setup information is also available for [Windows VMs](../windows/n-series-driver-setup.md).
-
-For N-series VM specs, storage capacities, and disk details, see [GPU Linux VM sizes](../sizes-gpu.md?toc=/azure/virtual-machines/linux/toc.json).
--
-## Install CUDA drivers on N-series VMs
-
-Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
--
-C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more information, see the [CUDA Installation Guide](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html).
-
-To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU, run the following command:
-
-```bash
-lspci | grep -i NVIDIA
-```
-You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
-
-![lspci command output](./media/n-series-driver-setup/lspci.png)
-
-Then run installation commands specific for your distribution.
-
-### Ubuntu
-
-1. Download and install the CUDA drivers from the NVIDIA website. For example, for Ubuntu 16.04 LTS:
- ```bash
- CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
-
- wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
-
- sudo dpkg -i /tmp/${CUDA_REPO_PKG}
-
- sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
-
- rm -f /tmp/${CUDA_REPO_PKG}
-
- sudo apt-get update
-
- sudo apt-get install cuda-drivers
-
- ```
-
- The installation can take several minutes.
-
-2. To optionally install the complete CUDA toolkit, type:
-
- ```bash
- sudo apt-get install cuda
- ```
-
-3. Reboot the VM and proceed to verify the installation.
-
-#### CUDA driver updates
-
-We recommend that you periodically update CUDA drivers after deployment.
-
-```bash
-sudo apt-get update
-
-sudo apt-get upgrade -y
-
-sudo apt-get dist-upgrade -y
-
-sudo apt-get install cuda-drivers
-
-sudo reboot
-```
-
-### CentOS or Red Hat Enterprise Linux
-
-1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
-
- ```
- sudo yum install kernel kernel-tools kernel-headers kernel-devel
-
- sudo reboot
-
-2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS is not required.
-
-Skip this step if you plan to use CentOS 7.8(or higher) as LIS is no longer required for these versions.
-
-Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Please refer to the [Linux Integration Services documentation] (https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
-
-Skip this step if you are not using the Kernel versions listed above.
-
- ```bash
- wget https://aka.ms/lis
-
- tar xvzf lis
-
- cd LISISO
-
- sudo ./install.sh
-
- sudo reboot
- ```
-
-3. Reconnect to the VM and continue installation with the following commands:
-
- ```bash
- sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
-
- sudo yum install dkms
-
- CUDA_REPO_PKG=cuda-repo-rhel7-10.0.130-1.x86_64.rpm
-
- wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
-
- sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
-
- rm -f /tmp/${CUDA_REPO_PKG}
-
- sudo yum install cuda-drivers
- ```
-
- The installation can take several minutes.
-
-4. To optionally install the complete CUDA toolkit, type:
-
- ```bash
- sudo yum install cuda
- ```
- > [!NOTE]
- > If you see an error message related to missing packages like vulkan-filesystem then you may need to edit /etc/yum.repos.d/rh-cloud , look for optional-rpms and set enabled to 1
- >
-
-5. Reboot the VM and proceed to verify the installation.
-
-### Verify driver installation
-
-To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
-
-If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
-
-![NVIDIA device status](./media/n-series-driver-setup/smi.png)
-
-## RDMA network connectivity
-
-RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version. Additional requirements follow:
-
-### Distributions
-
-Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace that supports RDMA connectivity on N-series VMs:
-
-* **Ubuntu 16.04 LTS** - Configure RDMA drivers on the VM and register with Intel to download Intel MPI:
-
- [!INCLUDE [virtual-machines-common-ubuntu-rdma](../../../includes/virtual-machines-common-ubuntu-rdma.md)]
-
-* **CentOS-based 7.4 HPC** - RDMA drivers and Intel MPI 5.1 are installed on the VM.
-
-* **CentOS-based HPC** - CentOS-HPC 7.6 and later (for SKUs where InfiniBand is supported over SR-IOV). These images have Mellanox OFED and MPI libraries pre-installed.
-
-> [!NOTE]
-> CX3-Pro cards are supported only through LTS versions of Mellanox OFED. Use LTS Mellanox OFED version (4.9-0.1.7.0) on the N-series VMs with ConnectX3-Pro cards. For more information, see [Linux Drivers](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed).
->
-> Also, some of the latest Azure Marketplace HPC images have Mellanox OFED 5.1 and later, which don't support ConnectX3-Pro cards. Check the Mellanox OFED version in the HPC image before using it on VMs with ConnectX3-Pro cards.
->
-> The following images are the latest CentOS-HPC images that support ConnectX3-Pro cards:
->
-> - OpenLogic:CentOS-HPC:7.6:7.6.2020062900
-> - OpenLogic:CentOS-HPC:7_6gen2:7.6.2020062901
-> - OpenLogic:CentOS-HPC:7.7:7.7.2020062600
-> - OpenLogic:CentOS-HPC:7_7-gen2:7.7.2020062601
-> - OpenLogic:CentOS-HPC:8_1:8.1.2020062400
-> - OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401
->
-
-## Install GRID drivers on NV or NVv3-series VMs
-
-To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.
-
-### Ubuntu
-
-1. Run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
-
-2. Install updates.
-
- ```bash
- sudo apt-get update
-
- sudo apt-get upgrade -y
-
- sudo apt-get dist-upgrade -y
-
- sudo apt-get install build-essential ubuntu-desktop -y
-
- sudo apt-get install linux-azure -y
- ```
-3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv2 VMs.) To do this, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
-
- ```
- blacklist nouveau
-
- blacklist lbm-nouveau
- ```
--
-4. Reboot the VM and reconnect. Exit X server:
-
- ```bash
- sudo systemctl stop lightdm.service
- ```
-
-5. Download and install the GRID driver:
-
- ```bash
- wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
-
- chmod +x NVIDIA-Linux-x86_64-grid.run
-
- sudo ./NVIDIA-Linux-x86_64-grid.run
- ```
-
-6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
-
-7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
-
- ```bash
- sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
- ```
-
-8. Add the following to `/etc/nvidia/gridd.conf`:
-
- ```
- IgnoreSP=FALSE
- EnableUI=FALSE
- ```
-
-9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
-
- ```
- FeatureType=0
- ```
-10. Reboot the VM and proceed to verify the installation.
--
-### CentOS or Red Hat Enterprise Linux
-
-1. Update the kernel and DKMS (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
-
- ```bash
- sudo yum update
-
- sudo yum install kernel-devel
-
- sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
-
- sudo yum install dkms
-
- sudo yum install hyperv-daemons
- ```
-
-2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NV3 VMs.) To do this, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
-
- ```
- blacklist nouveau
-
- blacklist lbm-nouveau
- ```
-
-3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS is not required.
-
-Skip this step is you are using CentOS/RHEL 7.8 and above.
-
- ```bash
- wget https://aka.ms/lis
-
- tar xvzf lis
-
- cd LISISO
-
- sudo ./install.sh
-
- sudo reboot
-
- ```
-
-4. Reconnect to the VM and run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
-
-5. Download and install the GRID driver:
-
- ```bash
- wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
-
- chmod +x NVIDIA-Linux-x86_64-grid.run
-
- sudo ./NVIDIA-Linux-x86_64-grid.run
- ```
-6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
-
-7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
-
- ```bash
- sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
- ```
-
-8. Add the following to `/etc/nvidia/gridd.conf`:
-
- ```
- IgnoreSP=FALSE
- EnableUI=FALSE
- ```
-9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
-
- ```
- FeatureType=0
- ```
-10. Reboot the VM and proceed to verify the installation.
--
-### Verify driver installation
--
-To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
-
-If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
-
-![Screenshot that shows the output when the GPU device state is queried.](./media/n-series-driver-setup/smi-nv.png)
-
-
-### X11 server
-If you need an X11 server for remote connections to an NV or NVv2 VM, [x11vnc](http://www.karlrunge.com/x11vnc/) is recommended because it allows hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration file (usually, `etc/X11/xorg.conf`). Add a `"Device"` section similar to the following:
-
-```
-Section "Device"
- Identifier "Device0"
- Driver "nvidia"
- VendorName "NVIDIA Corporation"
- BoardName "Tesla M60"
- BusID "PCI:0@your-BusID:0:0"
-EndSection
-```
-
-Additionally, update your `"Screen"` section to use this device.
-
-The decimal BusID can be found by running
-
-```bash
-nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'
-```
-
-The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named `busidupdate.sh` (or another name you choose) with contents similar to the following:
-
-```bash
-#!/bin/bash
-XCONFIG="/etc/X11/xorg.conf"
-OLDBUSID=`awk '/BusID/{gsub(/"/, "", $2); print $2}' ${XCONFIG}`
-NEWBUSID=`nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'`
-
-if [[ "${OLDBUSID}" == "${NEWBUSID}" ]] ; then
- echo "NVIDIA BUSID not changed - nothing to do"
-else
- echo "NVIDIA BUSID changed from \"${OLDBUSID}\" to \"${NEWBUSID}\": Updating ${XCONFIG}"
- sed -e 's|BusID.*|BusID '\"${NEWBUSID}\"'|' -i ${XCONFIG}
-fi
-```
-
-Then, create an entry for your update script in `/etc/rc.d/rc3.d` so the script is invoked as root on boot.
-
-## Troubleshooting
-
-* You can set persistence mode using `nvidia-smi` so the output of the command is faster when you need to query cards. To set persistence mode, execute `nvidia-smi -pm 1`. Note that if the VM is restarted, the mode setting goes away. You can always script the mode setting to execute upon startup.
-* If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, [reinstall the RDMA drivers](#rdma-network-connectivity) to reestablish that connectivity.
-* If a certain CentOS/RHEL OS version (or kernel) is not supported for LIS, an error "Unsupported kernel version" is thrown. Please report this error along with the OS and kernel versions.
-
-## Next steps
-
-* To capture a Linux VM image with your installed NVIDIA drivers, see [How to generalize and capture a Linux virtual machine](capture-image.md).
+
+ Title: Azure N-series GPU driver setup for Linux
+description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in Azure
+++++ Last updated : 11/11/2019+++
+# Install NVIDIA GPU drivers on N-series VMs running Linux
+
+To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The [NVIDIA GPU Driver Extension](../extensions/hpccompute-gpu-linux.md) installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](../extensions/hpccompute-gpu-linux.md) for supported distributions and deployment steps.
+
+If you choose to install NVIDIA GPU drivers manually, this article provides supported distributions, drivers, and installation and verification steps. Manual driver setup information is also available for [Windows VMs](../windows/n-series-driver-setup.md).
+
+For N-series VM specs, storage capacities, and disk details, see [GPU Linux VM sizes](../sizes-gpu.md?toc=/azure/virtual-machines/linux/toc.json).
++
+## Install CUDA drivers on N-series VMs
+
+Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
+
+C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more information, see the [CUDA Installation Guide](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html).
+
+To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU, run the following command:
+
+```bash
+lspci | grep -i NVIDIA
+```
+You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
+
+![lspci command output](./media/n-series-driver-setup/lspci.png)
+
+`lspci` lists the PCIe devices on the VM, including the InfiniBand NIC and GPUs, if any. If `lspci` doesn't return successfully, you may need to install LIS on CentOS/RHEL (instructions below).
+Then run installation commands specific for your distribution.
+
+### Ubuntu
+
+1. Download and install the CUDA drivers from the NVIDIA website. For example, for Ubuntu 16.04 LTS:
+ ```bash
+ CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
+ wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
+
+ sudo dpkg -i /tmp/${CUDA_REPO_PKG}
+ sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
+ rm -f /tmp/${CUDA_REPO_PKG}
+
+ sudo apt-get update
+ sudo apt-get install cuda-drivers
+ ```
+
+ The installation can take several minutes.
+
+2. To optionally install the complete CUDA toolkit, type:
+
+ ```bash
+ sudo apt-get install cuda
+ ```
+
+3. Reboot the VM and proceed to verify the installation.
+
+#### CUDA driver updates
+
+We recommend that you periodically update CUDA drivers after deployment.
+
+```bash
+sudo apt-get update
+sudo apt-get upgrade -y
+sudo apt-get dist-upgrade -y
+sudo apt-get install cuda-drivers
+
+sudo reboot
+```
+
+### CentOS or Red Hat Enterprise Linux
+
+1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
+
+ ```
+ sudo yum install kernel kernel-tools kernel-headers kernel-devel
+ sudo reboot
+ ```
+
+2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.
+
+ Please note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Please refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
+ Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.
+
+ ```bash
+ wget https://aka.ms/lis
+ tar xvzf lis
+ cd LISISO
+
+ sudo ./install.sh
+ sudo reboot
+ ```
+
+3. Reconnect to the VM and continue installation with the following commands:
+
+ ```bash
+ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+ sudo yum install dkms
+
+ CUDA_REPO_PKG=cuda-repo-rhel7-10.0.130-1.x86_64.rpm
+ wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
+
+ sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
+ rm -f /tmp/${CUDA_REPO_PKG}
+
+ sudo yum install cuda-drivers
+ ```
+
+ The installation can take several minutes.
+
+4. To optionally install the complete CUDA toolkit, type:
+
+ ```bash
+ sudo yum install cuda
+ ```
+ > [!NOTE]
+ > If you see an error message related to missing packages like vulkan-filesystem, you might need to edit /etc/yum.repos.d/rh-cloud, look for optional-rpms, and set enabled to 1.
+ >
+
+5. Reboot the VM and proceed to verify the installation.
+
+### Verify driver installation
+
+To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
+
+If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
+
+![NVIDIA device status](./media/n-series-driver-setup/smi.png)
+
+## RDMA network connectivity
+
+RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version. Additional requirements follow:
+
+### Distributions
+
+Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace that supports RDMA connectivity on N-series VMs:
+
+* **Ubuntu 16.04 LTS** - Configure RDMA drivers on the VM and register with Intel to download Intel MPI:
+
+ [!INCLUDE [virtual-machines-common-ubuntu-rdma](../../../includes/virtual-machines-common-ubuntu-rdma.md)]
+
+* **CentOS-based 7.4 HPC** - RDMA drivers and Intel MPI 5.1 are installed on the VM.
+
+* **CentOS-based HPC** - CentOS-HPC 7.6 and later (for SKUs where InfiniBand is supported over SR-IOV). These images have Mellanox OFED and MPI libraries pre-installed.
+
+> [!NOTE]
+> CX3-Pro cards are supported only through LTS versions of Mellanox OFED. Use LTS Mellanox OFED version (4.9-0.1.7.0) on the N-series VMs with ConnectX3-Pro cards. For more information, see [Linux Drivers](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed).
+>
+> Also, some of the latest Azure Marketplace HPC images have Mellanox OFED 5.1 and later, which don't support ConnectX3-Pro cards. Check the Mellanox OFED version in the HPC image before using it on VMs with ConnectX3-Pro cards.
+>
+> The following images are the latest CentOS-HPC images that support ConnectX3-Pro cards:
+>
+> - OpenLogic:CentOS-HPC:7.6:7.6.2020062900
+> - OpenLogic:CentOS-HPC:7_6gen2:7.6.2020062901
+> - OpenLogic:CentOS-HPC:7.7:7.7.2020062600
+> - OpenLogic:CentOS-HPC:7_7-gen2:7.7.2020062601
+> - OpenLogic:CentOS-HPC:8_1:8.1.2020062400
+> - OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401
+>
+
+## Install GRID drivers on NV or NVv3-series VMs
+
+To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.
+
+### Ubuntu
+
+1. Run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
+
+2. Install updates.
+
+ ```bash
+ sudo apt-get update
+ sudo apt-get upgrade -y
+ sudo apt-get dist-upgrade -y
+ sudo apt-get install build-essential ubuntu-desktop -y
+ sudo apt-get install linux-azure -y
+ ```
+3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv2 VMs.) To do this, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
+
+ ```
+ blacklist nouveau
+ blacklist lbm-nouveau
+ ```
++
+4. Reboot the VM and reconnect. Exit X server:
+
+ ```bash
+ sudo systemctl stop lightdm.service
+ ```
+
+5. Download and install the GRID driver:
+
+ ```bash
+ wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
+ chmod +x NVIDIA-Linux-x86_64-grid.run
+ sudo ./NVIDIA-Linux-x86_64-grid.run
+ ```
+
+6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
+
+7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
+
+ ```bash
+ sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
+ ```
+
+8. Add the following to `/etc/nvidia/gridd.conf`:
+
+ ```
+ IgnoreSP=FALSE
+ EnableUI=FALSE
+ ```
+
+9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
+
+ ```
+ FeatureType=0
+ ```
+10. Reboot the VM and proceed to verify the installation.
++
+### CentOS or Red Hat Enterprise Linux
+
+1. Update the kernel and DKMS (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
+
+ ```bash
+ sudo yum update
+ sudo yum install kernel-devel
+ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
+ sudo yum install dkms
+ sudo yum install hyperv-daemons
+ ```
+
+2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NV3 VMs.) To do this, create a file in `/etc/modprobe.d` named `nouveau.conf` with the following contents:
+
+ ```
+ blacklist nouveau
+ blacklist lbm-nouveau
+ ```
+
+3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected (and documented above), installing LIS is not required.
+
+ Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.
+
+ ```bash
+ wget https://aka.ms/lis
+ tar xvzf lis
+ cd LISISO
+
+ sudo ./install.sh
+ sudo reboot
+
+ ```
+
+4. Reconnect to the VM and run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
+
+5. Download and install the GRID driver:
+
+ ```bash
+ wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
+ chmod +x NVIDIA-Linux-x86_64-grid.run
+
+ sudo ./NVIDIA-Linux-x86_64-grid.run
+ ```
+6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
+
+7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
+
+ ```bash
+ sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
+ ```
+
+8. Add the following to `/etc/nvidia/gridd.conf`:
+
+ ```
+ IgnoreSP=FALSE
+ EnableUI=FALSE
+ ```
+9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
+
+ ```
+ FeatureType=0
+ ```
+10. Reboot the VM and proceed to verify the installation.
++
+### Verify driver installation
++
+To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
+
+If the driver is installed, you will see output similar to the following. Note that **GPU-Util** shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
+
+![Screenshot that shows the output when the GPU device state is queried.](./media/n-series-driver-setup/smi-nv.png)
+
+
+### X11 server
+If you need an X11 server for remote connections to an NV or NVv2 VM, [x11vnc](http://www.karlrunge.com/x11vnc/) is recommended because it allows hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration file (usually, `/etc/X11/xorg.conf`). Add a `"Device"` section similar to the following:
+
+```
+Section "Device"
+ Identifier "Device0"
+ Driver "nvidia"
+ VendorName "NVIDIA Corporation"
+ BoardName "Tesla M60"
+ BusID "PCI:0@your-BusID:0:0"
+EndSection
+```
+
+Additionally, update your `"Screen"` section to use this device.
+
+The decimal BusID can be found by running
+
+```bash
+nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'
+```
+
+The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named `busidupdate.sh` (or another name you choose) with contents similar to the following:
+
+```bash
+#!/bin/bash
+XCONFIG="/etc/X11/xorg.conf"
+OLDBUSID=`awk '/BusID/{gsub(/"/, "", $2); print $2}' ${XCONFIG}`
+NEWBUSID=`nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'`
+
+if [[ "${OLDBUSID}" == "${NEWBUSID}" ]] ; then
+ echo "NVIDIA BUSID not changed - nothing to do"
+else
+ echo "NVIDIA BUSID changed from \"${OLDBUSID}\" to \"${NEWBUSID}\": Updating ${XCONFIG}"
+ sed -e 's|BusID.*|BusID '\"${NEWBUSID}\"'|' -i ${XCONFIG}
+fi
+```
+
+Then, create an entry for your update script in `/etc/rc.d/rc3.d` so the script is invoked as root on boot.
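+
+For example, a sketch (the `/usr/local/bin` location and the `S99` prefix are assumptions; adapt them to your init setup):
+
+```bash
+# Copy the script to a system location and make it executable
+sudo cp busidupdate.sh /usr/local/bin/busidupdate.sh
+sudo chmod +x /usr/local/bin/busidupdate.sh
+
+# Link it into runlevel 3 so it runs late in the boot sequence
+sudo ln -s /usr/local/bin/busidupdate.sh /etc/rc.d/rc3.d/S99busidupdate
+```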
+
+## Troubleshooting
+
+* You can set persistence mode using `nvidia-smi` so the output of the command is faster when you need to query cards. To set persistence mode, execute `nvidia-smi -pm 1`. The mode setting goes away if the VM is restarted, so you can script the mode setting to execute at startup (one approach is sketched after this list).
+* If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, [reinstall the RDMA drivers](#rdma-network-connectivity) to reestablish that connectivity.
+* During installation of LIS, if a certain CentOS/RHEL OS version (or kernel) is not supported, an "Unsupported kernel version" error is thrown. Report this error along with the OS and kernel versions.
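+
+As an example, here's a sketch that re-enables persistence mode at each boot by adding an `@reboot` entry to the root crontab (assumes `cron` is available; a systemd unit or rc script would also work):
+
+```bash
+# Append an @reboot entry that turns persistence mode back on after every restart.
+# The path to nvidia-smi may differ on your image; check it with: which nvidia-smi
+sudo bash -c '(crontab -l 2>/dev/null; echo "@reboot /usr/bin/nvidia-smi -pm 1") | crontab -'
+```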
+
+## Next steps
+
+* To capture a Linux VM image with your installed NVIDIA drivers, see [How to generalize and capture a Linux virtual machine](capture-image.md).
virtual-machines Security Controls Policy Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/security-controls-policy-image-builder.md
Title: Azure Policy Regulatory Compliance controls for Azure Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
virtual-machines Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
virtual-network Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/09/2021 Last updated : 03/05/2021
virtual-network Tutorial Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-filter-network-traffic.md
Title: Filter network traffic - tutorial - Azure Portal
+ Title: Filter network traffic - tutorial - Azure portal
description: In this tutorial, you learn how to filter network traffic to a subnet, with a network security group, using the Azure portal.
-tags: azure-resource-manager
Customer intent: I want to filter network traffic to virtual machines that perform similar functions, such as web servers. - Previously updated : 12/13/2018 Last updated : 03/06/2021 # Tutorial: Filter network traffic with a network security group using the Azure portal
-You can filter network traffic inbound to and outbound from a virtual network subnet with a network security group. Network security groups contain security rules that filter network traffic by IP address, port, and protocol. Security rules are applied to resources deployed in a subnet. In this tutorial, you learn how to:
+You can use a network security group to filter network traffic inbound to and outbound from a virtual network subnet.
+
+Network security groups contain security rules that filter network traffic by IP address, port, and protocol. Security rules are applied to resources deployed in a subnet.
+
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a network security group and security rules
You can filter network traffic inbound to and outbound from a virtual network su
> * Deploy virtual machines (VM) into a subnet > * Test traffic filters
-If you prefer, you can complete this tutorial using the [Azure CLI](tutorial-filter-network-traffic-cli.md) or [PowerShell](tutorial-filter-network-traffic-powershell.md).
- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+- An Azure subscription.
+ ## Sign in to Azure Sign in to the Azure portal at https://portal.azure.com. ## Create a virtual network
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Networking**, and then select **Virtual network**.
-3. Enter, or select, the following information, accept the defaults for the remaining settings, and then select **Create**:
+1. Select **Create a resource** in the upper left-hand corner of the portal.
+
+2. In the search box, enter **Virtual Network**. Select **Virtual Network** in the search results.
+
+3. In the **Virtual Network** page, select **Create**.
- | Setting | Value |
- | | |
- | Name | myVirtualNetwork |
- | Address space | 10.0.0.0/16 |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and enter *myResourceGroup*. |
- | Location | Select **East US**. |
- | Subnet- Name | mySubnet |
- | Subnet - Address range | 10.0.0.0/24 |
+4. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet**. |
+ | Region | Select **(US) East US**. |
+
+5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
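+
+If you'd rather script this step, a rough Azure CLI equivalent (a sketch, not part of this portal tutorial; the address space and **default** subnet match the values used later in this tutorial) might be:
+
+```bash
+# Create the resource group and the virtual network with a default subnet
+az group create --name myResourceGroup --location eastus
+
+az network vnet create \
+  --resource-group myResourceGroup \
+  --name myVNet \
+  --address-prefixes 10.0.0.0/16 \
+  --subnet-name default \
+  --subnet-prefixes 10.0.0.0/24
+```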
## Create application security groups An application security group enables you to group together servers with similar functions, such as web servers.
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. In the **Search the Marketplace** box, enter *Application security group*. When **Application security group** appears in the search results, select it, select **Application security group** again under **Everything**, and then select **Create**.
-3. Enter, or select, the following information, and then select **Create**:
+1. Select **Create a resource** in the upper left-hand corner of the portal.
+
+2. In the search box, enter **Application security group**. Select **Application security group** in the search results.
+
+3. In the **Application security group** page, select **Create**.
+
+4. In **Create an application security group**, enter or select this information in the **Basics** tab:
- | Setting | Value |
- | | |
- | Name | myAsgWebServers |
- | Subscription | Select your subscription. |
- | Resource group | Select **Use existing** and then select **myResourceGroup**. |
- | Location | East US |
+ | Setting | Value |
+ | - | -- |
+ |**Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myAsgWebServers**. |
+ | Region | Select **(US) East US**. |
-4. Complete step 3 again, specifying the following values:
+5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
- | Setting | Value |
- | | |
- | Name | myAsgMgmtServers |
- | Subscription | Select your subscription. |
- | Resource group | Select **Use existing** and then select **myResourceGroup**. |
- | Location | East US |
+6. Select **Create**.
+
+7. Complete step 4 again, specifying the following values:
+
+ | Setting | Value |
+ | - | -- |
+ |**Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myAsgMgmtServers**. |
+ | Region | Select **(US) East US**. |
+
+8. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+9. Select **Create**.
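+
+If you prefer the command line, a rough Azure CLI sketch that creates both application security groups (names match the portal steps above) might be:
+
+```bash
+az network asg create --resource-group myResourceGroup --name myAsgWebServers --location eastus
+az network asg create --resource-group myResourceGroup --name myAsgMgmtServers --location eastus
+```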
## Create a network security group
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Networking**, and then select **Network security group**.
-3. Enter, or select, the following information, and then select **Create**:
+A network security group secures network traffic in your virtual network.
+
+1. Select **Create a resource** in the upper left-hand corner of the portal.
+
+2. In the search box, enter **Network security group**. Select **Network security group** in the search results.
- |Setting|Value|
- |||
- |Name|myNsg|
- |Subscription| Select your subscription.|
- |Resource group | Select **Use existing** and then select *myResourceGroup*.|
- |Location|East US|
+3. In the **Network security group** page, select **Create**.
+
+4. In **Create network security group**, enter or select this information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myNSG**. |
+ | Location | Select **(US) East US**. |
+
+5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
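+
+A rough Azure CLI equivalent of this step (a sketch; the name matches the portal steps above) might be:
+
+```bash
+az network nsg create --resource-group myResourceGroup --name myNSG --location eastus
+```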
## Associate network security group to subnet
-1. In the *Search resources, services, and docs* box at the top of the portal, begin typing *myNsg*. When **myNsg** appears in the search results, select it.
-2. Under **SETTINGS**, select **Subnets** and then select **+ Associate**, as shown in the following picture:
+In this section, we'll associate the network security group with the subnet of the virtual network we created earlier.
- ![Associate NSG to subnet](./media/tutorial-filter-network-traffic/associate-nsg-subnet.png)
+1. In the **Search resources, services, and docs** box at the top of the portal, begin typing **myNSG**. When **myNSG** appears in the search results, select it.
-3. Under **Associate subnet**, select **Virtual network** and then select **myVirtualNetwork**. Select **Subnet**, select **mySubnet**, and then select **OK**.
+2. In the overview page of **myNSG**, select **Subnets** in **Settings**.
-## Create security rules
+3. In the **Settings** page, select **Associate**:
+
+ :::image type="content" source="./media/tutorial-filter-network-traffic/associate-nsg-subnet.png" alt-text="Associate NSG to subnet." border="true":::
-1. Under **SETTINGS**, select **Inbound security rules** and then select **+ Add**, as shown in the following picture:
+4. Under **Associate subnet**, select **Virtual network** and then select **myVNet**.
- ![Add an inbound security rule](./media/tutorial-filter-network-traffic/add-inbound-rule.png)
+5. Select **Subnet**, select **default**, and then select **OK**.
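+
+If you're scripting the tutorial instead, a sketch of the same association with the Azure CLI (assuming the VNet and NSG names used above) might be:
+
+```bash
+# Attach myNSG to the default subnet of myVNet
+az network vnet subnet update \
+  --resource-group myResourceGroup \
+  --vnet-name myVNet \
+  --name default \
+  --network-security-group myNSG
+```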
-2. Create a security rule that allows ports 80 and 443 to the **myAsgWebServers** application security group. Under **Add inbound security rule**, enter, or select the following values, accept the remaining defaults, and then select **Add**:
+## Create security rules
- | Setting | Value |
- | | |
- | Destination | Select **Application security group**, and then select **myAsgWebServers** for **Application security group**. |
- | Destination port ranges | Enter 80,443 |
- | Protocol | Select TCP |
- | Name | Allow-Web-All |
+1. In **Settings** of **myNSG**, select **Inbound security rules**.
-3. Complete step 2 again, using the following values:
+2. In **Inbound security rules**, select **+ Add**:
- | Setting | Value |
- | | |
- | Destination | Select **Application security group**, and then select **myAsgMgmtServers** for **Application security group**. |
- | Destination port ranges | Enter 3389 |
- | Protocol | Select TCP |
- | Priority | Enter 110 |
- | Name | Allow-RDP-All |
+ :::image type="content" source="./media/tutorial-filter-network-traffic/add-inbound-rule.png" alt-text="Add inbound security rule." border="true":::
- In this tutorial, RDP (port 3389) is exposed to the internet for the VM that is assigned to the *myAsgMgmtServers* application security group. For production environments, instead of exposing port 3389 to the internet, it's recommended that you connect to Azure resources that you want to manage using a VPN or private network connection.
+3. Create a security rule that allows ports 80 and 443 to the **myAsgWebServers** application security group. In **Add inbound security rule**, enter or select the following information:
-Once you've completed steps 1-3, review the rules you created. Your list should look like the list in the following picture:
+ | Setting | Value |
+ | - | -- |
+ | Source | Leave the default of **Any**. |
+ | Source port ranges | Leave the default of **(*)** |
+ | Destination | Select **Application security group**. |
+ | Destination application security group | Select **myAsgWebServers**. |
+ | Service | Leave the default of **Custom**. |
+ | Destination port ranges | Enter **80,443**. |
+ | Protocol | Select **TCP**. |
+ | Action | Leave the default of **Allow**. |
+ | Priority | Leave the default of **100**. |
+ | Name | Enter **Allow-Web-All**. |
-![Security rules](./media/tutorial-filter-network-traffic/security-rules.png)
+ :::image type="content" source="./media/tutorial-filter-network-traffic/inbound-security-rule.png" alt-text="Inbound security rule." border="true":::
+
+4. Complete steps 2 and 3 again, using the following values:
+
+ | Setting | Value |
+ | - | -- |
+ | Source | Leave the default of **Any**. |
+ | Source port ranges | Leave the default of **(*)** |
+ | Destination | Select **Application security group**. |
+ | Destination application security group | Select **myAsgMgmtServers**. |
+ | Service | Leave the default of **Custom**. |
+ | Destination port ranges | Enter **3389**. |
+ | Protocol | Select **TCP**. |
+ | Action | Leave the default of **Allow**. |
+ | Priority | Leave the default of **110**. |
+ | Name | Enter **Allow-RDP-All**. |
+
+ > [!CAUTION]
+ > In this article, RDP (port 3389) is exposed to the internet for the VM that is assigned to the **myAsgMgmtServers** application security group.
+ >
+ > For production environments, instead of exposing port 3389 to the internet, it's recommended that you connect to Azure resources that you want to manage using a VPN, private network connection, or Azure Bastion.
+ >
+ > For more information on Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md).
+
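+
+If you prefer to script these rules, a rough Azure CLI sketch (rule names, ports, and priorities match the steps above; this isn't part of the portal tutorial) might look like the following:
+
+```bash
+# Allow ports 80 and 443 to the web servers ASG
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myNSG \
+  --name Allow-Web-All \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --destination-asgs myAsgWebServers \
+  --destination-port-ranges 80 443
+
+# Allow port 3389 to the management servers ASG
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myNSG \
+  --name Allow-RDP-All \
+  --priority 110 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --destination-asgs myAsgMgmtServers \
+  --destination-port-ranges 3389
+```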
+Once you've completed steps 1-4, review the rules you created. Your list should look like the list in the following example:
+ ## Create virtual machines
Create two VMs in the virtual network.
### Create the first VM
-1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Compute**, and then select **Windows Server 2016 Datacenter**.
-3. Enter, or select, the following information, and accept the defaults for the remaining settings:
+1. Select **Create a resource** in the upper left-hand corner of the portal.
+
+2. Select **Compute**, then select **Virtual machine**.
+
+3. In **Create a virtual machine**, enter or select this information in the **Basics** tab:
- |Setting|Value|
- |||
- |Subscription| Select your subscription.|
- |Resource group| Select **Use existing** and select **myResourceGroup**.|
- |Name|myVmWeb|
- |Location| Select **East US**.|
- |User name| Enter a user name of your choosing.|
- |Password| Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.md?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm).|
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVMWeb**. |
+ | Region | Select **(US) East US**. |
+ | Availability options | Leave the default of no redundancy required. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Select **Standard_D2s_V3**. |
+ | **Administrator account** | |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
-
+4. Select the **Networking** tab.
-4. Select a size for the VM and then select **Select**.
-5. Under **Networking**, select the following values, and accept the remaining defaults:
+5. In the **Networking** tab, enter or select the following information:
- |Setting|Value|
- |||
- |Virtual network |Select **myVirtualNetwork**.|
- |NIC network security group |Select **None**.|
-
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **default (10.0.0.0/24)**. |
+ | Public IP | Leave the default of a new public IP. |
+ | NIC network security group | Select **None**. |
-6. Select **Review + Create** at the bottom, left corner, select **Create** to start VM deployment.
+6. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+7. Select **Create**.
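+
+If you'd rather create the VM from the command line, a rough Azure CLI sketch follows (a sketch only; `<username>` and `<password>` are placeholders, and `--nsg ""` skips the NIC-level NSG because the subnet-level NSG applies):
+
+```bash
+az vm create \
+  --resource-group myResourceGroup \
+  --name myVMWeb \
+  --image Win2019Datacenter \
+  --size Standard_D2s_v3 \
+  --vnet-name myVNet \
+  --subnet default \
+  --nsg "" \
+  --admin-username <username> \
+  --admin-password <password>
+```
+
+If you're scripting, the second VM is the same command with `--name myVMMgmt`.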
### Create the second VM
-Complete steps 1-6 again, but in step 3, name the VM *myVmMgmt*. The VM takes a few minutes to deploy. Do not continue to the next step until the VM is deployed.
+Complete steps 1-7 again, but in step 3, name the VM **myVMMgmt**. The VM takes a few minutes to deploy.
+
+Don't continue to the next step until the VM is deployed.
## Associate network interfaces to an ASG
-When the portal created the VMs, it created a network interface for each VM, and attached the network interface to the VM. Add the network interface for each VM to one of the application security groups you created previously:
+When the portal created the VMs, it created a network interface for each VM, and attached the network interface to the VM.
+
+Add the network interface for each VM to one of the application security groups you created previously:
+
+1. In the **Search resources, services, and docs** box at the top of the portal, begin typing **myVMWeb**. When the **myVMWeb** virtual machine appears in the search results, select it.
+
+2. In **Settings**, select **Networking**.
+
+3. Select the **Application security groups** tab, then select **Configure the application security groups**.
-1. In the *Search resources, services, and docs* box at the top of the portal, begin typing *myVmWeb*. When the **myVmWeb** VM appears in the search results, select it.
-2. Under **SETTINGS**, select **Networking**. Select **Configure the application security groups**, select **myAsgWebServers** for **Application security groups**, and then select **Save**, as shown in the following picture:
+ :::image type="content" source="./media/tutorial-filter-network-traffic/configure-app-sec-groups.png" alt-text="Configure application security groups." border="true":::
- ![Associate to ASG](./media/tutorial-filter-network-traffic/associate-to-asg.png)
+4. In **Configure the application security groups**, select **myAsgWebServers**. Select **Save**.
-3. Complete steps 1 and 2 again, searching for the **myVmMgmt** VM and selecting the **myAsgMgmtServers** ASG.
+ :::image type="content" source="./media/tutorial-filter-network-traffic/select-asgs.png" alt-text="Select application security groups." border="true":::
+
+5. Complete steps 1 through 4 again, searching for the **myVMMgmt** virtual machine and selecting the **myAsgMgmtServers** ASG.
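+
+If you're scripting this step, the ASG is attached at the NIC's IP configuration. A rough Azure CLI sketch follows; the NIC and IP configuration names are placeholders (look up the real names first, for example with `az network nic list --resource-group myResourceGroup --output table`):
+
+```bash
+az network nic ip-config update \
+  --resource-group myResourceGroup \
+  --nic-name <myVMWeb-nic-name> \
+  --name <ip-config-name> \
+  --application-security-groups myAsgWebServers
+```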
## Test traffic filters
-1. Connect to the *myVmMgmt* VM. Enter *myVmMgmt* in the search box at the top of the portal. When **myVmMgmt** appears in the search results, select it. Select the **Connect** button.
+1. Connect to the **myVMMgmt** VM. Enter **myVMMgmt** in the search box at the top of the portal. When **myVMMgmt** appears in the search results, select it. Select the **Connect** button.
+ 2. Select **Download RDP file**.
-3. Open the downloaded rdp file and select **Connect**. Enter the user name and password you specified when creating the VM. You may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM.
+
+3. Open the downloaded rdp file and select **Connect**. Enter the user name and password you specified when creating the VM.
+ 4. Select **OK**.
-5. You may receive a certificate warning during the sign-in process. If you receive the warning, select **Yes** or **Continue**, to proceed with the connection.
- The connection succeeds, because port 3389 is allowed inbound from the internet to the *myAsgMgmtServers* application security group that the network interface attached to the *myVmMgmt* VM is in.
+5. You may receive a certificate warning during the connection process. If you receive the warning, select **Yes** or **Continue** to continue with the connection.
+
+ The connection succeeds, because port 3389 is allowed inbound from the internet to the **myAsgMgmtServers** application security group.
+
+ The network interface for **myVMMgmt** is associated with the **myAsgMgmtServers** application security group and allows the connection.
-6. Connect to the *myVmWeb* VM from the *myVmMgmt* VM by entering the following command in a PowerShell session:
+6. Open a PowerShell session on **myVMMgmt**. Connect to **myVMWeb** using the following example:
- ```
+ ```powershell
 mstsc /v:myVMWeb ```
- You are able to connect to the myVmWeb VM from the myVmMgmt VM because VMs in the same virtual network can communicate with each other over any port, by default. You can't however, create a remote desktop connection to the *myVmWeb* VM from the internet, because the security rule for the *myAsgWebServers* doesn't allow port 3389 inbound from the internet and inbound traffic from the Internet is denied to all resources, by default.
+   The RDP connection from **myVMMgmt** to **myVMWeb** succeeds because virtual machines in the same virtual network can communicate with each other over any port by default.
+
+ You can't create an RDP connection to the **myVMWeb** virtual machine from the internet. The security rule for the **myAsgWebServers** prevents connections to port 3389 inbound from the internet. Inbound traffic from the Internet is denied to all resources by default.
-7. To install Microsoft IIS on the *myVmWeb* VM, enter the following command from a PowerShell session on the *myVmWeb* VM:
+7. To install Microsoft IIS on the **myVMWeb** virtual machine, enter the following command from a PowerShell session on the **myVMWeb** virtual machine:
```powershell Install-WindowsFeature -name Web-Server -IncludeManagementTools ```
-8. After the IIS installation is complete, disconnect from the *myVmWeb* VM, which leaves you in the *myVmMgmt* VM remote desktop connection.
-9. Disconnect from the *myVmMgmt* VM.
-10. In the *Search resources, services, and docs* box at the top of the Azure portal, begin typing *myVmWeb* from your computer. When **myVmWeb** appears in the search results, select it. Note the **Public IP address** for your VM. The address shown in the following picture is 137.135.84.74, but your address is different:
+8. After the IIS installation is complete, disconnect from the **myVMWeb** virtual machine, which leaves you in the **myVMMgmt** virtual machine remote desktop connection.
- ![Public IP address](./media/tutorial-filter-network-traffic/public-ip-address.png)
-
-11. To confirm that you can access the *myVmWeb* web server from the internet, open an internet browser on your computer and browse to `http://<public-ip-address-from-previous-step>`. You see the IIS welcome screen, because port 80 is allowed inbound from the internet to the *myAsgWebServers* application security group that the network interface attached to the *myVmWeb* VM is in.
+9. Disconnect from the **myVMMgmt** VM.
+
+10. From your computer, in the **Search resources, services, and docs** box at the top of the Azure portal, begin typing **myVMWeb**. When **myVMWeb** appears in the search results, select it. Note the **Public IP address** for your VM. The address shown in the following example is 23.96.39.113, but your address is different:
+
+ :::image type="content" source="./media/tutorial-filter-network-traffic/public-ip-address.png" alt-text="Public IP address." border="true":::
+
+11. To confirm that you can access the **myVMWeb** web server from the internet, open an internet browser on your computer and browse to `http://<public-ip-address-from-previous-step>`.
+
+You see the IIS welcome screen, because port 80 is allowed inbound from the internet to the **myAsgWebServers** application security group.
+
+The network interface attached to **myVMWeb** is associated with the **myAsgWebServers** application security group and allows the connection.
## Clean up resources When no longer needed, delete the resource group and all of the resources it contains:
-1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
+1. Enter **myResourceGroup** in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
2. Select **Delete resource group**.
-3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+3. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
## Next steps
-In this tutorial, you created a network security group and associated it to a virtual network subnet. To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md) and [Manage a network security group](manage-network-security-group.md).
+In this tutorial, you:
+
+* Created a network security group and associated it to a virtual network subnet.
+* Created application security groups for web and management.
+* Created two virtual machines.
+* Tested the application security group network filtering.
+
+To learn more about network security groups, see [Network security group overview](./network-security-groups-overview.md) and [Manage a network security group](manage-network-security-group.md).
-Azure routes traffic between subnets by default. You may instead, choose to route traffic between subnets through a VM, serving as a firewall, for example. To learn how to create a route table, advance to the next tutorial.
+Azure routes traffic between subnets by default. You may instead choose to route traffic between subnets through a VM that serves as a firewall, for example.
+To learn how to create a route table, advance to the next tutorial.
> [!div class="nextstepaction"] > [Create a route table](./tutorial-create-route-table-portal.md)
virtual-wan Quickstart Route Shared Services Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/quickstart-route-shared-services-vnet-template.md
+
+ Title: 'Quickstart: Route to shared services using an ARM template'
+
+description: This quickstart shows you how to set up routes to access a shared service VNet with a workload that you want every VNet and Branch to access using an Azure Resource Manager template (ARM template).
++++ Last updated : 03/05/2021+++++
+# Quickstart: Route to shared services VNets using an ARM template
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to set up routes to access a shared service VNet with workloads that you want every VNet and Branch (VPN/ER/P2S) to access. Examples of these shared workloads might include virtual machines with services like domain controllers or file shares, or Azure services exposed internally through [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
++
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f301-virtual-wan-with-route-tables%2fazuredeploy.json)
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Public key certificate data is required for this configuration. Sample data is provided in the article. However, the sample data is provided only to satisfy the template requirements in order to create a P2S gateway. After the template completes and the resources are deployed, you must update this field with your own certificate data in order for the configuration to work. See [User VPN certificates](certificates-point-to-site.md#cer).
+
+## <a name="review"></a>Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/301-virtual-wan-with-route-tables). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://github.com/Azure/azure-quickstart-templates/blob/master/301-virtual-wan-with-route-tables/azuredeploy.json).
+
+In this quickstart, you'll create an Azure Virtual WAN multi-hub deployment, including all gateways and VNet connections. The list of input parameters has been purposely kept at a minimum. The IP addressing scheme can be changed by modifying the variables inside of the template. The scenario is explained further in the [Scenario: Shared services VNet](scenario-shared-services-vnet.md) article.
++
+This template creates a fully functional Azure Virtual WAN environment with the following resources:
+
+* 2 distinct hubs in different regions.
+* 4 Azure Virtual Networks (VNet).
+* 2 VNet connections for each VWan hub.
+* 1 Point-to-Site (P2S) VPN gateway in each hub.
+* 1 Site-to-Site (S2S) VPN gateway in each hub.
+* 1 ExpressRoute gateway in each hub.
+* Custom Route Tables RT_SHARED in each hub.
+* A label LBL_RT_SHARED to group RT_SHARED route tables.
+
+Multiple Azure resources are defined in the template:
+
+* [**Microsoft.Network/virtualwans**](/azure/templates/microsoft.network/virtualwans)
+* [**Microsoft.Network/virtualhubs**](/azure/templates/microsoft.network/virtualhubs)
+* [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks)
+* [**Microsoft.Network/hubvirtualnetworkconnections**](/azure/templates/microsoft.network/virtualhubs/hubvirtualnetworkconnections)
+* [**Microsoft.Network/hubroutetables**](/azure/templates/microsoft.network/virtualhubs/hubRouteTables)
+* [**Microsoft.Network/p2svpngateways**](/azure/templates/microsoft.network/p2svpngateways)
+* [**Microsoft.Network/vpngateways**](/azure/templates/microsoft.network/vpngateways)
+* [**Microsoft.Network/expressroutegateways**](/azure/templates/microsoft.network/expressroutegateways)
+* [**Microsoft.Network/vpnserverconfigurations**](/azure/templates/microsoft.network/vpnServerConfigurations)
+
+>[!NOTE]
+> This ARM template doesn't create the customer-side resources required for hybrid connectivity. After you deploy the template, you still need to create and configure the P2S VPN clients, VPN branches (Local Sites), and connect ExpressRoute circuits.
+>
+
+To find more templates, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular).
+
+## <a name="deploy"></a>Deploy the template
+
+To deploy this template properly, you must use the **Deploy to Azure** button and the Azure portal, rather than other methods, for the following reasons:
+
+* In order to create the P2S configuration, you need to upload the root certificate data. The data field does not accept the certificate data when using PowerShell or CLI.
+* This template does not work properly using Cloud Shell due to the certificate data upload.
+* Additionally, you can easily modify the template and parameters in the portal to accommodate IP address ranges and other values.
+
+1. Click **Deploy to Azure**.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f301-virtual-wan-with-route-tables%2fazuredeploy.json)
+1. To view the template, click **Edit template**. On this page, you can adjust some of the values such as address space or the name of certain resources. **Save** to save your changes, or **Discard**.
+1. On the template page, enter the values. For this template, the P2S public certificate data is required. If you are using this article as an exercise, you can use the following data from this .cer file as sample data for both hubs. Once the template runs and deployment is complete, in order to use the P2S configuration, you must replace this information with the public key [certificate data](certificates-point-to-site.md#cer) for your own deployment.
+
+ ```certificate-data
+ MIIC5zCCAc+gAwIBAgIQGxd3Av1q6LJDZ71e3TzqcTANBgkqhkiG9w0BAQsFADAW
+ MRQwEgYDVQQDDAtQMlNSb290Q2VydDAeFw0yMDExMDkyMjMxNTVaFw0yMTExMDky
+ MjUxNTVaMBYxFDASBgNVBAMMC1AyU1Jvb3RDZXJ0MIIBIjANBgkqhkiG9w0BAQEF
+ AAOCAQ8AMIIBCgKCAQEA33fFra/E0YmGuXLKmYcdvjsYpKwQmw8DjjDkbwhE9jcc
+ Dp50e7F1P6Rxo1T6Hm3dIhEji+0QkP4Ie0XPpw0eW77+RWUiG9XJxGqtJ3Q4tyRy
+ vBfsHORcqMlpV3VZOXIxrk+L/1sSm2xAc2QGuOqKaDNNoKmjrSGNVAeQHigxbTQg
+ zCcyeuhFxHxAaxpW0bslK2hEZ9PhuAe22c2SHht6fOIDeXkadzqTFeV8wEZdltLr
+ 6Per0krxf7N2hFo5Cfz0KgWlvgdKLL7dUc9cjHo6b6BL2pNbLh8YofwHQOQbwt6H
+ miAkEnx1EJ5N8AWuruUTByR2jcWyCnEAUSH41+nk4QIDAQABozEwLzAOBgNVHQ8B
+ Af8EBAMCAgQwHQYDVR0OBBYEFJMgnJSYHH5AJ+9XB11usKRwjbjNMA0GCSqGSIb3
+ DQEBCwUAA4IBAQBOy8Z5FBd/nvgDcjvAwNCw9h5RHzgtgQqDP0qUjEqeQv3ALeC+
+ k/F2Tz0OWiPEzX5N+MMrf/jiYsL2exXuaPWCF5U9fu8bvs89GabHma8MGU3Qua2x
+ Imvt0whWExQMjoyU8SNUi2S13fnRie9ZlSwNh8B/OIUUEtVhQsd4OfuZZFVH4xGp
+ ibJMSMe5JBbZJC2tCdSdTLYfYJqrLkVuTjynXOjmz2JXfwnDNqEMdIMMjXzlNavR
+ J8SNtAoptMOK5vAvlySg4LYtFyXkl0W0vLKIbbHf+2UszuSCijTUa3o/Y1FoYSfi
+ eJH431YTnVLuwdd6fXkXFBrXDhjNsU866+hE
+ ```
+
+1. When you have finished entering values, select **Review + create**.
+1. On the **Review + create** page, after validation passes, select **Create**.
+1. It takes about 75 minutes for the deployment to complete. You can view the progress on the template **Overview** page. If you close the portal, deployment will continue.
+
+ :::image type="content" source="./media/quickstart-route-shared-services-template/template.png" alt-text="Example of deployment complete":::
+
+## <a name="validate"></a>Validate the deployment
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Resource groups** from the left pane.
+1. Select the resource group that you created in the previous section. On the **Overview** page, you will see something similar to this example:
+ :::image type="content" source="./media/quickstart-route-shared-services-template/resources.png" alt-text="Example of resources" lightbox="./media/quickstart-route-shared-services-template/resources.png":::
+
+1. Click the virtual WAN to view the hubs. On the virtual WAN page, click each hub to view connections and other hub information.
+ :::image type="content" source="./media/quickstart-route-shared-services-template/hub.png" alt-text="Example of hubs" lightbox="./media/quickstart-route-shared-services-template/hub.png":::
+
+## <a name="complete"></a>Complete the hybrid configuration
+
+The template does not configure all of the settings necessary for a hybrid network. You need to complete the following configurations and settings, depending on your requirements.
+
+* [Configure the VPN branches - local sites](virtual-wan-site-to-site-portal.md#site)
+* [Complete the P2S VPN configuration](virtual-wan-point-to-site-portal.md)
+* [Connect the ExpressRoute circuits](virtual-wan-expressroute-portal.md)
+
+## Clean up resources
+
+When you no longer need the resources that you created, delete them. Some of the Virtual WAN resources must be deleted in a certain order due to dependencies. Deleting can take about 30 minutes to complete.
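+
+If you deployed everything into a single resource group, one way to remove it all (a sketch; replace the placeholder with your resource group name) is to delete the resource group with the Azure CLI:
+
+```bash
+az group delete --name <resource-group-name> --yes
+```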
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Complete the P2S VPN configuration](virtual-wan-point-to-site-portal.md)
+
+> [!div class="nextstepaction"]
+> [Configure the VPN branches - local sites](virtual-wan-site-to-site-portal.md#site)
+
+> [!div class="nextstepaction"]
+> [Connect the ExpressRoute circuits](virtual-wan-expressroute-portal.md)
virtual-wan Scenario Shared Services Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/scenario-shared-services-vnet.md
Title: 'Scenario: Route to Shared Services VNets'
+ Title: 'Scenario: Route to shared services VNets'
-description: Scenarios for routing - set up routes to access a Shared Service VNet with a workload that you want every VNet and Branch to access.
+description: Scenarios for routing - set up routes to access a shared service VNet with a workload that you want every VNet and branch to access.
Previously updated : 09/22/2020 Last updated : 03/02/2021
-# Scenario: Route to Shared Services VNets
+# Scenario: Route to shared services VNets
When working with Virtual WAN virtual hub routing, there are quite a few available scenarios. In this scenario, the goal is to set up routes to access a **Shared Service** VNet with workloads that you want every VNet and Branch (VPN/ER/P2S) to access. Examples of these shared workloads might include Virtual Machines with services like Domain Controllers or File Shares, or Azure services exposed internally through [Azure Private Endpoints](../private-link/private-endpoint-overview.md).
We can use a connectivity matrix to summarize the requirements of this scenario:
| From | To: |*Isolated VNets*|*Shared VNet*|*Branches*| ||||||
-|**Isolated VNets**|&#8594;| | Direct | Direct |
-|**Shared VNets** |&#8594;| Direct | Direct | Direct |
-|**Branches** |&#8594;| Direct | Direct | Direct |
+|**Isolated VNets**| ->| | Direct | Direct |
+|**Shared VNets** |->| Direct | Direct | Direct |
+|**Branches** |->| Direct | Direct | Direct |
Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communication flows directly over Virtual WAN (hence the word "Direct" in the table).
As a result, this is the final design:
* Isolated virtual networks: * Associated route table: **RT_SHARED** * Propagating to route tables: **Default**
-* Shared Services virtual networks:
+* Shared services virtual networks:
* Associated route table: **Default** * Propagating to route tables: **RT_SHARED** and **Default** * Branches:
For more information about virtual hub routing, see [About virtual hub routing](
To configure the scenario, consider the following steps:
-1. Identify the **Shared Services** VNet.
+1. Identify the **shared services** VNet.
2. Create a custom route table. In the example, we refer to the route table as **RT_SHARED**. For steps to create a route table, see [How to configure virtual hub routing](how-to-virtual-hub-routing.md). Use the following values as a guideline: * **Association**
- * For **VNets *except* the Shared Services VNet**, select the VNets to isolate. This will imply that all these VNets (except the shared services VNet) will be able to reach destination based on the routes of RT_SHARED route table.
+ * For **VNets *except* the shared services VNet**, select the VNets to isolate. This will imply that all these VNets (except the shared services VNet) will be able to reach destination based on the routes of RT_SHARED route table.
* **Propagation** * For **Branches**, propagate routes to this route table, in addition to any other route tables you may have already selected. Because of this step, the RT_SHARED route table will learn routes from all branch connections (VPN/ER/User VPN).
- * For **VNets**, select the **Shared Services VNet**. Because of this step, RT_SHARED route table will learn routes from the Shared Services VNet connection.
+ * For **VNets**, select the **shared services VNet**. Because of this step, RT_SHARED route table will learn routes from the shared services VNet connection.
This will result in the routing configuration shown in the following figure:
- :::image type="content" source="./media/routing-scenarios/shared-service-vnet/shared-services.png" alt-text="Shared services VNet" lightbox="./media/routing-scenarios/shared-service-vnet/shared-services.png":::
+ :::image type="content" source="./media/routing-scenarios/shared-service-vnet/shared-services.png" alt-text="Diagram for shared services VNet." lightbox="./media/routing-scenarios/shared-service-vnet/shared-services.png":::
## Next steps
-* For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
+* To configure using an ARM template, see [Quickstart: Route to shared services VNets using an ARM template](quickstart-route-shared-services-vnet-template.md).
* For more information about virtual hub routing, see [About virtual hub routing](about-virtual-hub-routing.md).
virtual-wan Scenario Shared Services Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
## Overview
-Microsoft Azure Storage Explorer is a standalone app that makes it easy to work with Azure Storage data on Windows, macOS, and Linux. In this article, you'll learn several ways of connecting to and managing your Azure storage accounts.
+Microsoft Azure Storage Explorer is a standalone app that makes it easy to work with Azure Storage data on Windows, macOS, and Linux.
-![Microsoft Azure Storage Explorer][0]
+In this article, you'll learn several ways of connecting to and managing your Azure storage accounts.
+ ## Prerequisites
The following versions of macOS support Storage Explorer:
Storage Explorer is available in the [Snap Store](https://snapcraft.io/storage-explorer) for most common distributions of Linux. We recommend Snap Store for this installation. The Storage Explorer snap installs all of its dependencies and updates when new versions are published to the Snap Store.
-For supported distributions, see the [snapd installation page](https://snapcraft.io/docs/installing-snapd).
+For supported distributions, see the [`snapd` installation page](https://snapcraft.io/docs/installing-snapd).
Storage Explorer requires the use of a password manager. You might have to connect to a password manager manually. You can connect Storage Explorer to your system's password manager by running the following command:
Storage Explorer requires the use of a password manager. You might have to conne
snap connect storage-explorer:password-manager-service :password-manager-service ```
-Storage Explorer is also available as a *.tar.gz* download. You have to install dependencies manually. The following distributions of Linux support *.tar.gz* installation:
+Storage Explorer is also available as a *.tar.gz* download. If you use the *.tar.gz*, you must install dependencies manually. The following distributions of Linux support *.tar.gz* installation:
* Ubuntu 20.04 x64 * Ubuntu 18.04 x64
To download and install Storage Explorer, see [Azure Storage Explorer](https://w
## Connect to a storage account or service
-Storage Explorer provides several ways to connect to storage accounts. In general you can either:
+Storage Explorer provides several ways to connect to Azure resources:
* [Sign in to Azure to access your subscriptions and their resources](#sign-in-to-azure)
-* [Attach a specific Storage or CosmosDB resource](#attach-a-specific-resource)
+* [Attach to an individual Azure Storage resource](#attach-to-an-individual-resource)
+* [Attach to a CosmosDB resource](#connect-to-azure-cosmos-db)
### Sign in to Azure > [!NOTE]
-> To fully access resources after you sign in, Storage Explorer requires both management (Azure Resource Manager) and data layer permissions. This means that you need Azure Active Directory (Azure AD) permissions, which give you access to your storage account, the containers in the account, and the data in the containers. If you have permissions only at the data layer, consider [adding a resource through Azure AD](#add-a-resource-via-azure-ad). For more information about the specific permissions Storage Explorer requires, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md#azure-rbac-permissions-issues).
+> To fully access resources after you sign in, Storage Explorer requires both management (Azure Resource Manager) and data layer permissions. This means that you need Azure Active Directory (Azure AD) permissions to access your storage account, the containers in the account, and the data in the containers. If you have permissions only at the data layer, consider choosing the **Sign in using Azure Active Directory (Azure AD)** option when attaching to a resource. For more information about the specific permissions Storage Explorer requires, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md#azure-rbac-permissions-issues).
1. In Storage Explorer, select **View** > **Account Management** or select the **Manage Accounts** button.
- ![Manage Accounts][1]
-
-1. **ACCOUNT MANAGEMENT** now displays all the Azure accounts you've signed in to. To connect to another account, select **Add an account**.
-
-1. In **Connect to Azure Storage**, select an Azure cloud from **Azure environment** to sign in to a national cloud or an Azure Stack. After you choose your environment, select **Next**.
-
- ![Option to sign in][2]
-
- Storage Explorer opens a page for you to sign in. For more information, see [Connect storage explorer to an Azure Stack subscription or storage account](/azure-stack/user/azure-stack-storage-connect-se).
-
-1. After you successfully sign in with an Azure account, the account and the Azure subscriptions associated with that account appear under **ACCOUNT MANAGEMENT**. Select **All subscriptions** to toggle your selection between all or none of the listed Azure subscriptions. Select the Azure subscriptions that you want to work with, and then select **Apply**.
-
- ![Select Azure subscriptions][3]
+ :::image type="content" alt-text="Manage Accounts" source ="./vs-storage-explorer-manage-accounts.png":::
- **EXPLORER** displays the storage accounts associated with the selected Azure subscriptions.
+1. **ACCOUNT MANAGEMENT** now displays all the Azure accounts you're signed in to. To connect to another account, select **Add an account...**.
- ![Selected Azure subscriptions][4]
+1. The **Connect to Azure Storage** dialog opens. In the **Select Resource** panel, select **Subscription**.
-### Attach a specific resource
+ :::image type="content" alt-text="Connect dialog" source="./vs-storage-explorer-connect-dialog.png":::
-There are several ways to attach to a resource in Storage Explorer:
+1. In the **Select Azure Environment** panel, select an Azure environment to sign in to. You can sign in to global Azure, a national cloud or an Azure Stack instance. Then select **Next**.
-* [Add a resource via Azure AD](#add-a-resource-via-azure-ad). If you have permissions only at the data layer, use this option to add a blob container or an Azure Data Lake Storage Gen2 Blob storage container.
-* [Use a connection string](#use-a-connection-string). Use this option if you have a connection string to a storage account. Storage Explorer supports both key and [shared access signature](./storage/common/storage-sas-overview.md) connection strings.
-* [Use a shared access signature URI](#use-a-shared-access-signature-uri). If you have a [shared access signature URI](./storage/common/storage-sas-overview.md) to a blob container, file share, queue, or table, use it to attach to the resource. To get a shared access signature URI, you can either use [Storage Explorer](#generate-a-sas-in-storage-explorer) or the [Azure portal](https://portal.azure.com).
-* [Use a name and key](#use-a-name-and-key). If you know either of the account keys to your storage account, you can use this option to quickly connect. Find your keys in the storage account page by selecting **Settings** > **Access keys** in the [Azure portal](https://portal.azure.com).
-* [Attach to a local emulator](#attach-to-a-local-emulator). If you're using one of the available Azure Storage Emulators, use this option to easily connect to your emulator.
-* [Connect to an Azure Cosmos DB account by using a connection string](#connect-to-an-azure-cosmos-db-account-by-using-a-connection-string). Use this option if you have a connection string to a CosmosDB instance.
-* [Connect to Azure Data Lake Store by URI](#connect-to-azure-data-lake-store-by-uri). Use this option if you have a URI to Azure Data Lake Store.
+ :::image type="content" alt-text="Option to sign in" source="./vs-storage-explorer-connect-environment.png":::
-#### Add a resource via Azure AD
+ > [!TIP]
+ > For more information about Azure Stack, see [Connect Storage Explorer to an Azure Stack subscription or storage account](/azure-stack/user/azure-stack-storage-connect-se).
-1. Select the **Connect** symbol to open **Connect to Azure Storage**.
+1. Storage Explorer will open a webpage for you to sign in.
- ![Connect to Azure storage option][9]
+1. After you successfully sign in with an Azure account, the account and the Azure subscriptions associated with that account appear under **ACCOUNT MANAGEMENT**. Select the Azure subscriptions that you want to work with, and then select **Apply**.
-1. If you haven't already done so, use the **Add an Azure Account** option to sign in to the Azure account that has access to the resource. After you sign in, return to **Connect to Azure Storage**.
+ :::image type="content" alt-text="Select Azure subscriptions" source="./vs-storage-explorer-account-panel.png":::
-1. Select **Add a resource via Azure Active Directory (Azure AD)**, and then select **Next**.
+1. **EXPLORER** displays the storage accounts associated with the selected Azure subscriptions.
-1. Select an Azure account and tenant. These values must have access to the Storage resource you want to attach to. Select **Next**.
+ :::image type="content" alt-text="Selected Azure subscriptions" source="./vs-storage-explorer-subscription-node.png":::
-1. Choose the resource type you want to attach. Enter the information needed to connect.
+### Attach to an individual resource
- The information you enter on this page depends on what type of resource you're adding. Make sure to choose the correct type of resource. After you've entered the required information, select **Next**.
+Storage Explorer lets you connect to individual resources, such as an Azure Data Lake Storage Gen2 container, using various authentication methods. Some authentication methods are only supported for certain resource types.
-1. Review the **Connection Summary** to make sure all the information is correct. If it is, select **Connect**. Otherwise, select **Back** to return to the previous pages to fix any incorrect information.
+| Resource type | Azure AD | Account Name and Key | Shared Access Signature (SAS) | Public (anonymous) |
+||-|-|--|--|
+| Storage accounts | Yes | Yes | Yes (connection string or URL) | No |
+| Blob containers | Yes | No | Yes (URL) | Yes |
+| Gen2 containers | Yes | No | Yes (URL) | Yes |
+| Gen2 directories | Yes | No | Yes (URL) | Yes |
+| File shares | No | No | Yes (URL) | No |
+| Queues | Yes | No | Yes (URL) | No |
+| Tables | No | No | Yes (URL) | No |
+
+Storage Explorer can also connect to a [local storage emulator](#local-storage-emulator) using the emulator's configured ports.
-After the connection is successfully added, the resource tree goes to the node that represents the connection. The resource appears under **Local & Attached** > **Storage Accounts** > **(Attached Containers)** > **Blob Containers**. If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md).
+To connect to an individual resource, select the **Connect** button in the left-hand toolbar. Then follow the instructions for the resource type you want to connect to.
-#### Use a connection string
-1. Select the **Connect** symbol to open **Connect to Azure Storage**.
+When a connection to a storage account is successfully added, a new tree node will appear under **Local & Attached** > **Storage Accounts**.
- ![Connect to Azure storage option][9]
+For other resource types, a new node is added under **Local & Attached** > **Storage Accounts** > **(Attached Containers)**. The node will appear under a group node matching its type. For example, a new connection to an Azure Data Lake Storage Gen2 container will appear under **Blob Containers**.
-1. Select **Use a connection string**, and then select **Next**.
+If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md).
-1. Choose a display name for your connection and enter your connection string. Then, select **Next**.
+The following sections describe the different authentication methods you can use to connect to individual resources.
-1. Review the **Connection Summary** to make sure all the information is correct. If it is, select **Connect**. Otherwise, select **Back** to return to the previous pages to fix any incorrect information.
+#### Azure AD
-After the connection is successfully added, the resource tree goes to the node that represents the connection. The resource appears under **Local & Attached** > **Storage Accounts**. If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md).
+Storage Explorer can use your Azure account to connect to the following resource types:
+* Blob containers
+* Azure Data Lake Storage Gen2 containers
+* Azure Data Lake Storage Gen2 directories
+* Queues
+
+Azure AD is the preferred option if you have data layer access to your resource but no management layer access.
-#### Use a shared access signature URI
+1. Sign in to at least one Azure account using the [steps described above](#sign-in-to-azure).
+1. In the **Select Resource** panel of the **Connect to Azure Storage** dialog, select **Blob container**, **ADLS Gen2 container**, or **Queue**.
+1. Select **Sign in using Azure Active Directory (Azure AD)** and select **Next**.
+1. Select an Azure account and tenant. The account and tenant must have access to the Storage resource you want to attach to. Select **Next**.
+1. Enter a display name for your connection and the URL of the resource. Select **Next**.
+1. Review your connection information in the **Summary** panel. If the connection information is correct, select **Connect**.
-1. Select the **Connect** symbol to open **Connect to Azure Storage**.
+#### Account name and key
- ![Connect to Azure storage option][9]
+Storage Explorer can connect to a storage account using the storage account's name and key.
-1. Select **Use a shared access signature (SAS) URI**, and then select **Next**.
+You can find your account keys in the [Azure portal](https://portal.azure.com). Open your storage account page and select **Settings** > **Access keys**.
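+
+If you prefer the command line, you can also list the keys with the Azure CLI (a sketch; the resource group and account names are placeholders):
+
+```bash
+az storage account keys list \
+  --resource-group <resource-group-name> \
+  --account-name <storage-account-name> \
+  --output table
+```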
-1. Choose a display name for your connection and enter your shared access signature URI. The service endpoint for the type of resource you're attaching should autofill. If you're using a custom endpoint, it's possible it might not. Select **Next**.
+1. In the **Select Resource** panel of the **Connect to Azure Storage** dialog, select **Storage account**.
+1. Select **Account name and key** and select **Next**.
+1. Enter a display name for your connection, the name of the account, and one of the account keys. Select the appropriate Azure environment. Select **Next**.
+1. Review your connection information in the **Summary** panel. If the connection information is correct, select **Connect**.
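For comparison, the same name-and-key connection can be made programmatically, because the account key itself acts as the credential. A minimal sketch, assuming the `azure-storage-blob` Python package and placeholder account details:

```python
# Minimal sketch: connect to a storage account with its name and one of its keys,
# mirroring Storage Explorer's "Account name and key" option.
# Assumes: pip install azure-storage-blob
# The account name and key are placeholders, not real credentials.
from azure.storage.blob import BlobServiceClient

ACCOUNT_NAME = "contoso"
ACCOUNT_KEY = "<key copied from Settings > Access keys>"

service = BlobServiceClient(
    account_url=f"https://{ACCOUNT_NAME}.blob.core.windows.net",
    credential=ACCOUNT_KEY,  # the account key string is accepted as the credential
)

# The account key grants full data access, so listing containers works here.
for container in service.list_containers():
    print(container.name)
```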
-1. Review the **Connection Summary** to make sure all the information is correct. If it is, select **Connect**. Otherwise, select **Back** to return to the previous pages to fix any incorrect information.
+#### Shared access signature (SAS) connection string
-After the connection is successfully added, the resource tree goes to the node that represents the connection. The resource appears under **Local & Attached** > **Storage Accounts** > **(Attached Containers)** > *the service node for the type of container you attached*. If Storage Explorer couldn't add your connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md). See the troubleshooting guide if you can't access your data after successfully adding the connection.
+Storage Explorer can connect to a storage account using a connection string with a Shared Access Signature (SAS). A SAS connection string looks like this:
-#### Use a name and key
-
-1. Select the **Connect** symbol to open **Connect to Azure Storage**.
-
- ![Connect to Azure storage option][9]
+```text
+SharedAccessSignature=sv=2020-04-08&ss=btqf&srt=sco&st=2021-03-02T00%3A22%3A19Z&se=2021-03-03T00%3A22%3A19Z&sp=rl&sig=fFFpX%2F5tzqmmFFaL0wRffHlhfFFLn6zJuylT6yhOo%2FY%3D;
+BlobEndpoint=https://contoso.blob.core.windows.net/;
+FileEndpoint=https://contoso.file.core.windows.net/;
+QueueEndpoint=https://contoso.queue.core.windows.net/;
+TableEndpoint=https://contoso.table.core.windows.net/;
+```
-1. Select **Use a storage account name and key**, and then select **Next**.
+1. In the **Select Resource** panel of the **Connect to Azure Storage** dialog, select **Storage account**.
+1. Select **Shared access signature (SAS)** and select **Next**.
+1. Enter a display name for your connection and the SAS connection string for the storage account. Select **Next**.
+1. Review your connection information in the **Summary** panel. If the connection information is correct, select **Connect**.
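The same kind of SAS connection string can also be handed to the Storage SDKs. A minimal sketch, assuming the `azure-storage-blob` Python package and a placeholder connection string; what the client can do is limited to the permissions and expiry encoded in the SAS:

```python
# Minimal sketch: connect with a SAS connection string, as in Storage Explorer's
# "Shared access signature (SAS)" option for a storage account.
# Assumes: pip install azure-storage-blob
# The connection string below is a placeholder; use one issued for your account.
from azure.storage.blob import BlobServiceClient

sas_connection_string = (
    "SharedAccessSignature=sv=2020-04-08&ss=b&srt=sco&sp=rl&sig=<signature>;"
    "BlobEndpoint=https://contoso.blob.core.windows.net/;"
)

# from_connection_string reads the endpoint and SAS token out of the string.
service = BlobServiceClient.from_connection_string(sas_connection_string)

# What you can list or read is limited by the permissions encoded in the SAS.
for container in service.list_containers():
    print(container.name)
```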
-1. Choose a display name for your connection.
+#### Shared access signature (SAS) URL
-1. Enter your storage account name and either of its access keys.
+Storage Explorer can connect to the following resource types using a SAS URI:
+* Blob container
+* Azure Data Lake Storage Gen2 container or directory
+* File share
+* Queue
+* Table
-1. Choose the **Storage domain** to use and then select **Next**.
+A SAS URI looks like this:
-1. Review the **Connection Summary** to make sure all the information is correct. If it is, select **Connect**. Otherwise, select **Back** to return to the previous pages to fix any incorrect information.
+```text
+https://contoso.blob.core.windows.net/container01?sv=2020-04-08&st=2021-03-02T00%3A30%3A33Z&se=2021-03-03T00%3A30%3A33Z&sr=c&sp=rl&sig=z9VFdWffrV6FXU51T8b8HVfipZPOpYOFLXuQw6wfkFY%3D
+```
-After the connection is successfully added, the resource tree goes to the node that represents the connection. The resource appears under **Local & Attached** > **Storage Accounts**. If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md).
+1. In the **Select Resource** panel of the **Connect to Azure Storage** dialog, select the resource you want to connect to.
+1. Select **Shared access signature (SAS)** and select **Next**.
+1. Enter a display name for your connection and the SAS URI for the resource. Select **Next**.
+1. Review your connection information in the **Summary** panel. If the connection information is correct, select **Connect**.
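A SAS URL can likewise be used directly, because the SAS token in the query string serves as the credential. A minimal sketch for a blob container, assuming the `azure-storage-blob` Python package and a placeholder SAS URL:

```python
# Minimal sketch: attach to a single blob container by SAS URL, as in Storage
# Explorer's "Shared access signature (SAS)" option for an individual resource.
# Assumes: pip install azure-storage-blob
# The URL below is a placeholder; use a SAS URL issued for your container.
from azure.storage.blob import ContainerClient

sas_url = (
    "https://contoso.blob.core.windows.net/container01"
    "?sv=2020-04-08&sr=c&sp=rl&sig=<signature>"
)

# The SAS token in the query string is the credential; no key or sign-in is needed.
container = ContainerClient.from_container_url(sas_url)

for blob in container.list_blobs():
    print(blob.name)
```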
-#### Attach to a local emulator
+#### Local storage emulator
-Storage Explorer currently supports two official Storage emulators:
+Storage Explorer can connect to an Azure Storage emulator. Currently, there are two supported emulators:
* [Azure Storage Emulator](storage/common/storage-use-emulator.md) (Windows only)
* [Azurite](https://github.com/azure/azurite) (Windows, macOS, or Linux)
-If your emulator is listening on the default ports, you can use the **Emulator - Default Ports** node to access your emulator. Look for **Emulator - Default Ports** under **Local & Attached** > **Storage Accounts**.
+If your emulator is listening on the default ports, you can use the **Local & Attached** > **Storage Accounts** > **Emulator - Default Ports** node to access your emulator.
-If you want to use a different name for your connection, or if your emulator isn't running on the default ports, follow these steps:
+If you want to use a different name for your connection, or if your emulator isn't running on the default ports:
-1. Start your emulator. Enter the command `AzureStorageEmulator.exe status` to display the ports for each service type.
+1. Start your emulator.
> [!IMPORTANT]
> Storage Explorer doesn't automatically start your emulator. You must start it manually.
-1. Select the **Connect** symbol to open **Connect to Azure Storage**.
-
- ![Connect to Azure storage option][9]
-
-1. Select **Attach to a local emulator**, and then select **Next**.
-
-1. Choose a display name for your connection and enter the ports your emulator is listening on for each service type. **Attach to a Local Emulator** suggests the default port values for most emulators. **Files port** is blank, because neither of the official emulators currently support the Files service. If the emulator you're using does support Files, you can enter the port to use. Then, select **Next**.
+1. In the **Select Resource** panel of the **Connect to Azure Storage** dialog, select **Local storage emulator**.
+1. Enter a display name for your connection and the port number for each emulated service you want to use. If you don't want to use a service, leave the corresponding port blank. Select **Next**.
+1. Review your connection information in the **Summary** panel. If the connection information is correct, select **Connect**.
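Outside Storage Explorer, the same emulator endpoints are reachable with the well-known development storage account (`devstoreaccount1`) and its published key. A minimal sketch against Azurite's default blob endpoint, assuming the `azure-storage-blob` Python package and that Azurite is already running locally:

```python
# Minimal sketch: connect to a local emulator (Azurite) on its default blob port,
# using the well-known development storage account name and key.
# Assumes: pip install azure-storage-blob, and Azurite already started locally.
from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

azurite_connection_string = (
    "DefaultEndpointsProtocol=http;"
    "AccountName=devstoreaccount1;"
    "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
    "BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;"
)

service = BlobServiceClient.from_connection_string(azurite_connection_string)

# Create a container in the emulator and confirm that it shows up.
try:
    service.create_container("local-test")
except ResourceExistsError:
    pass  # already created on a previous run

for container in service.list_containers():
    print(container.name)
```

Azurite's default queue and table endpoints listen on ports 10001 and 10002, respectively, if you want to exercise those services the same way.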
-1. Review the **Connection Summary** and make sure all the information is correct. If it is, select **Connect**. Otherwise, select **Back** to return to the previous pages to fix any incorrect information.
+### Connect to Azure Cosmos DB
-After the connection is successfully added, the resource tree goes to the node that represents the connection. The node should appear under **Local & Attached** > **Storage Accounts**. If Storage Explorer couldn't add your connection, or if you can't access your data after successfully adding the connection, see the [Azure Storage Explorer troubleshooting guide](./storage/common/storage-explorer-troubleshooting.md).
+Storage Explorer also supports connecting to Azure Cosmos DB resources.
#### Connect to an Azure Cosmos DB account by using a connection string
As you enter text in the search box, Storage Explorer displays all resources that match the search value.
> [!NOTE]
> To speed up your search, use **Account Management** to deselect any subscriptions that don't contain the item you're searching for. You can also right-click a node and select **Search From Here** to start searching from a specific node.
->
->
## Next steps
* [Work with data using Azure Storage Explorer](./cosmos-db/storage-explorer.md)
* [Manage Azure Data Lake Store resources with Storage Explorer](./data-lake-store/data-lake-store-in-storage-explorer.md)
-[0]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/Overview.png
-[1]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ManageAccounts.png
-[2]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/connect-to-azure-storage-azure-environment.png
-[3]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/account-panel-subscriptions-apply.png
-[4]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/SubscriptionNode.png
-[5]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog.png
-[7]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/PortalAccessKeys.png
-[8]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/AccessKeys.png
-[9]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog.png
-[10]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog-AddWithKeySelected.png
-[11]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog-NameAndKeyPage.png
-[12]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/AttachedWithKeyAccount.png
-[13]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/AttachedWithKeyAccount-Detach.png
[14]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/get-shared-access-signature-for-storage-explorer.png
[15]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/create-shared-access-signature-for-storage-explorer.png
-[16]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog-WithConnStringOrSASSelected.png
-[17]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog-ConnStringOrSASPage-1.png
-[18]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/AttachedWithSASAccount.png
-[19]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ConnectDialog-ConnStringOrSASPage-2.png
-[20]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/ServiceAttachedWithSAS.png
[21]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/connect-to-cosmos-db-by-connection-string.png
[22]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/connection-string-for-cosmos-db.png
-[23]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/storage-explorer-search-for-resource.png
+[23]: ./media/vs-azure-tools-storage-manage-with-storage-explorer/storage-explorer-search-for-resource.png