Updates from: 03/13/2021 04:05:43
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Password Complexity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/password-complexity.md
Previously updated : 12/10/2020 Last updated : 03/12/2021
To configure the password complexity, override the `newPassword` and `reenterPas
1. Add the `newPassword` and `reenterPassword` claims to the **ClaimsSchema** element.

```xml
- <ClaimType Id="newPassword">
- <PredicateValidationReference Id="CustomPassword" />
- </ClaimType>
- <ClaimType Id="reenterPassword">
- <PredicateValidationReference Id="CustomPassword" />
- </ClaimType>
+ <!--
+ <BuildingBlocks>
+ <ClaimsSchema> -->
+ <ClaimType Id="newPassword">
+ <PredicateValidationReference Id="CustomPassword" />
+ </ClaimType>
+ <ClaimType Id="reenterPassword">
+ <PredicateValidationReference Id="CustomPassword" />
+ </ClaimType>
+ <!--
+ </ClaimsSchema>
+ </BuildingBlocks>-->
```

1. A [predicate](predicates.md) defines a basic validation to check the value of a claim type and returns true or false. The validation is done by using a specified method element and a set of parameters relevant to the method. Add the following predicates to the **BuildingBlocks** element, immediately after the closing of the `</ClaimsSchema>` element:

```xml
- <Predicates>
- <Predicate Id="LengthRange" Method="IsLengthRange">
- <UserHelpText>The password must be between 6 and 64 characters.</UserHelpText>
- <Parameters>
- <Parameter Id="Minimum">6</Parameter>
- <Parameter Id="Maximum">64</Parameter>
- </Parameters>
- </Predicate>
- <Predicate Id="Lowercase" Method="IncludesCharacters">
- <UserHelpText>a lowercase letter</UserHelpText>
- <Parameters>
- <Parameter Id="CharacterSet">a-z</Parameter>
- </Parameters>
- </Predicate>
- <Predicate Id="Uppercase" Method="IncludesCharacters">
- <UserHelpText>an uppercase letter</UserHelpText>
- <Parameters>
- <Parameter Id="CharacterSet">A-Z</Parameter>
- </Parameters>
- </Predicate>
- <Predicate Id="Number" Method="IncludesCharacters">
- <UserHelpText>a digit</UserHelpText>
- <Parameters>
- <Parameter Id="CharacterSet">0-9</Parameter>
- </Parameters>
- </Predicate>
- <Predicate Id="Symbol" Method="IncludesCharacters">
- <UserHelpText>a symbol</UserHelpText>
- <Parameters>
- <Parameter Id="CharacterSet">@#$%^&amp;*\-_+=[]{}|\\:',.?/`~"();!</Parameter>
- </Parameters>
- </Predicate>
- </Predicates>
+ <!--
+ <BuildingBlocks>-->
+ <Predicates>
+ <Predicate Id="LengthRange" Method="IsLengthRange">
+ <UserHelpText>The password must be between 6 and 64 characters.</UserHelpText>
+ <Parameters>
+ <Parameter Id="Minimum">6</Parameter>
+ <Parameter Id="Maximum">64</Parameter>
+ </Parameters>
+ </Predicate>
+ <Predicate Id="Lowercase" Method="IncludesCharacters">
+ <UserHelpText>a lowercase letter</UserHelpText>
+ <Parameters>
+ <Parameter Id="CharacterSet">a-z</Parameter>
+ </Parameters>
+ </Predicate>
+ <Predicate Id="Uppercase" Method="IncludesCharacters">
+ <UserHelpText>an uppercase letter</UserHelpText>
+ <Parameters>
+ <Parameter Id="CharacterSet">A-Z</Parameter>
+ </Parameters>
+ </Predicate>
+ <Predicate Id="Number" Method="IncludesCharacters">
+ <UserHelpText>a digit</UserHelpText>
+ <Parameters>
+ <Parameter Id="CharacterSet">0-9</Parameter>
+ </Parameters>
+ </Predicate>
+ <Predicate Id="Symbol" Method="IncludesCharacters">
+ <UserHelpText>a symbol</UserHelpText>
+ <Parameters>
+ <Parameter Id="CharacterSet">@#$%^&amp;*\-_+=[]{}|\\:',.?/`~"();!</Parameter>
+ </Parameters>
+ </Predicate>
+ </Predicates>
+ <!--
+ </BuildingBlocks>-->
```

1. Add the following predicate validations to the **BuildingBlocks** element, immediately after the closing of the `</Predicates>` element:

```xml
- <PredicateValidations>
- <PredicateValidation Id="CustomPassword">
- <PredicateGroups>
- <PredicateGroup Id="LengthGroup">
- <PredicateReferences MatchAtLeast="1">
- <PredicateReference Id="LengthRange" />
- </PredicateReferences>
- </PredicateGroup>
- <PredicateGroup Id="CharacterClasses">
- <UserHelpText>The password must have at least 3 of the following:</UserHelpText>
- <PredicateReferences MatchAtLeast="3">
- <PredicateReference Id="Lowercase" />
- <PredicateReference Id="Uppercase" />
- <PredicateReference Id="Number" />
- <PredicateReference Id="Symbol" />
- </PredicateReferences>
- </PredicateGroup>
- </PredicateGroups>
- </PredicateValidation>
- </PredicateValidations>
+ <!--
+ <BuildingBlocks>-->
+ <PredicateValidations>
+ <PredicateValidation Id="CustomPassword">
+ <PredicateGroups>
+ <PredicateGroup Id="LengthGroup">
+ <PredicateReferences MatchAtLeast="1">
+ <PredicateReference Id="LengthRange" />
+ </PredicateReferences>
+ </PredicateGroup>
+ <PredicateGroup Id="CharacterClasses">
+ <UserHelpText>The password must have at least 3 of the following:</UserHelpText>
+ <PredicateReferences MatchAtLeast="3">
+ <PredicateReference Id="Lowercase" />
+ <PredicateReference Id="Uppercase" />
+ <PredicateReference Id="Number" />
+ <PredicateReference Id="Symbol" />
+ </PredicateReferences>
+ </PredicateGroup>
+ </PredicateGroups>
+ </PredicateValidation>
+ </PredicateValidations>
+ <!--
+ </BuildingBlocks>-->
```

## Disable strong password
To configure the password complexity, override the `newPassword` and `reenterPas
The following technical profiles are [Active Directory technical profiles](active-directory-technical-profile.md), which read and write data to Azure Active Directory. Override these technical profiles in the extension file. Use `PersistedClaims` to disable the strong password policy. Find the **ClaimsProviders** element and add the following claims provider to it:

```xml
-<ClaimsProvider>
- <DisplayName>Azure Active Directory</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="AAD-UserWriteUsingLogonEmail">
- <PersistedClaims>
- <PersistedClaim ClaimTypeReferenceId="passwordPolicies" DefaultValue="DisablePasswordExpiration, DisableStrongPassword"/>
- </PersistedClaims>
- </TechnicalProfile>
- <TechnicalProfile Id="AAD-UserWritePasswordUsingObjectId">
- <PersistedClaims>
- <PersistedClaim ClaimTypeReferenceId="passwordPolicies" DefaultValue="DisablePasswordExpiration, DisableStrongPassword"/>
- </PersistedClaims>
- </TechnicalProfile>
- </TechnicalProfiles>
-</ClaimsProvider>
+<!--
+<ClaimsProviders>-->
+ <ClaimsProvider>
+ <DisplayName>Azure Active Directory</DisplayName>
+ <TechnicalProfiles>
+ <TechnicalProfile Id="AAD-UserWriteUsingLogonEmail">
+ <PersistedClaims>
+ <PersistedClaim ClaimTypeReferenceId="passwordPolicies" DefaultValue="DisablePasswordExpiration, DisableStrongPassword"/>
+ </PersistedClaims>
+ </TechnicalProfile>
+ <TechnicalProfile Id="AAD-UserWritePasswordUsingObjectId">
+ <PersistedClaims>
+ <PersistedClaim ClaimTypeReferenceId="passwordPolicies" DefaultValue="DisablePasswordExpiration, DisableStrongPassword"/>
+ </PersistedClaims>
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+<!--
+</ClaimsProviders>-->
```
+If you use the [username-based sign-in](https://github.com/azure-ad-b2c/samples/tree/master/policies/username-signup-or-signin) policy, update the `AAD-UserWriteUsingLogonEmail`, `AAD-UserWritePasswordUsingObjectId`, and `LocalAccountWritePasswordUsingObjectId` technical profiles with the *DisableStrongPassword* policy.
+ Save the policy file.

## Test your policy
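To spot-check the result, you can read the persisted `passwordPolicies` value with Azure AD PowerShell after a test sign-up. This is a minimal sketch, assuming you can connect to the tenant behind your B2C directory; the object ID is a placeholder:

```powershell
# Connect to the Azure AD tenant that backs the B2C directory
Connect-AzureAD

# Object ID of a test account created through the custom policy (placeholder)
$userId = "00000000-0000-0000-0000-000000000000"

# Should include DisablePasswordExpiration and DisableStrongPassword after sign-up
(Get-AzureADUser -ObjectId $userId).PasswordPolicies
```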
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Previously updated : 05/13/2019 Last updated : 03/12/2021
When customizing attribute mappings for user provisioning, you might find that t
Azure AD must contain all the data required to create a user profile when provisioning user accounts from Azure AD to a SaaS app. In some cases, to make the data available you might need to synchronize attributes from your on-premises AD to Azure AD. Azure AD Connect automatically synchronizes certain attributes to Azure AD, but not all attributes. Furthermore, some attributes (such as SAMAccountName) that are synchronized by default might not be exposed using the Microsoft Graph API. In these cases, you can use the Azure AD Connect directory extension feature to synchronize the attribute to Azure AD. That way, the attribute will be visible to the Microsoft Graph API and the Azure AD provisioning service.
-If the data you need for provisioning is in Active Directory but isn't available for provisioning because of the reasons described above, follow these steps.
+If the data you need for provisioning is in Active Directory but isn't available for provisioning because of the reasons described above, you can use Azure AD Connect or PowerShell to create extension attributes.
-## Sync an attribute
+## Create an extension attribute using Azure AD Connect
1. Open the Azure AD Connect wizard, choose Tasks, and then choose **Customize synchronization options**.
If the data you need for provisioning is in Active Directory but isn't available
> [!NOTE] > The ability to provision reference attributes from on-premises AD, such as **managedby** or **DN/DistinguishedName**, is not supported today. You can request this feature on [User Voice](https://feedback.azure.com/forums/169401-azure-active-directory).
+## Create an extension attribute using PowerShell
+Create a custom extension using PowerShell and assign a value to a user.
+
+```powershell
+#Connect to your Azure AD tenant
+Connect-AzureAD
+
+#Create an application (you can instead use an existing application if you would like)
+$App = New-AzureADApplication -DisplayName "test app name" -IdentifierUris "https://testapp"
+
+#Create a service principal
+New-AzureADServicePrincipal -AppId $App.AppId
+
+#Create an extension property
+New-AzureADApplicationExtensionProperty -ObjectId $App.ObjectId -Name "TestAttributeName" -DataType "String" -TargetObjects "User"
+
+#List users in your tenant to determine the objectid for your user
+Get-AzureADUser
+
+#Set a value for the extension property on the user. Replace the object ID with the ID of the user, the extension name with the value from the previous step, and supply the value to store
+Set-AzureADUserExtension -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 -ExtensionName "extension_6552753978624005a48638a778921fan3_TestAttributeName" -ExtensionValue "test value"
+
+#Verify that the attribute was added correctly.
+Get-AzureADUser -ObjectId 0ccf8df6-62f1-4175-9e55-73da9e742690 | Select -ExpandProperty ExtensionProperty
+
+```
+
## Next steps

* [Define who is in scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md)
active-directory Quickstart V2 Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-angular.md
In this quickstart, you download and run a code sample that demonstrates how an
>|Enter_the_Redirect_Uri_Here|Replace with **http://localhost:4200**.| >|cacheLocation | (Optional) Set the browser storage for the authentication state. The default is **sessionStorage**. | >|storeAuthStateInCookie | (Optional) Identify the library that stores the authentication request state. This state is required to validate the authentication flows in the browser cookies. This cookie is set for Internet Explorer and Edge to accommodate those two browsers. For more details, see the [known issues](https://github.com/AzureAD/microsoft-authentication-library-for-js/wiki/Known-issues->on-IE-and-Edge-Browser#issues). |
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app's **Overview** page in the Azure portal.
+>
+> To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app's **Overview** page in the Azure portal.
For more information about available configurable options, see [Initialize client applications](msal-js-initializing-client-applications.md).
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
In this quickstart, you download an ASP.NET Core web API code sample and review
> [!div renderon="docs"] > [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub. + > [!div renderon="docs"] > ## Step 3: Configure the ASP.NET Core project >
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip) + > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties and it's ready to run.
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
In this quickstart, you download and run a code sample that demonstrates how an
> [!div renderon="portal" class="sxs-lookup" id="autoupdate" class="nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip) + > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We've configured your project with values of your app's properties, and it's ready to run.
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
In this quickstart, you download and run a code sample that demonstrates how an
> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"] > [Download the code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip) + > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We've configured your project with values of your app's properties.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
You can obtain the sample in either of two ways:
``` * [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip). + ## Register your web API In this section, you register your web API in **App registrations** in the Azure portal.
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
This quickstart uses MSAL Angular v2 with the authorization code flow. For a sim
> Modify the values in the `auth` section as described here: > > - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+>
+> To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md). > - `Enter_the_Tenant_info_here` is set to one of the following: > - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+>
+> To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`. > - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`. > - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+>
+> To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Redirect_Uri_Here` is `http://localhost:4200/`. > > The `authority` value in your *app.module.ts* should be similar to the following if you're using the main (global) Azure cloud:
This quickstart uses MSAL Angular v2 with the authorization code flow. For a sim
> authority: "https://login.microsoftonline.com/common", > ``` >
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
> [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
This quickstart uses MSAL React with the authorization code flow. For a similar
> Modify the values in the `msalConfig` section as described here: > > - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+>
+> To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md). > - `Enter_the_Tenant_info_here` is set to one of the following: > - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+>
+> To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`. > - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`. > - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+>
+> To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`. > > The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
This quickstart uses MSAL React with the authorization code flow. For a similar
> authority: "https://login.microsoftonline.com/common", > ``` >
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
- > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties.
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
This quickstart uses MSAL.js 2.0 with the authorization code flow. For a similar
> Modify the values in the `msalConfig` section as described here: > > - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+>
+> To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, enter `https://login.microsoftonline.com/`. For **national** clouds (for example, China), see [National clouds](authentication-national-cloud.md). > - `Enter_the_Tenant_info_here` is set to one of the following: > - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name**. For example, `contoso.microsoft.com`.
+>
+> To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`. > - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. **For this quickstart**, use `common`. > - To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+>
+> To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Redirect_Uri_Here` is `http://localhost:3000/`. > > The `authority` value in your *authConfig.js* should be similar to the following if you're using the main (global) Azure cloud:
This quickstart uses MSAL.js 2.0 with the authorization code flow. For a similar
> authority: "https://login.microsoftonline.com/common", > ``` >
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
->
+ > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties.
active-directory Quickstart V2 Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript.md
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="docs"] > > Where:
-> - *\<Enter_the_Application_Id_Here>* is the **Application (client) ID** for the application you registered.
-> - *\<Enter_the_Cloud_Instance_Id_Here>* is the instance of the Azure cloud. For the main or global Azure cloud, simply enter *https://login.microsoftonline.com*. For **national** clouds (for example, China), see [National clouds](./authentication-national-cloud.md).
-> - *\<Enter_the_Tenant_info_here>* is set to one of the following options:
-> - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name** (for example, *contoso.microsoft.com*).
-> - If your application supports *accounts in any organizational directory*, replace this value with **organizations**.
-> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with **common**. To restrict support to *personal Microsoft accounts only*, replace this value with **consumers**.
+> - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+>
+> To find the value of **Application (client) ID**, go to the app's **Overview** page in the Azure portal.
+> - `Enter_the_Cloud_Instance_Id_Here` is the instance of the Azure cloud. For the main or global Azure cloud, simply enter `https://login.microsoftonline.com`. For **national** clouds (for example, China), see [National clouds](./authentication-national-cloud.md).
+> - `Enter_the_Tenant_info_here` is set to one of the following options:
+> - If your application supports *accounts in this organizational directory*, replace this value with the **Tenant ID** or **Tenant name** (for example, `contoso.microsoft.com`).
+>
+> To find the value of the **Directory (tenant) ID**, go to the app registration's **Overview** page in the Azure portal.
+> - If your application supports *accounts in any organizational directory*, replace this value with `organizations`.
+> - If your application supports *accounts in any organizational directory and personal Microsoft accounts*, replace this value with `common`. To restrict support to *personal Microsoft accounts only*, replace this value with `consumers`.
+>
+> To find the value of **Supported account types**, go to the app registration's **Overview** page in the Azure portal.
>
-> > [!TIP]
-> > To find the values of **Application (client) ID**, **Directory (tenant) ID**, and **Supported account types**, go to the app's **Overview** page in the Azure portal.
> > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
This quickstart requires [.NET Core 3.1](https://www.microsoft.com/net/download/
> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip) + > [!div class="sxs-lookup" renderon="portal"] > > [!NOTE] > > `Enter_the_Supported_Account_Info_Here`
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node
> 1. Under **Manage**, select **Certificates & secrets** > **New client secret**. Leave the description blank and default expiration, and then select **Add**. > 1. Note the **Value** of the **Client Secret** for later use.
+> [!div class="sxs-lookup" renderon="portal"]
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret and add a reply URL as **http://localhost:3000/redirect**.
+> > [!div renderon="portal" id="makechanges" class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
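If you prefer scripting over the portal steps above, a rough Azure AD PowerShell equivalent might look like this. It's a sketch, not part of the quickstart; the display-name filter is an assumption about how you named the app registration:

```powershell
# Find the app registration for this quickstart (display name is a placeholder)
Connect-AzureAD
$app = Get-AzureADApplication -Filter "DisplayName eq 'msal-node-quickstart'"

# Add the reply URL the sample expects
Set-AzureADApplication -ObjectId $app.ObjectId -ReplyUrls @("http://localhost:3000/redirect")

# Create a client secret and capture its Value for the sample's config
$secret = New-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId
$secret.Value
```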
+ #### Step 2: Download the project > [!div renderon="docs"]
This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node
> Modify the values in the `config` section as described here: > > - `Enter_the_Application_Id_Here` is the **Application (client) ID** for the application you registered.
+>
+> To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal.
> - `Enter_the_Client_Secret_Here` is the **Value** of the **Client secret** for the application you registered. >
+> To retrieve or generate a new **Client secret**, under **Manage**, select **Certificates & secrets**.
+>
> The default `authority` value represents the main (global) Azure cloud: > > ```javascript > authority: "https://login.microsoftonline.com/common", > ``` >
-> > [!TIP]
-> > To find the value of **Application (client) ID**, go to the app registration's **Overview** page in the Azure portal. Go under **Certificates & secrets** to retrieve or generate a new **Client secret**.
->
> [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run >
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div class="sxs-lookup" renderon="portal" id="autoupdate" class="nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip) + > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties and it's ready to run.
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
See [How the sample works](#how-the-sample-works) for an illustration.
> [!div renderon="portal" id="autoupdate" class="sxs-lookup nextstepaction"] > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip) + > [!div class="sxs-lookup" renderon="portal"] > #### Step 3: Your app is configured and ready to run > We have configured your project with values of your app's properties and it's ready to run.
active-directory Service Accounts Governing Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/service-accounts-governing-azure.md
Establish a review process to ensure that service accounts are regularly reviewe
**The processes for deprovisioning should include the following tasks.**
-1. Once the associated application or script is deprovisioned, [monitor sign-ins](../reports-monitoring/concept-all-sign-ins.md#sign-ins-report) and resource access by the service account.
+1. Once the associated application or script is deprovisioned, [monitor sign-ins](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-sign-ins#sign-ins-report) and resource access by the service account.
* If the account still is active, determine how it's being used before taking subsequent steps.
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
The following table lists requirements for using Azure AD Connect Health.
| You're a global administrator in Azure AD. |By default, only global administrators can install and configure the health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). | | The Azure AD Connect Health agent is installed on each targeted server. | Health agents must be installed and configured on targeted servers so that they can receive data and provide monitoring and analytics capabilities. <br /><br />For example, to get data from your Active Directory Federation Services (AD FS) infrastructure, you must install the agent on the AD FS server and the Web Application Proxy server. Similarly, to get data from your on-premises Azure AD Domain Services (Azure AD DS) infrastructure, you must install the agent on the domain controllers. | | The Azure service endpoints have outbound connectivity. | During installation and runtime, the agent requires connectivity to Azure AD Connect Health service endpoints. If firewalls block outbound connectivity, add the [outbound connectivity endpoints](how-to-connect-health-agent-install.md#outbound-connectivity-to-the-azure-service-endpoints) to the allow list. |
-|Outbound connectivity is based on IP addresses. | For information about firewall filtering based on IP addresses, see [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=41653).|
+|Outbound connectivity is based on IP addresses. | For information about firewall filtering based on IP addresses, see [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519).|
| TLS inspection for outbound traffic is filtered or disabled. | The agent registration step or data upload operations might fail if there's TLS inspection or termination for outbound traffic at the network layer. For more information, see [Set up TLS inspection](/previous-versions/tn-archive/ee796230(v=technet.10)). | | Firewall ports on the server are running the agent. |The agent requires the following firewall ports to be open so that it can communicate with the Azure AD Connect Health service endpoints: <br /><li>TCP port 443</li><li>TCP port 5671</li> <br />The latest version of the agent doesn't require port 5671. Upgrade to the latest version so that only port 443 is required. For more information, see [Hybrid identity required ports and protocols](./reference-connect-ports.md). | | If Internet Explorer enhanced security is enabled, allow specified websites. |If Internet Explorer enhanced security is enabled, then allow the following websites on the server where you install the agent:<br /><li>https:\//login.microsoftonline.com</li><li>https:\//secure.aadcdn.microsoftonline-p.com</li><li>https:\//login.windows.net</li><li>https:\//aadcdn.msftauth.net</li><li>The federation server for your organization that's trusted by Azure AD (for example, https:\//sts.contoso.com)</li> <br />For more information, see [How to configure Internet Explorer](https://support.microsoft.com/help/815141/internet-explorer-enhanced-security-configuration-changes-the-browsing). If you have a proxy in your network, then see the note that appears at the end of this table.|
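As a quick local check of the connectivity requirements in this table, you might run something like the following from the server where you plan to install the agent (a sketch; the endpoints shown are representative entries from the allow list, not the full set):

```powershell
# Verify outbound TCP 443 connectivity to representative Azure AD endpoints
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
Test-NetConnection -ComputerName login.windows.net -Port 443
```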
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-proxy-network-topology.md
If you have connectors installed in regions different from your default region,
In order to optimize the traffic flow and reduce latency to a connector group, assign the connector group to the closest region. To assign a region:
+> [!IMPORTANT]
+> Connectors must be using at least version 1.5.1975.0 to use this capability.
+
1. Sign in to the [Azure portal](https://portal.azure.com/) as an application administrator of the directory that uses Application Proxy. For example, if the tenant domain is contoso.com, the admin should be admin@contoso.com or any other admin alias on that domain.
1. Select your username in the upper-right corner. Verify you're signed in to a directory that uses Application Proxy. If you need to change directories, select **Switch directory** and choose a directory that uses Application Proxy.
1. In the left navigation panel, select **Azure Active Directory**.
You can also consider using one other variant in this situation. If most users i
- [Enable Application Proxy](application-proxy-add-on-premises-application.md) - [Enable single-sign on](application-proxy-configure-single-sign-on-with-kcd.md) - [Enable Conditional Access](application-proxy-integrate-with-sharepoint-server.md)-- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
+- [Troubleshoot issues you're having with Application Proxy](application-proxy-troubleshoot.md)
active-directory Application Sign In Other Problem Access Panel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/application-sign-in-other-problem-access-panel.md
This is because My Apps currently reads up to 999 app role assignments to determ
To check if a user has more than 999 app role assignments, follow these steps: 1. Install the [**Microsoft.Graph**](https://github.com/microsoftgraph/msgraph-sdk-powershell) PowerShell module. 2. Run `Connect-MgGraph -Scopes "User.ReadBasic.All Application.Read.All"`.
-3. Run `(Get-MgUserAppRoleAssignment -UserId "<userId>" -Top 999).Count` to determine the number of app role assignments the user currently has granted.
+3. Run `(Get-MgUserAppRoleAssignment -UserId "<user-id>" -PageSize 999).Count` to determine the number of app role assignments the user currently has granted.
4. If the result is 999, the user likely has more than 999 app role assignments.

### Check a user's assigned licenses
active-directory Migrate Adfs Apps To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-adfs-apps-to-azure.md
Title: Moving application authentication from AD FS to Azure Active Directory
-description: This article is intended to help organizations understand how to move applications to Azure Active Directory, with a focus on federated SaaS applications.
+description: Learn how to use Azure Active Directory to replace Active Directory Federation Services (AD FS), giving users single sign-on to all their applications.
Previously updated : 02/10/2021 Last updated : 03/01/2021

# Moving application authentication from Active Directory Federation Services to Azure Active Directory
-[Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your people, partners, and customers a single identity to access applications and collaborate from any platform and device. Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md). Standardizing your application (app) authentication and authorization to Azure AD enables the benefits these capabilities provide.
+[Azure Active Directory (Azure AD)](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your people, partners, and customers a single identity to access applications and collaborate from any platform and device. Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md). Standardizing your application authentication and authorization to Azure AD provides these benefits.
> [!TIP]
-> This article is written for a developer audience. Project managers and administrators planning an application's move to Azure AD should consider reading our [Migrating application authentication to Azure AD](migrate-application-authentication-to-azure-active-directory.md) article.
+> This article is written for a developer audience. Project managers and administrators planning to move an application to Azure AD should consider reading [Migrating application authentication to Azure AD](migrate-application-authentication-to-azure-active-directory.md).
-## Introduction
+## Azure AD benefits
If you have an on-premises directory that contains user accounts, you likely have many applications to which users authenticate. Each of these apps is configured for users to access using their identities.
+Users may also authenticate directly with your on-premises Active Directory. Active Directory Federation Services (AD FS) is a standards-based on-premises identity service. It extends the ability to use single sign-on (SSO) functionality between trusted business partners so that users aren't required to sign in separately to each application. This is known as federated identity.
-Users may also authenticate directly with your on-premises Active Directory. Active Directory Federation Services (AD FS) is a standards based on-premises identity service. AD FS extends the ability to use single sign-on (SSO) functionality between trusted business partners without requiring users to sign-in separately to each application. This is known as Federation.
+Many organizations have Software as a Service (SaaS) or custom line-of-business apps federated directly to AD FS, alongside Microsoft 365 and Azure AD-based apps.
-Many organizations have Software as a Service (SaaS) or custom Line-of-Business (LOB) apps federated directly to AD FS, alongside Microsoft 365 and Azure AD-based apps.
-
-![Applications connected directly on-premises](media/migrate-adfs-apps-to-azure/app-integration-before-migration1.png)
-
-**To increase application security, your goal is to have a single set of access controls and policies across your on-premises and cloud environments**.
-
-![Applications connected through Azure AD](media/migrate-adfs-apps-to-azure/app-integration-after-migration1.png)
+ ![Applications connected directly on-premises](media/migrate-adfs-apps-to-azure/app-integration-before-migration-1.png)
+> [!Important]
+> To increase application security, your goal is to have a single set of access controls and policies across your on-premises and cloud environments.
+ ![Applications connected through Azure AD](media/migrate-adfs-apps-to-azure/app-integration-after-migration-1.png)
## Types of apps to migrate

Migrating all your application authentication to Azure AD is optimal, as it gives you a single control plane for identity and access management.
-Your applications may use modern or legacy protocols for authentication. Consider first migrating applications that use modern authentication protocols (such as SAML and Open ID Connect). These apps can be reconfigured to authenticate with Azure AD via either a built-in connector in our App Gallery, or by registering the application in Azure AD. Apps using older protocols can be integrated using [Application Proxy](./what-is-application-proxy.md).
+Your applications may use modern or legacy protocols for authentication. When you plan your migration to Azure AD, consider migrating the apps that use modern authentication protocols (such as SAML and OpenID Connect) first. These apps can be reconfigured to authenticate with Azure AD either via a built-in connector from the Azure App Gallery, or by registering the application in Azure AD. Apps that use older protocols can be integrated using Application Proxy.
-For more information, see [What types of applications can I integrate with Azure AD](./what-is-application-management.md)?
+For more information, see:
-You can use the [AD FS application activity report to migrate applications to Azure AD](./migrate-adfs-application-activity.md) if you have [Azure AD Connect Health enabled](../hybrid/how-to-connect-health-adfs.md).
+* [Using Azure AD Application Proxy to publish on-premises apps for remote users](what-is-application-proxy.md).
+* [What is application management?](what-is-application-management.md)
+* [AD FS application activity report to migrate applications to Azure AD](migrate-adfs-application-activity.md).
+* [Monitor AD FS using Azure AD Connect Health](../hybrid/how-to-connect-health-adfs.md).
### The migration process
-During the process of moving your app authentication to Azure AD, adequately test your apps and configuration. We recommend that you continue to use existing test environments for migration testing moving to the production environment. If a test environment is not currently available, you can set one up using [Azure App Service](https://azure.microsoft.com/services/app-service/) or [Azure Virtual Machines](https://azure.microsoft.com/free/virtual-machines/search/?OCID=AID2000128_SEM_lHAVAxZC&MarinID=lHAVAxZC_79233574796345_azure%20virtual%20machines_be_c__1267736956991399_kwd-79233582895903%3Aloc-190&lnkd=Bing_Azure_Brand&msclkid=df6ac75ba7b612854c4299397f6ab5b0&ef_id=XmAptQAAAJXRb3S4%3A20200306231230%3As&dclid=CjkKEQiAhojzBRDg5ZfomsvdiaABEiQABCU7XjfdCUtsl-Abe1RAtAT35kOyI5YKzpxRD6eJS2NM97zw_wcB), depending on the architecture of the application.
+During the process of moving your app authentication to Azure AD, test your apps and configuration. We recommend that you continue to use existing test environments for migration testing when you move to the production environment. If a test environment isn't currently available, you can set one up using [Azure App Service](https://azure.microsoft.com/services/app-service/) or [Azure Virtual Machines](https://azure.microsoft.com/free/virtual-machines/search/?OCID=AID2000128_SEM_lHAVAxZC&MarinID=lHAVAxZC_79233574796345_azure%20virtual%20machines_be_c__1267736956991399_kwd-79233582895903%3Aloc-190&lnkd=Bing_Azure_Brand&msclkid=df6ac75ba7b612854c4299397f6ab5b0&ef_id=XmAptQAAAJXRb3S4%3A20200306231230%3As&dclid=CjkKEQiAhojzBRDg5ZfomsvdiaABEiQABCU7XjfdCUtsl-Abe1RAtAT35kOyI5YKzpxRD6eJS2NM97zw_wcB), depending on the architecture of the application.
-You may choose to set up a separate test Azure AD tenant to use as you develop your app configurations.
+You may choose to set up a separate test Azure AD tenant on which to develop your app configurations.
Your migration process may look like this:
-**Stage 1 – Current state: Production app authenticating with AD FS**
+#### Stage 1 – Current state: The production app authenticates with AD FS
-![Migration stage 1 ](media/migrate-adfs-apps-to-azure/stage1.jpg)
+ ![Migration stage 1 ](media/migrate-adfs-apps-to-azure/stage1.jpg)
-
-**Stage 2 – OPTIONAL: Test instance of app pointing to test Azure tenant**
+#### Stage 2 – (Optional) Point a test instance of the app to the test Azure AD tenant
Update the configuration to point your test instance of the app to a test Azure AD tenant, and make any required changes. The app can be tested with users in the test Azure AD tenant. During the development process, you can use tools such as [Fiddler](https://www.telerik.com/fiddler) to compare and verify requests and responses.
-If setting up a separate test tenant isn't feasible, skip this stage and stand up a test instance of an app and point it to your production Azure AD tenant as described in Stage 3 below.
-
-![Migration stage 2 ](media/migrate-adfs-apps-to-azure/stage2.jpg)
+If it isn't feasible to set up a separate test tenant, skip this stage and point a test instance of the app to your production Azure AD tenant as described in Stage 3 below.
-**Stage 3 – Test app pointing to production Azure tenant**
+ ![Migration stage 2 ](media/migrate-adfs-apps-to-azure/stage2.jpg)
-Update the configuration to point your test instance of the app to your production instance of Azure. You can now test with users in your production instance. If necessary review the section of this article on transitioning users.
+#### Stage 3 – Point a test instance of the app to the production Azure AD tenant
-![Migration stage 3 ](media/migrate-adfs-apps-to-azure/stage3.jpg)
+Update the configuration to point your test instance of the app to your production Azure AD tenant. You can now test with users in your production tenant. If necessary, review the section of this article on transitioning users.
-**Stage 4 – Production app pointing to production AD tenant**
+ ![Migration stage 3 ](media/migrate-adfs-apps-to-azure/stage3.jpg)
-Update the configuration of your production application to point to your production Azure tenant.
+#### Stage 4 – Point the production app to the production Azure AD tenant
-![Migration stage 4 ](media/migrate-adfs-apps-to-azure/stage4.jpg)
+Update the configuration of your production app to point to your production Azure AD tenant.
- Apps that authenticate with AD FS may use Active Directory groups for permissions. Use [Azure AD Connect sync](../hybrid/how-to-connect-sync-whatis.md) to synchronize identity data between your on-premises environment and Azure AD before you begin migration. Verify those groups and membership before migration so that you can grant access to the same users when the application is migrated.
+ ![Migration stage 4 ](media/migrate-adfs-apps-to-azure/stage4.jpg)
-### Line of business (LOB) apps
+ Apps that authenticate with AD FS can use Active Directory groups for permissions. Use [Azure AD Connect sync](../hybrid/how-to-connect-sync-whatis.md) to sync identity data between your on-premises environment and Azure AD before you begin migration. Verify those groups and membership before migration so that you can grant access to the same users when the application is migrated.
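For that verification step, a minimal Azure AD PowerShell check might look like the following (the group name is a placeholder; compare the output against the membership in on-premises AD):

```powershell
# Connect to the Azure AD tenant
Connect-AzureAD

# Confirm the on-premises group was synced to Azure AD (group name is a placeholder)
$group = Get-AzureADGroup -SearchString "App-Finance-Users"

# List the members that would receive access after migration
Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true |
    Select-Object DisplayName, UserPrincipalName
```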
-LOB apps are developed internally by your organization or available as a standard packaged product that's installed in your data center. Examples include apps built on Windows Identity Foundation and SharePoint apps (not SharePoint Online).
+### Line of business apps
-LOB apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](./add-application-portal.md) on the enterprise applications page in the [Azure portal](https://portal.azure.com/).
+Your line-of-business apps are those that your organization developed or those that are a standard packaged product. Examples include apps built on Windows Identity Foundation and SharePoint apps (not SharePoint Online).
-## SAML-based single sign-On
+Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Azure portal](https://portal.azure.com/).
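As an illustration of that app-registration path, a minimal Azure AD PowerShell sketch follows; the display name and reply URL are placeholders, and the portal or Microsoft Graph can do the same:

```powershell
# Register the line-of-business app in Azure AD (name and URL are placeholders)
Connect-AzureAD
$app = New-AzureADApplication -DisplayName "Contoso LOB App" -ReplyUrls @("https://lob.contoso.com/signin-oidc")

# Create the service principal so users can be assigned and can sign in
New-AzureADServicePrincipal -AppId $app.AppId
```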
-Apps that use SAML 2.0 for authentication can be configured for [SAML-based single sign-on](./what-is-single-sign-on.md) (SAML-based SSO). With [SAML-based SSO](./what-is-single-sign-on.md), you can map users to specific application roles based on rules that you define in your SAML claims.
+## SAML-based single sign-on
-To configure a SaaS application for SAML-based single sign-on, see [Configure SAML-based single sign-on](./view-applications-portal.md).
+Apps that use SAML 2.0 for authentication can be configured for [SAML-based single sign-on](what-is-single-sign-on.md) (SSO). With SAML-based SSO, you can map users to specific application roles based on rules that you define in your SAML claims.
-![SSO SAML User Screenshots ](media/migrate-adfs-apps-to-azure/sso-saml-user-attributes-claims.png)
+To configure a SaaS application for SAML-based SSO, see [Quickstart: Set up SAML-based single sign-on](add-application-portal-setup-sso.md).
+ ![SSO SAML User Screenshots ](media/migrate-adfs-apps-to-azure/sso-saml-user-attributes-claims.png)
-Many SaaS applications have an [application-specific tutorial](../saas-apps/tutorial-list.md) that step you through the configuration for SAML-based single sign-on.
+Many SaaS applications have an [application-specific tutorial](../saas-apps/tutorial-list.md) that steps you through the configuration for SAML-based SSO.
-![app tutorial](media/migrate-adfs-apps-to-azure/app-tutorial.png)
+ ![app tutorial](media/migrate-adfs-apps-to-azure/app-tutorial.png)
-Some apps can be migrated easily. Apps with more complex requirements, such as custom claims, might require additional configuration in Azure AD and/or Azure AD Connect. For information about supported claims mappings, see [Claims mapping in Azure Active Directory](../develop/active-directory-claims-mapping.md).
+Some apps can be migrated easily. Apps with more complex requirements, such as custom claims, may require additional configuration in Azure AD and/or Azure AD Connect. For information about supported claims mappings, see [How to: Customize claims emitted in tokens for a specific app in a tenant (Preview)](../develop/active-directory-claims-mapping.md).
Keep in mind the following limitations when mapping attributes:
-* Not all attributes that can be issued in AD FS will show up in Azure AD as attributes to emit to SAML tokens, even if those attributes are synced. When you edit the attribute, the Value dropdown list will show you the different attributes that are available in Azure AD. Check [Azure AD Connect sync](../hybrid/how-to-connect-sync-whatis.md) configuration to ensure that a required attribute--for example, samAccountName--is being synced to Azure AD. You can use the extension attributes to emit any claim that isn't part of the standard user schema in Azure AD.
+* Not all attributes that can be issued in AD FS show up in Azure AD as attributes to emit to SAML tokens, even if those attributes are synced. When you edit the attribute, the **Value** dropdown list shows you the different attributes that are available in Azure AD. Check the [Azure AD Connect sync](../hybrid/how-to-connect-sync-whatis.md) configuration to ensure that a required attribute (for example, **samAccountName**) is synced to Azure AD. You can use the extension attributes to emit any claim that isn't part of the standard user schema in Azure AD.
+* In the most common scenarios, only the **NameID** claim and other common user identifier claims are required for an app. To determine if any additional claims are required, examine what claims you're issuing from AD FS.
+* Not all claims can be issued, as some claims are protected in Azure AD.
+* The ability to use encrypted SAML tokens is now in preview. See [How to: customize claims issued in the SAML token for enterprise applications](../develop/active-directory-saml-claims-customization.md).
-* In the most common scenarios, only the NameID claim and other common user identifier claims are required for an app. To determine if any additional claims are required, examine what claims you're issuing from AD FS.
+### Software as a service (SaaS) apps
-* Not all claims can be issues as some claims are protected in Azure AD.
+If your users sign in to SaaS apps such as Salesforce, ServiceNow, or Workday that are integrated with AD FS, you're using federated sign-on for SaaS apps.
-* The ability to use encrypted SAML tokens is now in preview. See [How to: customize claims issued in the SAML token for enterprise applications](../develop/active-directory-saml-claims-customization.md).
+Most SaaS applications can be configured in Azure AD. Microsoft has many preconfigured connections to SaaS apps in the [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), which makes your transition easier. SAML 2.0 applications can be integrated with Azure AD via the Azure AD app gallery or as [non-gallery applications](add-application-portal.md).
+Apps that use OAuth 2.0 or OpenID Connect can be similarly integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Apps that use legacy protocols can use [Azure AD Application Proxy](application-proxy.md) to authenticate with Azure AD.
+For any issues with onboarding your SaaS apps, you can contact the [SaaS Application Integration support alias](mailto:SaaSApplicationIntegrations@service.microsoft.com).
-### Software as a service (SaaS) apps
+### SAML signing certificates for SSO
-If your user's sign in to SaaS apps such as Salesforce, ServiceNow, or Workday, and are integrated with AD FS, you're using federated sign-on for SaaS apps.
+Signing certificates are an important part of any SSO deployment. Azure AD creates the signing certificates to establish SAML-based federated SSO to your SaaS applications. Once you add either gallery or non-gallery applications, you'll configure the added application using the federated SSO option. See [Manage certificates for federated single sign-on in Azure Active Directory](manage-certificates-for-federated-single-sign-on.md).
-Most SaaS applications can already be configured in Azure AD. Microsoft has many preconfigured connections to SaaS apps in the [Azure AD app gallery](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps), which will make your transition easier. SAML 2.0 applications can be integrated with Azure AD via the Azure AD app gallery or as [non-gallery applications](./add-application-portal.md).
+### SAML token encryption
-Apps that use OAuth 2.0 or OpenID Connect can be integrated with Azure AD similarly as [app registrations](../develop/quickstart-register-app.md). Apps that use legacy protocols can use [Azure AD Application Proxy](./application-proxy.md) to authenticate with Azure AD.
+Both AD FS and Azure AD provide token encryption: the ability to encrypt the SAML security assertions that go to applications. The assertions are encrypted with a public key and decrypted by the receiving application with the matching private key. When you configure token encryption, you upload X.509 certificate files to provide the public keys.
-For any issues with onboarding your SaaS apps, you can contact the [SaaS Application Integration support alias](mailto:SaaSApplicationIntegrations@service.microsoft.com).
+For information about Azure AD SAML token encryption and how to configure it, see [How to: Configure Azure AD SAML token encryption](howto-saml-token-encryption.md).
-**SAML signing certificates for SSO**: Signing certificates are an important part of any SSO deployment. Azure AD creates the signing certificates to establish SAML-based federated SSO to your SaaS applications. Once you add either gallery or non-gallery applications, you'll configure the added application using the federated SSO option. See [Manage certificates for federated single sign-on in Azure Active Directory](./manage-certificates-for-federated-single-sign-on.md).
+> [!NOTE]
+> Token encryption is an Azure Active Directory (Azure AD) premium feature. To learn more about Azure AD editions, features, and pricing, see [Azure AD pricing](https://azure.microsoft.com/pricing/details/active-directory/).
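If you need a certificate for testing token encryption, the following is a minimal sketch that creates a self-signed one with standard Windows PowerShell cmdlets and exports the public key file you upload to Azure AD. The subject name, validity period, and file path are placeholders; for production, prefer a certificate from your own PKI.

```powershell
# Create a self-signed certificate; its public key will encrypt SAML assertions.
$cert = New-SelfSignedCertificate -Subject "CN=contoso-saml-encryption" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyAlgorithm RSA -KeyLength 2048 `
    -NotAfter (Get-Date).AddYears(2)

# Export only the public portion (.cer); this is the file you upload to Azure AD.
Export-Certificate -Cert $cert -FilePath ".\saml-encryption.cer"
```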
### Apps and configurations that can be moved today
Apps that you can move easily today include SAML 2.0 apps that use the standard set of configuration elements and claims. These standard items are:

* User Principal Name
* Email address
* Given name
* Surname
* Alternate attribute as SAML **NameID**, including the Azure AD mail attribute, mail prefix, employee ID, extension attributes 1-15, or on-premises **SamAccountName** attribute. For more information, see [Editing the NameIdentifier claim](../develop/active-directory-saml-claims-customization.md).
* Custom claims.

The following require additional configuration steps to migrate to Azure AD:
* Custom authorization or multi-factor authentication (MFA) rules in AD FS. You configure them using the [Azure AD Conditional Access](../conditional-access/overview.md) feature.
* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Azure portal interface, as shown in the sketch after this list.
* WS-Federation apps such as SharePoint apps that require SAML version 1.1 tokens. You can configure them manually using PowerShell. You can also add a pre-integrated generic template for SharePoint and SAML 1.1 applications from the gallery. We support the SAML 2.0 protocol.
* Complex claims issuance transform rules. For information about supported claims mappings, see:
  * [Claims mapping in Azure Active Directory](../develop/active-directory-claims-mapping.md)
  * [Customizing claims issued in the SAML token for enterprise applications in Azure Active Directory](../develop/active-directory-saml-claims-customization.md)
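As a sketch of the multiple Reply URL case, the following sets every endpoint in one operation with the AzureAD PowerShell module. The application display name and URLs are placeholders.

```powershell
Connect-AzureAD

# Look up the app registration by its display name (placeholder name).
$app = Get-AzureADApplication -Filter "DisplayName eq 'Contoso Expense App'"

# Register every SAML assertion consumer endpoint the app responds on.
Set-AzureADApplication -ObjectId $app.ObjectId -ReplyUrls @(
    "https://expenses.contoso.com/saml/acs",
    "https://expenses-eu.contoso.com/saml/acs"
)
```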
### Apps and configurations not supported in Azure AD today
Apps that require certain capabilities can't be migrated today.

#### Protocol capabilities

Apps that require the following protocol capabilities can't be migrated today:

* Support for the WS-Trust ActAs pattern
* SAML artifact resolution
* Signature verification of signed SAML requests

  > [!NOTE]
  > Signed requests are accepted, but the signature isn't verified. Given that Azure AD only returns the token to endpoints preconfigured in the application, signature verification probably isn't required in most cases.

#### Claims in token capabilities

Apps that require the following claims in token capabilities can't be migrated today:

* Claims from attribute stores other than the Azure AD directory, unless that data is synced to Azure AD. For more information, see the [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview?view=graph-rest-beta).
* Issuance of multiple-value directory attributes. For example, we can't issue a multivalued claim for proxy addresses at this time.

## Map app settings from AD FS to Azure AD
Migration requires assessing how the application is configured on-premises, and then mapping that configuration to Azure AD. AD FS and Azure AD work similarly, so the concepts of configuring trust, sign-on and sign-out URLs, and identifiers apply in both cases. Document the AD FS configuration settings of your applications so that you can easily configure them in Azure AD.

### Map app configuration settings

The following table describes some of the most common mappings of settings from an AD FS Relying Party Trust to an Azure AD Enterprise Application:

* AD FS: Find the setting in the AD FS Relying Party Trust for the app. Right-click the relying party and select **Properties**.
* Azure AD: The setting is configured in the [Azure portal](https://portal.azure.com/) in each application's SSO properties.
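To capture the AD FS side of the table below in one pass, you can export each relying party trust with the ADFS PowerShell module that ships with the server role. A minimal sketch; the output path is a placeholder.

```powershell
# Run on an AD FS server.
Import-Module ADFS

# Dump the settings that map to the table below; pass -Name to target one app.
Get-AdfsRelyingPartyTrust |
    Select-Object Name, Identifier, Endpoints, IssuanceTransformRules |
    Out-File -FilePath "C:\temp\adfs-relying-party-trusts.txt"
```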
| Configuration setting| AD FS| How to configure in Azure AD| SAML Token |
| - | - | - | - |
| **App sign-on URL** <p>The URL for the user to sign in to the app in a SAML flow initiated by a Service Provider (SP).| N/A| Open Basic SAML Configuration from SAML-based sign-on.| N/A |
| **App reply URL** <p>The URL of the app from the perspective of the identity provider (IdP). The IdP sends the user and token here after the user has signed in to the IdP. This is also known as the **SAML assertion consumer endpoint**.| Select the **Endpoints** tab.| Open Basic SAML Configuration from SAML-based sign-on.| Destination element in the SAML token. Example value: `https://contoso.my.salesforce.com` |
| **App sign-out URL** <p>The URL to which sign-out cleanup requests are sent when a user signs out from an app. The IdP sends the request to sign out the user from all other apps as well.| Select the **Endpoints** tab.| Open Basic SAML Configuration from SAML-based sign-on.| N/A |
| **App identifier** <p>The app identifier from the IdP's perspective. The sign-on URL value is often used for the identifier (but not always). Sometimes the app calls this the "entity ID."| Select the **Identifiers** tab.| Open Basic SAML Configuration from SAML-based sign-on.| Maps to the **Audience** element in the SAML token. |
| **App federation metadata** <p>The location of the app's federation metadata. The IdP uses it to automatically update specific configuration settings, such as endpoints or encryption certificates.| Select the **Monitoring** tab.| N/A. Azure AD doesn't support consuming application federation metadata directly. You can manually import the federation metadata.| N/A |
| **User Identifier/Name ID** <p>The attribute that is used to uniquely identify the user from Azure AD or AD FS to your app. This attribute is typically either the UPN or the email address of the user.| Claim rules. In most cases, the claim rule issues a claim with a type that ends with **NameIdentifier**.| Find the identifier under the **User Attributes and Claims** header. By default, the UPN is used.| Maps to the **NameID** element in the SAML token. |
| **Other claims** <p>Examples of other claim information that is commonly sent from the IdP to the app include first name, last name, email address, and group membership.| In AD FS, you can find this as other claim rules on the relying party.| Find the claims under the **User Attributes & Claims** header. Select **View** and edit all other user attributes.| N/A |
### Map Identity Provider (IdP) settings
Configure your applications to point to Azure AD instead of AD FS for SSO. Here, we're focusing on SaaS apps that use the SAML protocol. However, this concept extends to custom line-of-business apps as well.
> [!NOTE]
> The configuration values for Azure AD follow a pattern where your Azure tenant ID replaces {tenant-id} and the application ID replaces {application-id}. You find this information in the [Azure portal](https://portal.azure.com/) under **Azure Active Directory > Properties**:
* Select **Directory ID** to see your Tenant ID.
* Select **Application ID** to see your Application ID.

At a high level, map the following key SaaS apps configuration elements to Azure AD:

| Element| Configuration Value |
| - | - |
| Identity provider issuer| https:\//sts.windows.net/{tenant-id}/ |
| Identity provider login URL| [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) |
| Identity provider logout URL| [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) |
| Federation metadata location| [https://login.windows.net/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml?appid={application-id}](https://login.windows.net/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml?appid={application-id}) |

### Map SSO settings for SaaS apps

SaaS apps need to know where to send authentication requests and how to validate the received tokens. The following table describes the elements to configure SSO settings in the app, and their values or locations within AD FS and Azure AD.
| Configuration element| Value or location in AD FS| Value or location in Azure AD |
| - | - | - |
| **Identifier/"issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ |
| **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml`| Replace {tenant-id} and {application-id} with your own values.<p>https:\//login.windows.net/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml?appid={application-id} |
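To confirm that both metadata endpoints resolve before you repoint an app, you can fetch them directly. A minimal sketch; the AD FS host name, tenant ID, and application ID are placeholders to substitute.

```powershell
# Substitute your own AD FS host, tenant ID, and application ID.
$adfsMetadata  = "https://fs.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml"
$azureMetadata = "https://login.windows.net/{tenant-id}/federationmetadata/2007-06/federationmetadata.xml?appid={application-id}"

# Each request should return well-formed XML that includes the signing certificates.
[xml]$adfsDoc  = (Invoke-WebRequest -Uri $adfsMetadata -UseBasicParsing).Content
[xml]$azureDoc = (Invoke-WebRequest -Uri $azureMetadata -UseBasicParsing).Content
$adfsDoc.EntityDescriptor.entityID
$azureDoc.EntityDescriptor.entityID
```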
## Represent AD FS security policies in Azure AD

When you move your app authentication to Azure AD, create mappings from your existing security policies to their equivalent or alternative variants available in Azure AD. Ensuring that these mappings can be made while meeting the security standards required by your app owners makes the rest of the app migration significantly easier.

For each rule example, we show what the rule looks like in AD FS, the AD FS rule language equivalent code, and how it maps to Azure AD.
### Map authorization rules
The following are examples of various types of authorization rules in AD FS, and how you map them to Azure AD.

#### Example 1: Permit access to all users

Permit Access to All Users in AD FS:

   ![Screenshot shows the Set up Single Sign-On with SAML dialog box.](media/migrate-adfs-apps-to-azure/permit-access-to-all-users-1.png)

This maps to Azure AD in one of the following ways:

1. Set **User assignment required** to **No** (see the PowerShell sketch after these steps).

   ![edit access control policy for SaaS apps ](media/migrate-adfs-apps-to-azure/permit-access-to-all-users-2.png)

   > [!NOTE]
   > Setting **User assignment required** to **Yes** requires that users are assigned to the application to gain access. When set to **No**, all users have access. This switch doesn't control what users see in the **My Apps** experience.

1. In the **Users and groups** tab, assign your application to the **All Users** automatic group. You must [enable Dynamic Groups](../enterprise-users/groups-create-rule.md) in your Azure AD tenant for the default **All Users** group to be available.

   ![My SaaS Apps in Azure AD ](media/migrate-adfs-apps-to-azure/permit-access-to-all-users-3.png)
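As a sketch of the first option in PowerShell, assuming the AzureAD module and a placeholder app name, you can flip the assignment requirement on the app's service principal:

```powershell
Connect-AzureAD

# The service principal is the tenant's instance of the app (the enterprise application).
$sp = Get-AzureADServicePrincipal -SearchString "Contoso Expense App"

# $false corresponds to "User assignment required = No": all users get access.
Set-AzureADServicePrincipal -ObjectId $sp.ObjectId -AppRoleAssignmentRequired $false
```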
#### Example 2: Allow a group explicitly

Explicit group authorization in AD FS:

   ![Screenshot shows the Edit Rule dialog box for the Allow domain admins Claim rule.](media/migrate-adfs-apps-to-azure/allow-a-group-explicitly-1.png)

To map this rule to Azure AD:

1. In the [Azure portal](https://portal.azure.com/), [create a user group](../fundamentals/active-directory-groups-create-azure-portal.md) that corresponds to the group of users from AD FS.
1. Assign app permissions to the group (a PowerShell sketch follows these steps):

   ![Add Assignment ](media/migrate-adfs-apps-to-azure/allow-a-group-explicitly-2.png)
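The same mapping can be scripted. A minimal sketch with the AzureAD PowerShell module; the group and app names are placeholders, and the empty GUID assigns the app's default role.

```powershell
Connect-AzureAD

# Create a security group that mirrors the AD FS group.
$group = New-AzureADGroup -DisplayName "Expense App Users" `
    -MailEnabled $false -SecurityEnabled $true -MailNickName "NotSet"

# Assign the group to the app's service principal.
$sp = Get-AzureADServicePrincipal -SearchString "Contoso Expense App"
New-AzureADGroupAppRoleAssignment -ObjectId $group.ObjectId `
    -PrincipalId $group.ObjectId -ResourceId $sp.ObjectId -Id ([Guid]::Empty)
```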
#### Example 3: Authorize a specific user

Explicit user authorization in AD FS:

   ![Screenshot shows the Edit Rule dialog box for the Allow a specific user Claim rule with an Incoming claim type of Primary S I D.](media/migrate-adfs-apps-to-azure/authorize-a-specific-user-1.png)

To map this rule to Azure AD:

* In the [Azure portal](https://portal.azure.com/), add a user to the app through the **Add Assignment** tab of the app, as shown below (a PowerShell sketch follows):

   ![My SaaS apps in Azure ](media/migrate-adfs-apps-to-azure/authorize-a-specific-user-2.png)
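A PowerShell sketch of the same assignment, assuming the AzureAD module; the UPN and app name are placeholders:

```powershell
Connect-AzureAD

# Assign a single user to the app's default role.
$user = Get-AzureADUser -ObjectId "ellen@contoso.com"
$sp   = Get-AzureADServicePrincipal -SearchString "Contoso Expense App"

New-AzureADUserAppRoleAssignment -ObjectId $user.ObjectId `
    -PrincipalId $user.ObjectId -ResourceId $sp.ObjectId -Id ([Guid]::Empty)
```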
### Map multi-factor authentication rules

An on-premises deployment of [Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) and AD FS still works after the migration because you are federated with AD FS. However, consider migrating to Azure's built-in MFA capabilities, which are tied into Azure AD's Conditional Access workflows.

The following are examples of types of MFA rules in AD FS, and how you can map them to Azure AD based on different conditions.

MFA rule settings in AD FS:

   ![Screenshot shows Conditions for Azure A D in the Azure portal.](media/migrate-adfs-apps-to-azure/mfa-settings-common-for-all-examples.png)
#### Example 1: Enforce MFA based on users/groups
The users/groups selector is a rule that allows you to enforce MFA on a per-group (Group SID) or per-user (Primary SID) basis. Apart from the users/groups assignments, all additional checkboxes in the AD FS MFA configuration UI function as additional rules that are evaluated after the users/groups rule is enforced.

Specify MFA rules for a user or a group in Azure AD:

1. Create a [new conditional access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json).
1. Select **Assignments**. Add the user(s) or group(s) for which you want to enforce MFA.
1. Configure the **Access controls** options as shown below:

   ![Screenshot shows the Grant pane where you can grant access.](media/migrate-adfs-apps-to-azure/mfa-users-groups.png)
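Conditional Access policies can also be created programmatically through the Microsoft Graph conditionalAccess API. A hedged sketch: the group ID, application ID, and `$token` (an access token holding the Policy.ReadWrite.ConditionalAccess permission) are placeholders, and the policy is created in report-only mode.

```powershell
$body = @{
    displayName = "Require MFA - Contoso Expense App users"
    state       = "enabledForReportingButNotEnforced"   # report-only while testing
    conditions  = @{
        users        = @{ includeGroups = @("<group-object-id>") }
        applications = @{ includeApplications = @("<application-id>") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body
```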
#### Example 2: Enforce MFA for unregistered devices

Specify MFA rules for unregistered devices in Azure AD:

1. Create a [new conditional access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json).
1. Set the **Assignments** to **All users**.
1. Configure the **Access controls** options as shown below:

   ![Screenshot shows the Grant pane where you can grant access and specify other restrictions.](media/migrate-adfs-apps-to-azure/mfa-unregistered-devices.png)

When you set the **For multiple controls** option to **Require one of the selected controls**, a user who satisfies any one of the selected controls is granted access to your app.
#### Example 3: Enforce MFA based on location

Specify MFA rules based on a user's location in Azure AD:

1. Create a [new conditional access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json).
1. Set the **Assignments** to **All users**.
1. [Configure named locations in Azure AD](../reports-monitoring/quickstart-configure-named-locations.md). Otherwise, federation from inside your corporate network is trusted.
1. Configure the **Conditions rules** to specify the locations for which you would like to enforce MFA.

   ![Screenshot shows the Locations pane for Conditions rules.](media/migrate-adfs-apps-to-azure/mfa-location-1.png)

1. Configure the **Access controls** options as shown below:

   ![Map access control policies](media/migrate-adfs-apps-to-azure/mfa-location-2.png)
### Map Emit attributes as Claims rule
Emit attributes as Claims rule in AD FS:

   ![Screenshot shows the Edit Rule dialog box for Emit attributes as Claims.](media/migrate-adfs-apps-to-azure/map-emit-attributes-as-claims-rule-1.png)

To map the rule to Azure AD:

1. In the [Azure portal](https://portal.azure.com/), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration:

   ![Screenshot shows the Single sign-on page for your Enterprise Application.](media/migrate-adfs-apps-to-azure/map-emit-attributes-as-claims-rule-2.png)

1. Select **Edit** (highlighted) to modify the attributes:

   ![This is the page to edit User Attributes and Claims](media/migrate-adfs-apps-to-azure/map-emit-attributes-as-claims-rule-3.png)
### Map built-in access control policies

Built-in access control policies in AD FS 2016:

   ![Azure AD built in access control](media/migrate-adfs-apps-to-azure/map-built-in-access-control-policies-1.png)

To implement built-in policies in Azure AD, use a [new conditional access policy](../authentication/tutorial-enable-azure-mfa.md?bc=%2fazure%2factive-directory%2fconditional-access%2fbreadcrumb%2ftoc.json&toc=%2fazure%2factive-directory%2fconditional-access%2ftoc.json) and configure the access controls, or use the custom policy designer in AD FS 2016 to configure access control policies. The Rule Editor has an exhaustive list of Permit and Except options that can help you create all kinds of permutations.

   ![Azure AD access control policies](media/migrate-adfs-apps-to-azure/map-built-in-access-control-policies-2.png)
In this table, we've listed some useful Permit and Except options and how they map to Azure AD.

| Option | How to configure Permit option in Azure AD?| How to configure Except option in Azure AD? |
| - | - | - |
| From specific network| Maps to [Named Location](../reports-monitoring/quickstart-configure-named-locations.md) in Azure AD| Use the **Exclude** option for [trusted locations](../conditional-access/location-condition.md) |
| From specific groups| [Set a User/Groups Assignment](assign-user-or-group-access-portal.md)| Use the **Exclude** option in Users and Groups |
| From Devices with Specific Trust Level| Set this from the **Device State** control under **Assignments > Conditions**| Use the **Exclude** option under Device State Condition and include **All devices** |
| With Specific Claims in the Request| This setting can't be migrated| This setting can't be migrated |
Here's an example of how to configure the Exclude option for trusted locations in the Azure portal:

   ![Screenshot of mapping access control policies](media/migrate-adfs-apps-to-azure/map-built-in-access-control-policies-3.png)
## Transition users from AD FS to Azure AD
When you map authorization rules, apps that authenticate with AD FS may use Active Directory groups for permissions. In that case, synchronize those groups to Azure AD before you migrate the applications.
For more information, see [Prerequisites for using Group attributes synchronized from Active Directory](../hybrid/how-to-connect-fed-group-claims.md).
### Set up user self-provisioning

Some SaaS applications support the ability to self-provision users when they first sign in to the application. In Azure AD, app provisioning refers to automatically creating user identities and roles in the cloud ([SaaS](https://azure.microsoft.com/overview/what-is-saas/)) applications that users need to access. Users that are migrated already have an account in the SaaS application. Any new users added after the migration need to be provisioned. Test [SaaS app provisioning](../app-provisioning/user-provisioning.md) once the application is migrated.
### Sync external users in Azure AD
Your existing external users can be set up in these two ways in AD FS:

* **External users with a local account within your organization**: You continue to use these accounts in the same way that your internal user accounts work. These external user accounts have a principal name within your organization, although the account's email may point externally. As you progress with your migration, you can take advantage of the benefits that [Azure AD B2B](../external-identities/what-is-b2b.md) offers by migrating these users to use their own corporate identity when such an identity is available. This streamlines the sign-in process for those users, as they're often signed in with their own corporate logon. Your organization's administration is easier as well, because you no longer have to manage accounts for external users.
* **Federated external identities**: If you are currently federating with an external organization, you have a few approaches to take:
  * [Add Azure Active Directory B2B collaboration users in the Azure portal](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization so that individual members can continue using the apps and assets they're used to.
  * [Create a self-service B2B sign-up workflow](../external-identities/self-service-portal.md) that generates a request for individual users at your partner organization using the B2B invitation API.

No matter how your existing external users are configured, they likely have permissions that are associated with their account, either in group membership or specific permissions. Evaluate whether these permissions need to be migrated or cleaned up. Accounts within your organization that represent an external user need to be disabled once the user has been migrated to an external identity. The migration process should be discussed with your business partners, as there may be an interruption in their ability to connect to your resources.

## Migrate and test your apps
Follow the migration process detailed in this article. Then go to the [Azure portal](https://aad.portal.azure.com/) to test if the migration was a success.

Follow these instructions:

1. Select **Enterprise Applications** > **All applications** and find your app from the list.
1. Select **Manage** > **Users and groups** to assign at least one user or group to the app.
1. Select **Manage** > **Conditional Access**. Review your list of policies and ensure that you are not blocking access to the application with a [conditional access policy](../conditional-access/overview.md).

Depending on how you configure your app, verify that SSO works properly.

| Authentication type| Testing |
| :- | :- |
| OAuth / OpenID Connect| Select **Enterprise applications > Permissions** and ensure you have consented to the application in the user settings for your app. |
| SAML-based SSO | Use the [Test SAML Settings](debug-saml-sso-issues.md) button found under **Single Sign-On**. |
| Password-based SSO | Download and install the [MyApps Secure Sign-in Extension](../user-help/my-apps-portal-end-user-access.md). This extension helps you start any of your organization's cloud apps that require you to use an SSO process. |
| Application Proxy | Ensure your connector is running and assigned to your application. See the [Application Proxy troubleshooting guide](application-proxy-troubleshoot.md) for further assistance. |
> [!NOTE]
> Cookies from the old AD FS environment persist on users' machines. These cookies might cause problems with the migration, because users could be directed to the old AD FS login environment instead of the new Azure AD login. You may need to clear the user browser cookies manually or with a script. You can also use System Center Configuration Manager or a similar platform.
### Troubleshoot
If there are any errors from the test of the migrated applications, troubleshooting may be the first step before falling back to the existing AD FS Relying Parties. See [How to debug SAML-based single sign-on to applications in Azure Active Directory](debug-saml-sso-issues.md).
### Rollback migration
If the migration fails, we recommend that you leave the existing Relying Parties on the AD FS servers and remove access to the Relying Parties. This allows for a quick fallback if needed during the deployment.
### Employee communication

While the planned outage window itself can be minimal, you should still plan on communicating these timeframes proactively to employees while switching from AD FS to Azure AD. Ensure that your app experience has a feedback button, or pointers to your helpdesk for issues.

Once deployment is complete, you can inform users of the successful deployment and remind them of any steps that they need to take.
* Instruct users to use [My Apps](https://myapps.microsoft.com) to access all the migrated applications.
* Remind users that they might need to update their MFA settings.
* If Self-Service Password Reset is deployed, users might need to update or verify their authentication methods. See the [MFA](https://aka.ms/mfatemplates) and [SSPR](https://aka.ms/ssprtemplates) end-user communication templates.
### External user communication

This group of users is usually the most critically impacted in case of issues. This is especially true if your security posture dictates a different set of Conditional Access rules or risk profiles for external partners. Ensure that external partners are aware of the cloud migration schedule and have a timeframe during which they are encouraged to participate in a pilot deployment that tests all flows unique to external collaboration. Finally, ensure they have a way to access your helpdesk in case there are problems.
## Next steps

* Read [Migrating application authentication to Azure AD](https://aka.ms/migrateapps/whitepaper).
* Set up [Conditional Access](../conditional-access/overview.md) and [MFA](../authentication/concept-mfa-howitworks.md).
* Try a step-wise code sample: [AD FS to Azure AD application migration playbook for developers](https://aka.ms/adfsplaybook).
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Refer to the following list to configure managed identity for Azure App Service
Azure Arc enabled Kubernetes currently [supports system assigned identity](../../azure-arc/kubernetes/connect-cluster.md#azure-arc-agents-for-kubernetes). The managed service identity certificate is used by all Azure Arc enabled Kubernetes agents for communication with Azure.
### Azure Arc enabled servers

| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
| - | :-: | :-: | :-: | :-: |
| System assigned | ![Available][check] | ![Available][check] | Not available | Not available |
| User assigned | Not available | Not available | Not available | Not available |

All Azure Arc enabled servers have a system assigned identity. You can't disable or change the system assigned identity on an Azure Arc enabled server. Refer to the following resources to learn more about how to consume managed identities on Azure Arc enabled servers:

- [Authenticate against Azure resources with Arc enabled servers](../../azure-arc/servers/managed-identity-authentication.md)
- [Using a managed identity with Arc enabled servers](../../azure-arc/servers/security-overview.md#using-a-managed-identity-with-arc-enabled-servers)
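As a rough sketch of what consuming the identity looks like on an Arc enabled server (Windows), the local identity endpoint follows the Azure IMDS pattern but adds a challenge-token handshake. The api-version and resource URI below are examples; check the linked articles for the authoritative flow.

```powershell
$uri = $env:IDENTITY_ENDPOINT + "?api-version=2020-06-01&resource=https://management.azure.com/"

try {
    Invoke-WebRequest -Uri $uri -Headers @{ Metadata = "true" } -UseBasicParsing
} catch {
    # The first call returns 401; the challenge names a key file readable only by admins.
    $challenge = $_.Exception.Response.Headers["WWW-Authenticate"]
    $key = Get-Content -Path (($challenge -split "realm=")[1].Trim('"')) -Raw
    $token = (Invoke-RestMethod -Uri $uri `
        -Headers @{ Metadata = "true"; Authorization = "Basic $key" }).access_token
}
```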
### Azure Automanage

| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
active-directory Reference Azure Monitor Sign Ins Log Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema.md
Previously updated : 04/18/2019 Last updated : 03/12/2021
This article describes the Azure Active Directory (Azure AD) sign-in log schema
## Field descriptions
| Field name | Key | Description |
| - | - | - |
| Time | - | The date and time, in UTC. |
| ResourceId | - | This value is unmapped, and you can safely ignore this field. |
| OperationName | - | For sign-ins, this value is always *Sign-in activity*. |
| OperationVersion | - | The REST API version that's requested by the client. |
| Category | - | For sign-ins, this value is always *SignIn*. |
| TenantId | - | The tenant GUID that's associated with the logs. |
| ResultType | - | The result of the sign-in operation can be *Success* or *Failure*. |
| ResultSignature | - | Contains the error code, if any, for the sign-in operation. |
| ResultDescription | N/A or blank | Provides the error description for the sign-in operation. |
| riskDetail | riskDetail | Provides the 'reason' behind a specific state of a risky user, sign-in, or risk detection. The possible values are: `none`, `adminGeneratedTemporaryPassword`, `userPerformedSecuredPasswordChange`, `userPerformedSecuredPasswordReset`, `adminConfirmedSigninSafe`, `aiConfirmedSigninSafe`, `userPassedMFADrivenByRiskBasedPolicy`, `adminDismissedAllRiskForUser`, `adminConfirmedSigninCompromised`, `unknownFutureValue`. The value `none` means that no action has been performed on the user or sign-in so far. <br>**Note:** Details for this property require an Azure AD Premium P2 license. Other licenses return the value `hidden`. |
| riskEventTypes | riskEventTypes | Risk detection types associated with the sign-in. The possible values are: `unlikelyTravel`, `anonymizedIPAddress`, `maliciousIPAddress`, `unfamiliarFeatures`, `malwareInfectedIPAddress`, `suspiciousIPAddress`, `leakedCredentials`, `investigationsThreatIntelligence`, `generic`, and `unknownFutureValue`. |
| authProcessingDetails | Azure AD app authentication library | Contains family, library, and platform information in the format: "Family: ADAL Library: ADAL.JS 1.0.0 Platform: JS" |
| authProcessingDetails | IsCAEToken | Values are True or False |
| riskLevelAggregated | riskLevel | Aggregated risk level. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. Other licenses return the value `hidden`. |
| riskLevelDuringSignIn | riskLevel | Risk level during sign-in. The possible values are: `none`, `low`, `medium`, `high`, `hidden`, and `unknownFutureValue`. The value `hidden` means the user or sign-in wasn't enabled for Azure AD Identity Protection. **Note:** Details for this property are only available for Azure AD Premium P2 customers. Other licenses return the value `hidden`. |
| riskState | riskState | Reports the status of the risky user, sign-in, or risk detection. The possible values are: `none`, `confirmedSafe`, `remediated`, `dismissed`, `atRisk`, `confirmedCompromised`, `unknownFutureValue`. |
| DurationMs | - | This value is unmapped, and you can safely ignore this field. |
| CallerIpAddress | - | The IP address of the client that made the request. |
| CorrelationId | - | The optional GUID that's passed by the client. This value can help correlate client-side operations with server-side operations, and it's useful when you're tracking logs that span services. |
| Identity | - | The identity from the token that was presented when you made the request. It can be a user account, system account, or service principal. |
| Level | - | Provides the type of message. For audit, it's always *Informational*. |
| Location | - | Provides the location of the sign-in activity. |
| Properties | - | Lists all the properties that are associated with sign-ins. |
## Next steps
active-directory My Staff Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/my-staff-configure.md
My Staff enables you to delegate permissions to a figure of authority, such as a store manager or a team lead, to ensure that their staff members are able to access their Azure AD accounts. Instead of relying on a central helpdesk, organizations can delegate common tasks such as resetting passwords or changing phone numbers to a local team manager. With My Staff, a user who can't access their account can regain access in just a couple of clicks, with no helpdesk or IT staff required.
Before you configure My Staff for your organization, we recommend that you review this documentation as well as the [user documentation](../user-help/my-staff-team-manager.md) to ensure you understand how it works and how it impacts your users. You can leverage the user documentation to train and prepare your users for the new experience and help to ensure a successful rollout.
## How My Staff works
active-directory Britive Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/britive-provisioning-tutorial.md
Title: 'Tutorial: Configure Britive for automatic user provisioning with Azure Active Directory | Microsoft Docs'
description: Learn how to automatically provision and de-provision user accounts from Azure AD to Britive.
documentationcenter: ''
writer: Zhchia
ms.assetid: 622688b3-9d20-482e-aab9-ce2a1f01e747
ms.devlang: na
Last updated : 03/05/2021
# Tutorial: Configure Britive for automatic user provisioning

This tutorial describes the steps you need to perform in both Britive and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups in [Britive](https://www.britive.com/) by using the Azure AD provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported

> [!div class="checklist"]
> * Create users in Britive
> * Remove users in Britive when they no longer require access
> * Keep user attributes synchronized between Azure AD and Britive
> * Provision groups and group memberships in Britive
> * [Single sign-on](britive-tutorial.md) to Britive (recommended)

## Prerequisites

The scenario outlined in this tutorial assumes that you already have the following prerequisites:

* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
* A [Britive](https://www.britive.com/) tenant.
* A user account in Britive with Admin permissions.
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
1. Determine what data to [map between Azure AD and Britive](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Britive to support provisioning with Azure AD

The application must be configured manually using the steps below:

1. Log in to the Britive application with administrator privileges.
1. Click **Admin > User Administration > Identity Providers**.
1. Click **Add Identity Provider**. Enter the name and description, and then click the **Add Identity Provider** button.

   ![Identity Provider](media/britive-provisioning-tutorial/identity.png)

1. A configuration page similar to the one displayed below is shown.

   ![Configuration Page](media/britive-provisioning-tutorial/configuration.png)

1. Click the **SCIM** tab. Change the SCIM provider from Generic to Azure and save the changes. Copy the SCIM URL and note it down. This value is entered in the **Tenant URL** box on the Provisioning tab of your Britive application in the Azure portal.

   ![SCIM Page](media/britive-provisioning-tutorial/scim.png)

1. Click **Create Token**. Select the validity of the token as required and click the **Create Token** button.

   ![Create Token](media/britive-provisioning-tutorial/create-token.png)

1. Copy the generated token and note it down. Click **OK**. Note that you won't be able to see the token again; click the **Re-Create** button to generate a new token if needed. These values are entered in the **Secret Token** and **Tenant URL** boxes on the Provisioning tab of your Britive application in the Azure portal.

   ![Copy Token](media/britive-provisioning-tutorial/copy-token.png)
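Before you continue, you can optionally sanity-check the SCIM URL and token with a standard SCIM query. A minimal sketch; the URL and token are the placeholder values you noted down, and this assumes the endpoint answers the standard `/Users` request defined by SCIM 2.0.

```powershell
# Placeholders for the values copied from Britive.
$scimUrl = "<SCIM-URL-noted-above>"
$token   = "<secret-token-noted-above>"

# A SCIM 2.0 endpoint answers GET /Users with a ListResponse envelope.
Invoke-RestMethod -Uri "$scimUrl/Users?count=1" `
    -Headers @{ Authorization = "Bearer $token" } |
    Select-Object totalResults
```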
## Step 3. Add Britive from the Azure AD application gallery

Add Britive from the Azure AD application gallery to start managing provisioning to Britive. If you have previously set up Britive for SSO, you can use the same application. However, we recommend that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).

## Step 4. Define who will be in scope for provisioning

The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).

* When assigning users and groups to Britive, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and are marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.

* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+## Step 5. Configure automatic user provisioning to Britive
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Britive based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Britive in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Britive**.
+
+ ![The Britive link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your Britive Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Britive. If the connection fails, ensure your Britive account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Britive**.
+
+1. Review the user attributes that are synchronized from Azure AD to Britive in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Britive for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Britive API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |userName|String|&check;
+ |active|Boolean|
+ |displayName|String|
+ |title|String|
+ |externalId|String|
+ |preferredLanguage|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |nickName|String|
+ |userType|String|
+ |locale|String|
+ |timezone|String|
+ |emails[type eq "home"].value|String|
+ |emails[type eq "other"].value|String|
+ |emails[type eq "work"].value|String|
+ |phoneNumbers[type eq "home"].value|String|
+ |phoneNumbers[type eq "other"].value|String|
+ |phoneNumbers[type eq "pager"].value|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |phoneNumbers[type eq "mobile"].value|String|
+ |phoneNumbers[type eq "fax"].value|String|
+ |addresses[type eq "work"].formatted|String|
+ |addresses[type eq "work"].streetAddress|String|
+ |addresses[type eq "work"].locality|String|
+ |addresses[type eq "work"].region|String|
+ |addresses[type eq "work"].postalCode|String|
+ |addresses[type eq "work"].country|String|
+ |addresses[type eq "home"].formatted|String|
+ |addresses[type eq "home"].streetAddress|String|
+ |addresses[type eq "home"].locality|String|
+ |addresses[type eq "home"].region|String|
+ |addresses[type eq "home"].postalCode|String|
+ |addresses[type eq "home"].country|String|
+ |addresses[type eq "other"].formatted|String|
+ |addresses[type eq "other"].streetAddress|String|
+ |addresses[type eq "other"].locality|String|
+ |addresses[type eq "other"].region|String|
+ |addresses[type eq "other"].postalCode|String|
+ |addresses[type eq "other"].country|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
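For orientation, the sketch below shows roughly how a few of these mapped attributes appear in the SCIM 2.0 user representation sent to Britive. The values are invented for illustration; the structure follows the SCIM 2.0 core and enterprise-extension schemas (RFC 7643).

```python
import json

# Invented example values; only the schema structure is standard (RFC 7643).
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
    ],
    "userName": "b.simon@contoso.com",  # the Matching attribute above
    "active": True,
    "displayName": "B. Simon",
    "name": {"givenName": "B.", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
        "department": "Engineering",
        "manager": {"value": "<manager-id>"},  # Reference-type attribute
    },
}
print(json.dumps(user, indent=2))
```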
++
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Britive**.
+
+1. Review the group attributes that are synchronized from Azure AD to Britive in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Britive for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ |---|---|---|
+ |displayName|String|&check;
+ |externalId|String|
+ |members|Reference|
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Britive, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Britive by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Transperfect Globallink Dashboard Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/transperfect-globallink-dashboard-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with TransPerfect GlobalLink Dashboard | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and TransPerfect GlobalLink Dashboard.
+ Last updated : 03/12/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with TransPerfect GlobalLink Dashboard
+
+In this tutorial, you'll learn how to integrate TransPerfect GlobalLink Dashboard with Azure Active Directory (Azure AD). When you integrate TransPerfect GlobalLink Dashboard with Azure AD, you can:
+
+* Control in Azure AD who has access to TransPerfect GlobalLink Dashboard.
+* Enable your users to be automatically signed-in to TransPerfect GlobalLink Dashboard with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* TransPerfect GlobalLink Dashboard single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* TransPerfect GlobalLink Dashboard supports **SP and IDP** initiated SSO.
+* TransPerfect GlobalLink Dashboard supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Adding TransPerfect GlobalLink Dashboard from the gallery
+
+To configure the integration of TransPerfect GlobalLink Dashboard into Azure AD, you need to add TransPerfect GlobalLink Dashboard from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TransPerfect GlobalLink Dashboard** in the search box.
+1. Select **TransPerfect GlobalLink Dashboard** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for TransPerfect GlobalLink Dashboard
+
+Configure and test Azure AD SSO with TransPerfect GlobalLink Dashboard using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TransPerfect GlobalLink Dashboard.
+
+To configure and test Azure AD SSO with TransPerfect GlobalLink Dashboard, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TransPerfect GlobalLink Dashboard SSO](#configure-transperfect-globallink-dashboard-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create TransPerfect GlobalLink Dashboard test user](#create-transperfect-globallink-dashboard-test-user)** - to have a counterpart of B.Simon in TransPerfect GlobalLink Dashboard that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **TransPerfect GlobalLink Dashboard** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://sso.transperfect.com`
+
+1. Click **Save**.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TransPerfect GlobalLink Dashboard.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TransPerfect GlobalLink Dashboard**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure TransPerfect GlobalLink Dashboard SSO
+
+To configure single sign-on on the **TransPerfect GlobalLink Dashboard** side, you need to send the **App Federation Metadata Url** to the [TransPerfect GlobalLink Dashboard support team](mailto:TechOps_Consulting@transperfect.com). They use this value to ensure that the SAML SSO connection is set properly on both sides.
+
+### Create TransPerfect GlobalLink Dashboard test user
+
+In this section, a user called B.Simon is created in TransPerfect GlobalLink Dashboard. TransPerfect GlobalLink Dashboard supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in TransPerfect GlobalLink Dashboard, a new one is created when you attempt to access TransPerfect GlobalLink Dashboard.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This will redirect to the TransPerfect GlobalLink Dashboard sign-on URL, where you can initiate the login flow.
+
+* Go to the TransPerfect GlobalLink Dashboard sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the TransPerfect GlobalLink Dashboard for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the TransPerfect GlobalLink Dashboard tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the TransPerfect GlobalLink Dashboard for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure TransPerfect GlobalLink Dashboard, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory User Help Auth App Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
The Microsoft Authenticator app replaced the Azure Authenticator app, and it's t
**Q**: Will my employees or students get to use password autofill in Authenticator app?
-**A**: Yes, Autofill now works for most enterprise users even when a work or school account is added to the Authenticator app. You can fill out a form to configure (allow or deny) Autofill for your organization and [send it to the Authenticator team](https://aka.ms/ConfigureAutofillInAuthenticator).
+**A**: Yes, Autofill for your [personal Microsoft accounts](https://go.microsoft.com/fwlink/?linkid=2144423) now works for most enterprise users even when a work or school account is added to the Authenticator app. You can fill out a form to allow or deny Autofill for your organization and [send it to the Authenticator team](https://aka.ms/ConfigureAutofillInAuthenticator). Autofill is not currently available for work or school accounts.
**Q**: Will my users' work or school account password get automatically synced?
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
Learn more about the Azure AD integration flow on the [Azure Active Directory in
* If you are using [helm](https://github.com/helm/helm), minimum version of helm 3.3. > [!Important]
-> You must use Kubectl with a minimum version of 1.18.1 or kubelogin. If you don't use the correct version, you will notice authentication issues.
+> You must use kubectl with a minimum version of 1.18.1, or kubelogin. The difference between the minor versions of kubectl and your Kubernetes cluster should not be more than one version. If you don't use the correct version, you'll notice authentication issues.
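As a quick way to check that skew rule, the sketch below (not part of the article) compares the client and server minor versions. It assumes kubectl is on your PATH, your kubeconfig points at the cluster, and the official `kubernetes` Python client is installed:

```python
import json
import subprocess

from kubernetes import client, config

# Client (kubectl) minor version, for example "18" from v1.18.1.
out = subprocess.check_output(["kubectl", "version", "--client", "--output=json"])
client_minor = int(json.loads(out)["clientVersion"]["minor"].rstrip("+"))

# Server (cluster) minor version, via the Kubernetes version endpoint.
config.load_kube_config()
server_minor = int(client.VersionApi().get_code().minor.rstrip("+"))

if abs(client_minor - server_minor) > 1:
    print(f"kubectl minor version {client_minor} is more than one version away "
          f"from the cluster's {server_minor}; expect authentication issues.")
else:
    print("kubectl/cluster version skew is within one minor version.")
```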
To install kubectl and kubelogin, use the following commands:
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use AAD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 12/01/2020 Last updated : 3/12/2021
Azure Active Directory pod-managed identities uses Kubernetes primitives to asso
You must have the following resource installed:
-* The Azure CLI, version 2.8.0 or later
-* The `azure-preview` extension version 0.4.68 or later
+* The Azure CLI, version 2.20.0 or later
+* The `azure-preview` extension version 0.5.5 or later
### Limitations
-* A maximum of 50 pod identities are allowed for a cluster.
-* A maximum of 50 pod identity exceptions are allowed for a cluster.
+* A maximum of 200 pod identities are allowed for a cluster.
+* A maximum of 200 pod identity exceptions are allowed for a cluster.
* Pod-managed identities are available on Linux node pools only. ### Register the `EnablePodIdentityPreview`
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS clus
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
+## Create an AKS cluster with Kubenet network plugin
+
+Create an AKS cluster with Kubenet network plugin and pod-managed identity enabled.
+
+```azurecli-interactive
+az aks create -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --enable-pod-identity-with-kubenet
+```
+
+## Update an existing AKS cluster with Kubenet network plugin
+
+Update an existing AKS cluster that uses the Kubenet network plugin to include pod-managed identity.
+
+```azurecli-interactive
+az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --enable-pod-identity-with-kubenet
+```
## Create an identity
aks Windows Container Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-powershell.md
Title: Create a Windows Server container on an AKS cluster by using PowerShell
description: Learn how to quickly create a Kubernetes cluster, deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 05/26/2020 Last updated : 03/12/2021
network resources if they don't exist.
> node pool. ```azurepowershell-interactive
-$Password = Read-Host -Prompt 'Please enter your password' -AsSecureString
-New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -KubernetesVersion 1.16.7 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName akswinuser -WindowsProfileAdminUserPassword $Password
+$Username = Read-Host -Prompt 'Please create a username for the administrator credentials on your Windows Server containers: '
+$Password = Read-Host -Prompt 'Please create a password for the administrator credentials on your Windows Server containers: ' -AsSecureString
+New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -WindowsProfileAdminUserName $Username -WindowsProfileAdminUserPassword $Password
``` > [!Note]
By default, an AKS cluster is created with a node pool that can run Linux contai
Linux node pool. ```azurepowershell-interactive
-New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin -KubernetesVersion 1.16.7
+New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -Name npwin
``` The above command creates a new node pool named **npwin** and adds it to the **myAKSCluster**. When
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-caching-policies.md
description: Learn about the caching policies available for use in Azure API Man
documentationcenter: '' - - Previously updated : 11/13/2020 Last updated : 03/08/2021 # API Management caching policies
This topic provides a reference for the following API Management policies. For i
## <a name="CachingPolicies"></a> Caching policies - Response caching policies
- - [Get from cache](api-management-caching-policies.md#GetFromCache) - Perform cache look up and return a valid cached responses when available.
- - [Store to cache](api-management-caching-policies.md#StoreToCache) - Caches responses according to the specified cache control configuration.
+ - [Get from cache](#GetFromCache) - Perform cache lookup and return a valid cached response when available.
+ - [Store to cache](#StoreToCache) - Caches responses according to the specified cache control configuration.
- Value caching policies - [Get value from cache](#GetFromCacheByKey) - Retrieve a cached item by key. - [Store value in cache](#StoreToCacheByKey) - Store an item in the cache by key.
This topic provides a reference for the following API Management policies. For i
Use the `cache-lookup` policy to perform cache look up and return a valid cached response when available. This policy can be applied in cases where response content remains static over a period of time. Response caching reduces bandwidth and processing requirements imposed on the backend web server and lowers latency perceived by API consumers. > [!NOTE]
-> This policy must have a corresponding [Store to cache](api-management-caching-policies.md#StoreToCache) policy.
+> This policy must have a corresponding [Store to cache](#StoreToCache) policy.
### Policy statement
The `cache-store` policy caches responses according to the specified cache setti
### Policy statement ```xml
-<cache-store duration="seconds" />
+<cache-store duration="seconds" cache-response="true | false" />
``` ### Examples
For more information, see [Policy expressions](api-management-policy-expressions
| Name | Description | Required | Default | ||-|-|-|
-| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
+| duration | Time-to-live of the cached entries, specified in seconds. | Yes | N/A |
+| cache-response | Set to true to cache the current HTTP response. If the attribute is omitted or set to false, only HTTP responses with the status code `200 OK` are cached. | No | false |
### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
azure-functions Create First Function Vs Code Other https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-other.md
The *function.json* file in the *HttpExample* folder declares an HTTP trigger fu
```toml [dependencies]
- warp = "0.2"
- tokio = { version = "0.2", features = ["full"] }
+ warp = "0.3"
+ tokio = { version = "1", features = ["rt", "macros", "rt-multi-thread"] }
``` 1. In *src/main.rs*, add the following code and save the file. This is your Rust custom handler.
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-linux-vm.md
Previously updated : 03/09/2021 Last updated : 03/11/2021 # Deploy STIG-compliant Linux Virtual Machines (Preview)
-Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG Support](mailto:azurestigsupport@microsoft.com).
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG support](mailto:azurestigsupport@microsoft.com).
-This quickstart shows how to use the Azure portal to deploy a STIG-compliant Linux virtual machine (Preview).
+This quickstart shows how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure or Azure Government using the corresponding portal.
## Prerequisites -- Azure Government subscription
+- Azure or Azure Government subscription
- Storage account - If desired, must be in the same resource group/region as the VM - Required if you plan to store Log Analytics diagnostics
This quickstart shows how to use the Azure portal to deploy a STIG-compliant Lin
## Sign in to Azure
-Sign in at the [Azure Government portal](https://portal.azure.us/).
+Sign in at the [Azure portal](https://ms.portal.azure.com/) or [Azure Government portal](https://portal.azure.us/) depending on your subscription.
## Create a STIG-compliant virtual machine
Select the resource group for the virtual machine, then select **Delete**. Confi
## Next steps
-This quickstart showed you how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure Government. For more information about creating virtual machines in Azure Government, see [Tutorial: Create Virtual Machines](./documentation-government-quickstarts-vm.md). To learn more about Azure services, continue to the Azure documentation.
+This quickstart showed you how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure or Azure Government. For more information about creating virtual machines in:
+
+- Azure, see [Quickstart: Create a Linux virtual machine in the Azure portal](../virtual-machines/linux/quick-create-portal.md).
+- Azure Government, see [Tutorial: Create virtual machines](./documentation-government-quickstarts-vm.md).
+
+To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"] > [Azure documentation](../index.yml)
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-stig-windows-vm.md
Previously updated : 03/09/2021 Last updated : 03/11/2021 # Deploy STIG-compliant Windows Virtual Machines (Preview)
-Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG Support](mailto:azurestigsupport@microsoft.com).
+Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. For questions about this offering, contact [Azure STIG support](mailto:azurestigsupport@microsoft.com).
-This quickstart shows how to use the Azure portal to deploy a STIG-compliant Windows virtual machine (Preview).
+This quickstart shows how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure or Azure Government using the corresponding portal.
## Prerequisites -- Azure Government subscription
+- Azure or Azure Government subscription
- Storage account - If desired, must be in the same resource group/region as the VM - Required if you plan to store Log Analytics diagnostics
This quickstart shows how to use the Azure portal to deploy a STIG-compliant Win
## Sign in to Azure
-Sign in at the [Azure Government portal](https://portal.azure.us/).
+Sign in at the [Azure portal](https://ms.portal.azure.com/) or [Azure Government portal](https://portal.azure.us/) depending on your subscription.
## Create a STIG-compliant virtual machine
Select the resource group for the virtual machine, then select **Delete**. Confi
## Next steps
-This quickstart showed you how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure Government. For more information about creating virtual machines in Azure Government, see [Tutorial: Create Virtual Machines](./documentation-government-quickstarts-vm.md). To learn more about Azure services, continue to the Azure documentation.
+This quickstart showed you how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure or Azure Government. For more information about creating virtual machines in:
+
+- Azure, see [Quickstart: Create a Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md).
+- Azure Government, see [Tutorial: Create virtual machines](./documentation-government-quickstarts-vm.md).
+
+To learn more about Azure services, continue to the Azure documentation.
> [!div class="nextstepaction"] > [Azure documentation](../index.yml)
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
There are some [limits on the number of properties, property values, and metrics
*JavaScript* ```javascript
-appInsights.trackEvent
- ("WinGame",
- // String properties:
- {Game: currentGame.name, Difficulty: currentGame.difficulty},
- // Numeric metrics:
- {Score: currentGame.score, Opponents: currentGame.opponentCount}
- );
-
-appInsights.trackPageView
- ("page name", "http://fabrikam.com/pageurl.html",
- // String properties:
- {Game: currentGame.name, Difficulty: currentGame.difficulty},
- // Numeric metrics:
- {Score: currentGame.score, Opponents: currentGame.opponentCount}
- );
+appInsights.trackEvent({
+ name: 'some event',
+ properties: { // accepts any type
+ prop1: 'string',
+ prop2: 123.45,
+ prop3: { nested: 'objects are okay too' }
+ }
+});
+
+appInsights.trackPageView({
+ name: 'some page',
+ properties: { // accepts any type
+ prop1: 'string',
+ prop2: 123.45,
+ prop3: { nested: 'objects are okay too' }
+ }
+});
``` *C#*
azure-monitor Cloudservices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/cloudservices.md
The telemetry from your app is stored, analyzed, and displayed in an Azure resou
Each resource belongs to a resource group. Resource groups are used to manage costs, to grant access to team members, and to deploy updates in a single coordinated transaction. For example, you could [write a script to deploy](../../azure-resource-manager/templates/deploy-powershell.md) an Azure cloud service and its Application Insights monitoring resources all in one operation. ### Resources for components
-We recommend that you create a separate resource for each component of your app. That is, you create a resource for each web role and worker role. You can analyze each component separately, but you create a [dashboard](./overview-dashboard.md) that brings together the key charts from all the components, so that you can compare and monitor them together in a single view.
-An alternative approach is to send the telemetry from more than one role to the same resource, but [add a dimension property to each telemetry item](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) that identifies its source role. In this approach, metric charts, such as exceptions, normally show an aggregation of the counts from the various roles, but you can segment the chart by the role identifier, as necessary. You can also filter searches by the same dimension. This alternative makes it a bit easier to view everything at the same time, but it could also lead to some confusion between the roles.
+We recommend that you [add a dimension property to each telemetry item](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer) that identifies its source role. In this approach, metric charts, such as exceptions, normally show an aggregation of the counts from the various roles, but you can segment the chart by the role identifier, as necessary. You can also filter searches by the same dimension. This approach makes it a bit easier to view everything at the same time, but it could also lead to some confusion between the roles.
Browser telemetry is usually included in the same resource as its server-side web role.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
For more information, check out the [telemetry processor](./java-standalone-tele
Log4j, Logback, and java.util.logging are auto-instrumented, and logging performed via these logging frameworks is auto-collected.
-Logging is only captured if it first meets the logging frameworks' configured threshold,
-and second also meets the Application Insights configured threshold.
+Logging is only captured if it first meets the level that is configured for the logging framework,
+and second, also meets the level that is configured for Application Insights.
-The default Application Insights threshold is `INFO`. If you want to change this level:
+For example, if your logging framework is configured to log `WARN` (and above) from package `com.example`,
+and Application Insights is configured to capture `INFO` (and above),
+then Application Insights will only capture `WARN` (and above) from package `com.example`.
+
+The default level configured for Application Insights is `INFO`. If you want to change this level:
```json {
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-troubleshoot.md
These changes include:
## Some logging is not auto-collected
-Logging is only captured if it first meets the logging frameworks' configured threshold,
-and second also meets the Application Insights configured threshold.
+Logging is only captured if it first meets the level that is configured for the logging framework,
+and second, also meets the level that is configured for Application Insights.
+
+For example, if your logging framework is configured to log `WARN` (and above) from package `com.example`,
+and Application Insights is configured to capture `INFO` (and above),
+then Application Insights will only capture `WARN` (and above) from package `com.example`.
The best way to know if a particular logging statement meets the logging frameworks' configured threshold is to confirm that it is showing up in your normal application log (e.g. file or console).
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
adx('https://help.kusto.windows.net/Samples').StormEvents
> [!NOTE] >* Database names are case sensitive. >* Cross-resource query as an alert is not supported.
+>* Identifying the Timestamp column in the cluster isn't supported; the Log Analytics query API won't pass the time filter along.
## Combine Azure Data Explorer cluster tables with a Log Analytics workspace
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-quickstart-center.md
Title: Get started with the Azure Quickstart Center description: Use the Azure Quickstart Center guided experience to get started with Azure. Learn to set up, migrate, and innovate. Previously updated : 01/29/2020 Last updated : 03/10/2021
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | > | - | -- | - |
-> | flexibleServers | Yes | Yes |
+> | flexibleServers | No | No |
> | servergroups | No | No | > | servers | Yes | Yes | > | serversv2 | Yes | Yes |
azure-resource-manager Bicep Decompile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-decompile.md
+
+ Title: Convert templates between JSON and Bicep
+description: Describes commands for converting Azure Resource Manager templates from Bicep to JSON and from JSON to Bicep.
+ Last updated : 03/12/2021+
+# Converting ARM templates between JSON and Bicep
+
+This article describes how to convert Azure Resource Manager templates (ARM templates) from JSON to Bicep and from Bicep to JSON.
+
+You must have the [Bicep CLI installed](bicep-install.md) to run the conversion commands.
+
+The conversion commands produce templates that are functionally equivalent. However, they might not be exactly the same in implementation. Converting a template from JSON to Bicep and then back to JSON probably results in a template with different syntax than the original template. When deployed, the converted templates produce the same results.
+
+## Convert from JSON to Bicep
+
+The Bicep CLI provides a command to decompile any existing JSON template to a Bicep file. To decompile a JSON file, use:
+
+```azurecli
+bicep decompile mainTemplate.json
+```
+
+This command provides a starting point for Bicep authoring. The command doesn't work for all templates. Currently, nested templates can be decompiled only if they use the 'inner' expression evaluation scope. Templates that use copy loops can't be decompiled.
+
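If you have many templates to convert, the command can be scripted. The helper below is an optional sketch, not part of the Bicep CLI; it assumes the Bicep CLI is installed and on your PATH, and that the JSON templates live in a `templates` folder:

```python
import pathlib
import subprocess

# Run 'bicep decompile' over every JSON template in the folder, reporting
# which ones fail (the command doesn't work for all templates).
for template in pathlib.Path("templates").glob("*.json"):
    result = subprocess.run(
        ["bicep", "decompile", str(template)],
        capture_output=True,
        text=True,
    )
    status = "ok" if result.returncode == 0 else f"failed: {result.stderr.strip()}"
    print(f"{template.name}: {status}")
```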
+## Convert from Bicep to JSON
+
+The Bicep CLI also provides a command to convert Bicep to JSON. To build a JSON file, use:
+
+```azurecli
+bicep build mainTemplate.bicep
+```
+
+## Export template and convert
+
+You can export the template for a resource group, and then pass it directly to the decompile command. The following example shows how to decompile an exported template.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az group export --name "your_resource_group_name" > main.json
+bicep decompile main.json
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Export-AzResourceGroup -ResourceGroupName "your_resource_group_name" -Path ./main.json
+bicep decompile main.json
+```
+
+# [Portal](#tab/azure-portal)
+
+[Export the template](export-template-portal.md) through the portal.
+
+Use `bicep decompile <filename>` on the downloaded file.
+++
+## Side-by-side view
+
+The [Bicep playground](https://aka.ms/bicepdemo) enables you to view equivalent JSON and Bicep files side by side. You can select a sample template to see both versions. Or, select `Decompile` to upload your own JSON template and view the equivalent Bicep file.
+
+## Next steps
+
+For information about Bicep, see the [Bicep tutorial](./bicep-tutorial-create-first-bicep.md).
azure-resource-manager Bicep Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/bicep-overview.md
Title: Bicep language for Azure Resource Manager templates description: Describes the Bicep language for deploying infrastructure to Azure through Azure Resource Manager templates. Previously updated : 03/03/2021 Last updated : 03/12/2021 # What is Bicep (Preview)?
-Bicep is a language for declaratively deploying Azure resources. It simplifies the authoring experience by providing concise syntax and better support for code reuse. Bicep is a domain-specific language (DSL), which means it's designed for a particular scenario or domain. Bicep isn't intended as a general programming language for writing applications.
+Bicep is a language for declaratively deploying Azure resources. You can use Bicep instead of JSON for developing your Azure Resource Manager templates (ARM templates). Bicep simplifies the authoring experience by providing concise syntax, better support for code reuse, and improved type safety. Bicep is a domain-specific language (DSL), which means it's designed for a particular scenario or domain. It isn't intended as a general programming language for writing applications.
-In the past, you developed Azure Resource Manager templates (ARM templates) with JSON. The JSON syntax for creating template can be verbose and require complicated expression. Bicep improves that experience without losing any of the capabilities of a JSON template. It's a transparent abstraction over the JSON for ARM templates. Each Bicep file compiles to a standard ARM template. Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file.
+The JSON syntax for creating a template can be verbose and requires complicated expressions. Bicep improves that experience without losing any of the capabilities of a JSON template. It's a transparent abstraction over the JSON for ARM templates. Each Bicep file compiles to a standard ARM template. Resource types, API versions, and properties that are valid in an ARM template are valid in a Bicep file. There are a few [known limitations](#known-limitations) in the current release.
## Get started
After installing the tools, try the [Bicep tutorial](./bicep-tutorial-create-fir
To view equivalent JSON and Bicep files side by side, see the [Bicep Playground](https://aka.ms/bicepdemo).
-If you have an existing ARM template that you would like to convert to Bicep, see [Decompile JSON to Bicep](compare-template-syntax.md#decompile-json-to-bicep).
+If you have an existing ARM template that you would like to convert to Bicep, see [Converting ARM templates between JSON and Bicep](bicep-decompile.md).
## Bicep improvements
With Bicep, you can break your project into multiple modules.
The structure of the Bicep file is more flexible than the JSON template. You can declare parameters, variables, and outputs anywhere in the file. In JSON, you have to declare all parameters, variables, and outputs within the corresponding sections of the template.
-The VS Code extension for Bicep offers richer validation and intellisense. For example, the extension has intellisense for getting properties of a resource.
+The VS Code extension for Bicep offers rich validation and intellisense. For example, you can use the extension's intellisense for getting properties of a resource.
+
+## Known limitations
+
+The following limits currently exist:
+
+* Can't set mode or batch size on copy loops.
+* Can't combine loops and conditions.
+* Single-line objects and arrays, like `['a', 'b', 'c']`, aren't supported.
## FAQ
Bicep is a DSL focused on deploying complete solutions to Azure. Meeting that go
They continue to function exactly as they always have. You don't need to make any changes. We'll continue to support the underlying ARM template JSON language. Bicep files compile to JSON, and that JSON is sent to Azure for deployment.
-When you're ready, you can [convert the JSON files to Bicep](compare-template-syntax.md#decompile-json-to-bicep).
+When you're ready, you can [convert the JSON files to Bicep](bicep-decompile.md).
## Next steps
azure-resource-manager Compare Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/compare-template-syntax.md
Title: Compare syntax for Azure Resource Manager templates in JSON and Bicep description: Compares Azure Resource Manager templates developed with JSON and Bicep, and shows how to convert between the languages. Previously updated : 03/03/2021 Last updated : 03/12/2021 # Comparing JSON and Bicep for templates
If you're familiar with using JSON to develop ARM templates, use the following t
* Use consistent casing for identifiers. If you're unsure what type of casing to use, try camel casing. For example, `param myCamelCasedParameter string`. * Add a description to a parameter only when the description provides essential information to users. You can use `//` comments for some information.
-## Decompile JSON to Bicep
-
-The Bicep CLI provides a command to decompile any existing ARM template to a Bicep file. To decompile a JSON file, use: `bicep decompile "path/to/file.json"`
-
-This command provides a starting point for Bicep authoring, but the command doesn't work for all templates. The command may fail or you may have to fix issues after the decompilation. Currently, nested templates can be decompiled only if they use the 'inner' expression evaluation scope.
-
-You can export the template for a resource group, and then pass it directly to the bicep decompile command. The following example shows how to decompile an exported template.
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az group export --name "your_resource_group_name" > main.json
-bicep decompile main.json
-```
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-Export-AzResourceGroup -ResourceGroupName "your_resource_group_name" -Path ./main.json
-bicep decompile main.json
-```
-
-# [Portal](#tab/azure-portal)
-
-[Export the template](export-template-portal.md) through the portal. Use `bicep decompile <filename>` on the downloaded file.
---
-## Build JSON from Bicep
-
-The Bicep CLI also provides a command to convert Bicep to JSON. To build a JSON file, use: `bicep build "path/to/file.json"`
-
-## Side-by-side view
-
-The [Bicep playground](https://aka.ms/bicepdemo) enables you to view equivalent JSON and Bicep files side by side. You can select a sample template to see both versions. Or, select `Decompile` to upload your own JSON template and view the equivalent Bicep file.
- ## Next steps
-For information about the Bicep, see [Bicep tutorial](./bicep-tutorial-create-first-bicep.md).
+* For information about Bicep, see the [Bicep tutorial](./bicep-tutorial-create-first-bicep.md).
+* To learn about converting templates between the languages, see [Converting ARM templates between JSON and Bicep](bicep-decompile.md).
azure-resource-manager Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/frequently-asked-questions.md
This article answers frequently asked questions about Azure Resource Manager tem
* **Will you offer a tool to convert my JSON templates to the new template language?**
- Yes. See [Decompile JSON to Bicep](compare-template-syntax.md#decompile-json-to-bicep).
+ Yes. See [Converting ARM templates between JSON and Bicep](bicep-decompile.md).
## Template Specs
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
Title: Templates overview description: Describes the benefits using Azure Resource Manager templates (ARM templates) for deployment of resources. Previously updated : 03/08/2021 Last updated : 03/12/2021 # What are ARM templates?
To implement infrastructure as code for your Azure solutions, use Azure Resource
We've introduced a new language for developing ARM templates. The language is named Bicep, and is currently in preview. Bicep and JSON templates offer the same capabilities. You can convert templates between the two languages. Bicep provides a syntax that is easier to use for creating templates. For more information, see [What is Bicep (Preview)?](bicep-overview.md).
+To learn how to get started with ARM templates, see the following video.
+
+> [!VIDEO https://channel9.msdn.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
+ ## Why choose ARM templates? If you're trying to decide between using ARM templates and one of the other infrastructure as code services, consider the following advantages of using templates:
azure-sql Resource Health To Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-health-to-troubleshoot-connectivity.md
You can access up to 14 days of health history in the Health History section of
### Downtime reasons
-When your database experiences downtime, analysis is performed to determine a reason. When available, the downtime reason is reported in the Health History section of Resource Health. Downtime reasons are typically published 30 minutes after an event.
+When your database experiences downtime, analysis is performed to determine a reason. When available, the downtime reason is reported in the Health History section of Resource Health. Downtime reasons are typically published within 45 minutes after an event.
#### Planned maintenance
azure-sql Load From Csv With Bcp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/load-from-csv-with-bcp.md
To complete the steps in this article, you need:
* The bcp command-line utility installed * The sqlcmd command-line utility installed
-You can download the bcp and sqlcmd utilities from the [Microsoft Download Center][Microsoft Download Center].
+You can download the bcp and sqlcmd utilities from the [sqlcmd utility documentation](https://docs.microsoft.com/sql/tools/sqlcmd-utility?view=sql-server-ver15).
### Data in ASCII or UTF-16 format
To migrate a SQL Server database, see [SQL Server database migration](database/m
[CREATE TABLE syntax]: /sql/t-sql/statements/create-table-azure-sql-data-warehouse <!--Other Web references-->
-[Microsoft Download Center]: https://www.microsoft.com/download/details.aspx?id=36433
+[Microsoft Download Center]: https://www.microsoft.com/download/details.aspx?id=36433
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/frequently-asked-questions-faq.md
This article provides answers to some of the most common questions about running
1. **Are distributed transactions with MSDTC supported on SQL Server VMs?** Yes. Local DTC is supported for SQL Server 2016 SP2 and greater. However, applications must be tested when utilizing Always On availability groups, as transactions in-flight during a failover will fail and must be retried. Clustered DTC is available starting with Windows Server 2019.
+
+1. **Does Azure SQL virtual machine move or store customer data out of region?**
+
+ No. In fact, Azure SQL virtual machine and the SQL IaaS Agent Extension do not store any customer data.
## SQL Server IaaS Agent extension
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The SQL IaaS Agent extension only supports:
- SQL Server VMs deployed to the public or Azure Government cloud. Deployments to other private or government clouds are not supported.
+## In-region data residency
+Azure SQL virtual machine and the SQL IaaS Agent Extension do not move or store customer data out of the region in which they are deployed.
## Next steps
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
The following table describes the features of Recovery Services vaults:
**Move data between vaults** | Moving backed-up data between vaults isn't supported. **Modify vault storage type** | You can modify the storage replication type (either geo-redundant storage or locally redundant storage) for a vault before backups are stored. After backups begin in the vault, the replication type can't be modified. **Zone-redundant storage (ZRS)** | Available in the UK South (UKS) and South East Asia (SEA) regions.
+**Private Endpoints** | See [this section](https://docs.microsoft.com/azure/backup/private-endpoints#before-you-start) for requirements to create private endpoints for a Recovery Services vault.
## On-premises backup support
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/private-endpoints.md
This article will help you understand the process of creating private endpoints
- A private endpoint connection for Backup uses a total of 11 private IPs in your subnet, including those used by Azure Backup for storage. This number may be higher (up to 25) for certain Azure regions. So we suggest that you have enough private IPs available when you attempt to create private endpoints for Backup. - While a Recovery Services vault is used by (both) Azure Backup and Azure Site Recovery, this article discusses use of private endpoints for Azure Backup only. - Azure Active Directory doesn't currently support private endpoints. So IPs and FQDNs required for Azure Active Directory to work in a region will need to be allowed outbound access from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable.-- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to disable Network Polices before continuing.
+- Virtual networks with Network Policies aren't supported for Private Endpoints. You'll need to [disable Network Policies](https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy) before continuing.
- You need to re-register the Recovery Services resource provider with the subscription if you registered it before May 1 2020. To re-register the provider, go to your subscription in the Azure portal, navigate to **Resource provider** on the left navigation bar, then select **Microsoft.RecoveryServices** and select **Re-register**. - [Cross-region restore](backup-create-rs-vault.md#set-cross-region-restore) for SQL and SAP HANA database backups aren't supported if the vault has private endpoints enabled. - When you move a Recovery Services vault already using private endpoints to a new tenant, you'll need to update the Recovery Services vault to recreate and reconfigure the vault's managed identity and create new private endpoints as needed (which should be in the new tenant). If this isn't done, the backup and restore operations will start failing. Also, any role-based access control (RBAC) permissions set up within the subscription will need to be reconfigured.
When using the MARS Agent to back up your on-premises resources, make sure your
But if you remove private endpoints for the vault after a MARS agent has been registered to it, you'll need to re-register the container with the vault. You don't need to stop protection for them.
+## Deleting private endpoints
+
+See [this section](https://docs.microsoft.com/rest/api/virtualnetwork/privateendpoints/delete) to learn how to delete private endpoints.
+ ## Additional topics ### Create a Recovery Services vault using the Azure Resource Manager client
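As an alternative to the REST call linked above, a private endpoint can also be deleted with the Azure SDK for Python. A minimal sketch, assuming `azure-identity` and `azure-mgmt-network` are installed, you're signed in (for example via `az login`), and the resource names (placeholders here) are yours:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholders: substitute your subscription, resource group, and endpoint name.
network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_delete returns a poller; calling result() waits for completion.
network.private_endpoints.begin_delete(
    resource_group_name="<resource-group>",
    private_endpoint_name="<private-endpoint-name>",
).result()
```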
A. After following the process detailed in this article, you don't need to do ad
## Next steps -- Read about all the [security features in Azure Backup](security-overview.md)
+- Read about all the [security features in Azure Backup](security-overview.md).
bastion Howto Metrics Monitor Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/howto-metrics-monitor-alert.md
Previously updated : 03/09/2021 Last updated : 03/12/2021
You can view memory utilization across each bastion instance, split across each
#### Session count
-You can view the count of active sessions per bastion instance, aggregated across each session type (RDP and SSH). Each Azure Bastion can support a range of active RDP and SSH sessions. Monitoring this metric will help you to understand if you need to adjust the number of instances running the bastion service. For more information about the session count Azure Bastion can support, refer to the [Azure Bastion FAQ](bastion-faq.md). For more information about which Bastion SKUs support instance scaling, refer to [About Bastion SKUs](bastion-connect-vm-scale-set.md).
+You can view the count of active sessions per bastion instance, aggregated across each session type (RDP and SSH). Each Azure Bastion can support a range of active RDP and SSH sessions. Monitoring this metric will help you to understand if you need to adjust the number of instances running the bastion service. For more information about the session count Azure Bastion can support, refer to the [Azure Bastion FAQ](bastion-faq.md).
:::image type="content" source="./media/metrics-monitor-alert/session-count.png" alt-text="Screenshot showing session count.":::
You can view the count of active sessions per bastion instance, aggregated acros
## Next steps
Read the [Bastion FAQ](bastion-faq.md).
-
+
batch Tutorial Run Python Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-run-python-batch-azure-data-factory.md
If you don't have an Azure subscription, create a [free account](https://azure
* An installed [Python](https://www.python.org/downloads/) distribution, for local testing.
* The [azure-storage-blob](https://pypi.org/project/azure-storage-blob/) `pip` package.
-* The [iris.csv dataset](https://www.kaggle.com/uciml/iris/version/2#Iris.csv)
+* The [iris.csv dataset](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv)
* An Azure Batch account and a linked Azure Storage account. See [Create a Batch account](quick-create-portal.md#create-a-batch-account) for more information on how to create and link Batch accounts to storage accounts.
* An Azure Data Factory account. See [Create a data factory](../data-factory/quickstart-create-data-factory-portal.md#create-a-data-factory) for more information on how to create a data factory through the Azure portal.
* [Batch Explorer](https://azure.github.io/BatchExplorer/).
Here you'll create blob containers that will store your input and output files f
1. Sign in to Storage Explorer using your Azure credentials.
1. Using the storage account linked to your Batch account, create two blob containers (one for input files, one for output files) by following the steps at [Create a blob container](../vs-azure-tools-storage-explorer-blobs.md#create-a-blob-container).
   * In this example, we'll call our input container `input`, and our output container `output`.
-1. Upload [`iris.csv`](https://www.kaggle.com/uciml/iris/version/2#Iris.csv) to your input container `input` using Storage Explorer by following the steps at [Managing blobs in a blob container](../vs-azure-tools-storage-explorer-blobs.md#managing-blobs-in-a-blob-container)
+1. Upload [`iris.csv`](https://github.com/Azure-Samples/batch-adf-pipeline-tutorial/blob/master/iris.csv) to your input container `input` using Storage Explorer by following the steps at [Managing blobs in a blob container](../vs-azure-tools-storage-explorer-blobs.md#managing-blobs-in-a-blob-container).
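
If you prefer to script this step, the same containers and upload can be created with the `azure-storage-blob` package listed in the prerequisites. This is a minimal sketch; the connection string is a placeholder:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder: connection string of the storage account linked to your Batch account.
service = BlobServiceClient.from_connection_string("<storage-account-connection-string>")

# Create the input and output containers used in this tutorial.
for name in ("input", "output"):
    service.create_container(name)

# Upload iris.csv to the input container.
with open("iris.csv", "rb") as data:
    service.get_blob_client(container="input", blob="iris.csv").upload_blob(data)
```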
## Develop a script in Python
The following Python script loads the `iris.csv` dataset from your `input` conta
``` python
# Load libraries
-from azure.storage.blob import BlobServiceClient
+from azure.storage.blob import BlobClient
import pandas as pd

# Define parameters
-storageAccountURL = "<storage-account-url>"
-storageKey = "<storage-account-key>"
-containerName = "output"
+connectionString = "<storage-account-connection-string>"
+containerName = "output"
+outputBlobName = "iris_setosa.csv"
# Establish connection with the blob storage account
-blob_service_client = BlockBlobService(account_url=storageAccountURL,
- credential=storageKey
- )
+blob = BlobClient.from_connection_string(conn_str=connectionString, container_name=containerName, blob_name=outputBlobName)
# Load iris dataset from the task node
df = pd.read_csv("iris.csv")
-# Subset records
+# Take a subset of the records
df = df[df['Species'] == "setosa"]

# Save the subset of the iris dataframe locally in task node
-df.to_csv("iris_setosa.csv", index = False)
+df.to_csv(outputBlobName, index=False)
-# Upload iris dataset
-container_client = blob_service_client.get_container_client(containerName)
-with open("iris_setosa.csv", "rb") as data:
- blob_client = container_client.upload_blob(name="iris_setosa.csv", data=data)
+with open(outputBlobName, "rb") as data:
+ blob.upload_blob(data)
```

Save the script as `main.py` and upload it to the **Azure Storage** `input` container. Be sure to test and validate its functionality locally before uploading it to your blob container:
In this section, you'll create and validate a pipeline using your Python script.
![In the General tab, set the name of the pipeline as "Run Python"](./media/run-python-batch-azure-data-factory/create-pipeline.png)
-1. In the **Activities** box, expand **Batch Service**. Drag the custom activity from the **Activities** toolbox to the pipeline designer surface.
-1. In the **General** tab, specify **testPipeline** for Name
-
+1. In the **Activities** box, expand **Batch Service**. Drag the custom activity from the **Activities** toolbox to the pipeline designer surface. Fill out the following tabs for the custom activity:
+ 1. In the **General** tab, specify **testPipeline** for Name.
![In the General tab, specify testPipeline for Name](./media/run-python-batch-azure-data-factory/create-custom-task.png)
-1. In the **Azure Batch** tab, add the **Batch Account** that was created in the previous steps and **Test connection** to ensure that it is successful
-
+ 1. In the **Azure Batch** tab, add the **Batch Account** that was created in the previous steps and **Test connection** to ensure that it is successful.
![In the Azure Batch tab, add the Batch Account that was created in the previous steps, then test connection](./media/run-python-batch-azure-data-factory/integrate-pipeline-with-azure-batch.png)
+ 1. In the **Settings** tab:
+ 1. Set the **Command** as `python main.py`.
+ 1. For the **Resource Linked Service**, add the storage account that was created in the previous steps. Test the connection to ensure it is successful.
+ 1. In the **Folder Path**, select the name of the **Azure Blob Storage** container that contains the Python script and the associated inputs. This will download the selected files from the container to the pool node instances before the execution of the Python script.
-1. In the **Settings** tab, enter the command `python main.py`.
-1. For the **Resource Linked Service**, add the storage account that was created in the previous steps. Test the connection to ensure it is successful.
-1. In the **Folder Path**, select the name of the **Azure Blob Storage** container that contains the Python script and the associated inputs. This will download the selected files from the container to the pool node instances before the execution of the Python script.
+ ![In the Folder Path, select the name of the Azure Blob Storage container](./media/run-python-batch-azure-data-factory/create-custom-task-py-script-command.png)
- ![In the Folder Path, select the name of the Azure Blob Storage container](./media/run-python-batch-azure-data-factory/create-custom-task-py-script-command.png)
1. Click **Validate** on the pipeline toolbar above the canvas to validate the pipeline settings. Confirm that the pipeline has been successfully validated. To close the validation output, select the &gt;&gt; (right arrow) button.
1. Click **Debug** to test the pipeline and ensure it works accurately.
1. Click **Publish** to publish the pipeline.
In this tutorial, you learned how to:
To learn more about Azure Data Factory, see: > [!div class="nextstepaction"]
-> [Azure Data Factory overview](../data-factory/introduction.md)
+> [Azure Data Factory overview](../data-factory/introduction.md)
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old remote desktop plugins need to have the module
## Key Vault creation
-Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault for appropriate permissions so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. Key Vault can be created through [Azure portal](../key-vault/general/quick-create-portal.md)and [PowerShell](../key-vault/general/quick-create-powershell.md). The Key Vault must be created in the same region and subscription as cloud service. For more information see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
+Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration file. You also need to enable Key Vault for appropriate permissions so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. You can create a key vault in the [Azure portal](../key-vault/general/quick-create-portal.md) or by using [PowerShell](../key-vault/general/quick-create-powershell.md). The key vault must be created in the same region and subscription as the cloud service. For more information, see [Use certificates with Azure Cloud Services (extended support)](certificates-and-key-vault.md).
## Next steps
- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/role-based-access-control.md
Collaborate with other authors and editors using Azure role-based access control
## Access is provided on the QnA Maker resource
-All permissions are controlled by the permissions placed on the QnA Maker resource. These permissions align to read, write, publish, and full access.
+All permissions are controlled by the permissions placed on the QnA Maker resource. These permissions align to read, write, publish, and full access. You can allow collaboration among multiple users by [updating RBAC access](../how-to/manage-qna-maker-app.md) for the QnA Maker resource.
This Azure RBAC feature includes:
* Azure Active Directory (AAD) is 100% backward compatible with key-based authentication for owners and contributors. Customers can use either key-based authentication or Azure RBAC-based authentication in their requests.
If you author and collaborate using the APIs, either through REST or the SDKs, y
## Next step
-* Design a knowledge base for languages and for client applications
+* Design a knowledge base for languages and for client applications
cognitive-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/metadata-generateanswer-usage.md
var qnaResults = await this.qnaMaker.getAnswers(stepContext.context, qnaMakerOpt
The previous JSON requested only answers that are at 30% or above the threshold score.
-## Return Precise Answers
+## Get precise answers with GenerateAnswer API
-### Generate Answer API
+# [QnA Maker GA (stable release)](#tab/v1)
+
+We offer the precise answer feature only with the QnA Maker managed version.
+
+# [QnA Maker managed (preview release)](#tab/v2)
The user can enable [precise answers](../reference-precise-answering.md) when using the QnA Maker managed resource. To do so, update the `answerSpanRequest` parameter in the request body, as in the sketch below.
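
As a rough sketch of such a request (the host name, key, and the exact `answerSpanRequest` fields are placeholders and may differ in the preview API):

```python
import requests

# Placeholders: substitute your runtime endpoint, knowledge base ID, and endpoint key.
endpoint = "https://<your-resource>.azurewebsites.net"
kb_id = "<knowledge-base-id>"
headers = {"Authorization": "EndpointKey <endpoint-key>"}

body = {
    "question": "How long is the warranty?",
    "top": 3,
    "answerSpanRequest": {"enable": True},  # ask for a short, precise answer
}
response = requests.post(
    f"{endpoint}/qnamaker/knowledgebases/{kb_id}/generateAnswer",
    headers=headers, json=body)
print(response.json())
```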
If you want to configure precise answer settings for your bot service, navigate
|Long Answers Only|false|false|
|Both Long and Precise Answers|true|false|
++
## Common HTTP errors

|Code|Explanation|
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
Last updated 11/09/2020
# Recommended settings for network isolation
-You should follow the follow the steps below to restrict public access to QnA Maker resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
-
-## Restrict access to Cognitive Search Resource
-
-# [QnA Maker GA (stable release)](#tab/v1)
-
-Configuring Cognitive Search as a private endpoint inside a VNET. When a Search instance is created during the creation of a QnA Maker resource, you can force Cognitive Search to support a private endpoint configuration created entirely within a customer's VNet.
-
-All resources must be created in the same region to use a private endpoint.
-
-* QnA Maker resource
-* new Cognitive Search resource
-* new Virtual Network resource
-
-Complete the following steps in the [Azure portal](https://portal.azure.com):
-
-1. Create a [QnA Maker resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesQnAMaker).
-2. Create a new Cognitive Search resource with Endpoint connectivity (data) set to _Private_. Create the resource in the same region as the QnA Maker resource created in step 1. Learn more about [creating a Cognitive Search resource](../../../search/search-create-service-portal.md), then use this link to go directly to the [creation page of the resource](https://ms.portal.azure.com/#create/Microsoft.Search).
-3. Create a new [Virtual Network resource](https://ms.portal.azure.com/#create/Microsoft.VirtualNetwork-ARM).
-4. Configure the VNET on the App service resource created in step 1 of this procedure. Create a new DNS entry in the VNET for new Cognitive Search resource created in step 2. to the Cognitive Search IP address.
-5. [Associate the App service to the new Cognitive Search resource](../how-to/set-up-qnamaker-service-azure.md) created in step 2. Then, you can delete the original Cognitive Search resource created in step 1.
-
-In the [QnA Maker portal](https://www.qnamaker.ai/), create your first knowledge base.
-
-# [QnA Maker managed (preview release)](#tab/v2)
-
-[Create Private endpoints](../reference-private-endpoint.md) to the Azure Search resource.
--
+You should follow the steps below to restrict public access to QnA Maker resources. Protect a Cognitive Services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal).
## Restrict access to App Service (QnA Runtime)
-You can add IPs to App service allowlist to restrict access or Configure App Service Environemnt to host QnA Maker App Service.
+You can add IPs to the App Service allow list to restrict access, or configure an App Service Environment to host the QnA Maker App Service.
-#### Add IPs to App Service allowlist
+#### Add IPs to App Service allow list
-1. Allow traffic only from Cognitive Services IPs. These are already included in Service Tag `CognitiveServicesManagement`. This is required for Authoring APIs (Create/Update KB) to invoke the app service and update Azure Search service accordingly. Check out [more information about service tags.](../../../virtual-network/service-tags-overview.md)
+1. Allow traffic only from Cognitive Services IPs. These are already included in Service Tag `CognitiveServicesManagement`. This is required for Authoring APIs (Create/Update KB) to invoke the app service and update Azure Search service accordingly. Check out [more information about service tags.](../../../virtual-network/service-tags-overview.md)
2. Make sure you also allow other entry points like Azure Bot Service, QnA Maker portal, etc. for prediction "GenerateAnswer" API access.
-3. Please follow these steps to add the IP Address ranges to an allowlist:
+3. Please follow these steps to add the IP Address ranges to an allow list:
 1. Download [IP Ranges for all service tags](https://www.microsoft.com/download/details.aspx?id=56519).
 2. Select the IPs of "CognitiveServicesManagement".
- 3. Navigate to the networking section of your App Service resource, and click on "Configure Access Restriction" option to add the IPs to an allowlist.
 3. Navigate to the networking section of your App Service resource, and click on the "Configure Access Restriction" option to add the IPs to an allow list.
-We also have an automated script to do the same for your App Service. You can find the [PowerShell script to configure an allowlist](https://github.com/pchoudhari/QnAMakerBackupRestore/blob/master/AddRestrictedIPAzureAppService.ps1) on GitHub. You need to input subscription id, resource group and actual App Service name as script parameters. Running the script will automatically add the IPs to App Service allowlist.
+We also have an automated script to do the same for your App Service. You can find the [PowerShell script to configure an allow list](https://github.com/pchoudhari/QnAMakerBackupRestore/blob/master/AddRestrictedIPAzureAppService.ps1) on GitHub. You need to input the subscription ID, resource group, and App Service name as script parameters. Running the script will automatically add the IPs to the App Service allow list.
#### Configure App Service Environment to host QnA Maker App Service
The App Service Environment (ASE) can be used to host the QnA Maker App Service. Plea
4. Create a QnA Maker cognitive service instance (Microsoft.CognitiveServices/accounts) using Azure Resource Manager, where the QnA Maker endpoint should be set to the App Service endpoint created above (https://mywebsite.myase.p.azurewebsite.net).
+
+## Restrict access to Cognitive Search Resource
+
+# [QnA Maker GA (stable release)](#tab/v1)
+
+The Cognitive Search instance can be isolated via a Private Endpoint after the QnA Maker resources have been created. Private Endpoint connections require a VNet through which the Search service instance can be accessed.
+
+If the QnA Maker App Service is restricted using an App Service Environment, use the same VNet to create a Private Endpoint connection to the Cognitive Search instance. Create a new DNS entry in the VNet to map the Cognitive Search endpoint to the Cognitive Search Private Endpoint IP address.
+
+If an App Service Environment is not used for the QnAMaker App Service, create a new VNet resource first and then create the Private Endpoint connection to the Cognitive Search instance. In this case, the QnA Maker App Service needs [to be integrated with the VNet](https://docs.microsoft.com/azure/app-service/web-sites-integrate-with-vnet) to connect to the Cognitive Search instance.
+
+# [QnA Maker managed (preview release)](#tab/v2)
+
+[Create Private endpoints](../reference-private-endpoint.md) to the Azure Search resource.
++
cognitive-services Reference Precise Answering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/reference-precise-answering.md
The precise answering feature introduced in QnA Maker managed (Preview), allows
detects the precise short answer from the answer passage, if there is a short answer present as a fact in the answer passage. This feature is on by default in the test pane, so that you can test the functionality specific to your scenario. This feature is extremely beneficial for both content developers as well as
-end users. Now, content developers don't need to manually curate specific QnA pairs for every fact present in the knowledge-base, and the end user doesn't need to look through the whole answer passage returned from the service to find the actual fact that answers the user's query.
+end users. Now, content developers don't need to manually curate specific QnA pairs for every fact present in the knowledge-base, and the end user doesn't need to look through the whole answer passage returned from the service to find the actual fact that answers the user's query. You can fetch [precise answers via the Generate Answer API](How-To/metadata-generateanswer-usage.md#get-precise-answers-with-generateanswer-api).
## Precise answering on QnA Maker portal

In the QnA Maker portal, when you open the test pane, you will see an option to **Display short answer** at the top. This option will be selected by default.
-When you enter a query in the test pane, you will see a short-answer along with the answer passage, if there is a short answer present in the answer passage.
+When you enter a query in the test pane, you will see a short-answer along with the answer passage, if there is a short answer present in the answer passage.
![Managed enabled test pane](../QnAMaker/media/conversational-context/test-pane-with-managed.png)
When you publish a bot, you get the precise answer enabled experience by default
## Language support
-Currently the precise answer feature is only supported for English.
+Currently the precise answer feature is only supported for English.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
A model trained on a subset of scenarios can only perform well in those scenario
> Start with small sets of sample data that match the language and acoustics your model will encounter.
> For example, record a small but representative sample of audio on the same hardware and in the same acoustic environment your model will find in production scenarios.
> Small datasets of representative data can expose problems before you have invested in gathering much larger datasets for training.
+>
+> To quickly get started, consider using sample data. See this GitHub repository for <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_blank">sample Custom Speech data</a>.
## Data types
This table lists accepted data types, when each data type should be used, and th
| [Audio + Human-labeled transcripts](#audio--human-labeled-transcript-data-for-testingtraining) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
| [Related text](#related-text-data-for-training) | No | N/a | Yes | 1-200 MB of related text |
-When you train a new model, start with [related text](#related-text-data-for-training). This data will already improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
- Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can only contain a single data type.

> [!TIP]
-> To quickly get started, consider using sample data. See this GitHub repository for <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">sample Custom Speech data </a>
+> When you train a new model, start with [related text](#related-text-data-for-training). This data will already improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
> [!NOTE]
> Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio. Still it will use all the transcripts.
-
-> [!NOTE]
+>
> In cases when you change the base model used for training, and you have audio in the training dataset, *always* check whether the new selected base model [supports training with audio data](language-support.md#speech-to-text). If the previously used base model did not support training with audio data, and the training dataset contains audio, training time with the new base model will **drastically** increase, and may easily go from several hours to several days and more. This is especially true if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training. > > If you face the issue described in the paragraph above, you can quickly decrease the training time by reducing the amount of audio in the dataset or removing it completely and leaving only the text. The latter option is highly recommended if your Speech service subscription is **not** in a [region with the dedicated hardware](custom-speech-overview.md#set-up-your-azure-account) for training.
cognitive-services How To Use Codec Compressed Audio Input Streams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-use-codec-compressed-audio-input-streams.md
The Speech service SDK **Compressed Audio Input Stream** API provides a way to s
Platform | Languages | Supported GStreamer version
:--- | :--- | :---:
-Windows (excluding UWP) | C++, C#, Java, Python | [1.15.1](https://gstreamer.freedesktop.org/releases/gstreamer/1.5.1.html)
+Windows (excluding UWP) | C++, C#, Java, Python | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/windows/1.18.3/)
Linux | C++, C#, Java, Python | [supported Linux distributions and target architectures](~/articles/cognitive-services/speech-service/speech-sdk.md)
-Android | Java | [1.14.4](https://gstreamer.freedesktop.org/data/pkg/android/1.14.4/)
+Android | Java | [1.18.3](https://gstreamer.freedesktop.org/data/pkg/android/1.18.3/)
## Speech SDK version required for compressed audio input
* Speech SDK version 1.10.0 or later is required for RHEL 8 and CentOS 8.
* Speech SDK version 1.11.0 or later is required for Windows.
+* Speech SDK version 1.16.0 or later is required for the latest GStreamer on Windows and Android.
[!INCLUDE [supported-audio-formats](includes/supported-audio-formats.md)]
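
To show how compressed input is typically wired up, here is a minimal Python sketch; it assumes GStreamer is installed, an MP3 file named `sample.mp3`, and placeholder subscription details:

```python
import azure.cognitiveservices.speech as speechsdk

class BinaryFileReaderCallback(speechsdk.audio.PullAudioInputStreamCallback):
    """Feeds raw compressed bytes from a file to the Speech SDK."""
    def __init__(self, filename):
        super().__init__()
        self._file = open(filename, "rb")

    def read(self, buffer: memoryview) -> int:
        frames = self._file.read(buffer.nbytes)
        buffer[:len(frames)] = frames
        return len(frames)

    def close(self):
        self._file.close()

# Declare the container format so the SDK invokes GStreamer to decode it.
compressed_format = speechsdk.audio.AudioStreamFormat(
    compressed_stream_format=speechsdk.AudioStreamContainerFormat.MP3)
stream = speechsdk.audio.PullAudioInputStream(
    stream_format=compressed_format,
    pull_stream_callback=BinaryFileReaderCallback("sample.mp3"))
audio_config = speechsdk.audio.AudioConfig(stream=stream)

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
print(recognizer.recognize_once().text)
```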
cognitive-services Speech Container Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-faq.md
+
+ Title: Speech service containers frequently asked questions (FAQ)
+
+description: Install and run speech containers. speech-to-text transcribes audio streams to text in real time that your applications, tools, or devices can consume or display. Text-to-speech converts input text into human-like synthesized speech.
+ Last updated : 03/11/2021
+# Speech service containers frequently asked questions (FAQ)
+
+When using the Speech service with containers, rely on this collection of frequently asked questions before escalating to support. This article captures questions of varying degree, from general to technical. To expand an answer, click on the question.
+
+## General questions
+
+<details>
+<summary>
+<b>How do Speech containers work and how do I set them up?</b>
+</summary>
+
+**Answer:** When setting up the production cluster, there are several things to consider. First, setting up a single language with multiple containers on the same machine should not be a large issue. If you are experiencing problems, it may be a hardware-related issue, so we would first look at resources; that is, CPU and memory specifications.
+
+Consider for a moment the `ja-JP` container and the latest model. The acoustic model is the most demanding piece CPU-wise, while the language model demands the most memory. When we benchmarked the use, it takes about 0.6 CPU cores to process a single speech-to-text request when audio is flowing in at real time (like from a microphone). If you are feeding audio faster than real time (like from a file), that usage can double (about 1.2 cores). Meanwhile, the memory listed below is operating memory for decoding speech. It does *not* take into account the actual full size of the language model, which will reside in the file cache. For `ja-JP` that's an additional 2 GB; for `en-US`, it may be more (6-7 GB).
+
+If you have a machine where memory is scarce, and you are trying to deploy multiple languages on it, it is possible that file cache is full, and the OS is forced to page models in and out. For a running transcription, that could be disastrous, and may lead to slowdowns and other performance implications.
+
+Furthermore, we pre-package executables for machines with the [advanced vector extension (AVX2)](speech-container-howto.md#advanced-vector-extension-support) instruction set. A machine with the AVX512 instruction set will require code generation for that target, and starting 10 containers for 10 languages may temporarily exhaust CPU. A message like this one will appear in the docker logs:
+
+```console
+2020-01-16 16:46:54.981118943
+[W:onnxruntime:Default, tvm_utils.cc:276 LoadTVMPackedFuncFromCache]
+Cannot find Scan4_llvm__mcpu_skylake_avx512 in cache, using JIT...
+```
+
+You can set the number of decoders you want inside a *single* container using the `DECODER_MAX_COUNT` variable. So, basically, we should start with your SKU (CPU/memory), and we can suggest how to get the best out of it. A great starting point is referring to the recommended host machine resource specifications.
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Could you help with capacity planning and cost estimation of on-prem Speech-to-text containers?</b>
+</summary>
+
+**Answer:** For container capacity in batch processing mode, each decoder could handle 2-3x in real time, with two CPU cores, for a single recognition. We do not recommend keeping more than two concurrent recognitions per container instance, but recommend running more instances of containers for reliability/availability reasons, behind a load balancer.
+
+That said, each container instance could run with more decoders. For example, we may be able to set up 7 decoders per container instance on an eight-core machine (at more than 2x each), yielding 15x throughput. There is a parameter, `DECODER_MAX_COUNT`, to be aware of. In the extreme case, reliability and latency issues arise as throughput increases significantly. For a microphone, recognition will be at 1x real time. The overall usage should be at about one core for a single recognition.
+
+For a scenario of processing 1,000 hours/day in batch processing mode, in an extreme case, 3 VMs could handle it within 24 hours, but this isn't guaranteed. To handle spike days, failover, and updates, and to provide minimum backup/BCP, we recommend 4-5 machines instead of 3 per cluster, and with 2+ clusters.
+
+For hardware, we use standard Azure VM `DS13_v2` as a reference (each core must be 2.6 GHz or better, with AVX2 instruction set enabled).
+
+| Instance | vCPU(s) | RAM | Temp storage | Pay-as-you-go with AHB | 1-year reserve with AHB (% Savings) | 3-year reserved with AHB (% Savings) |
+|---|---|---|---|---|---|---|
+| `DS13 v2` | 8 | 56 GiB | 112 GiB | $0.598/hour | $0.3528/hour (~41%) | $0.2333/hour (~61%) |
+
+Based on the design reference (two clusters of 5 VMs to handle 1,000 hours/day of audio batch processing), the 1-year hardware cost will be:
+
+> 2 (clusters) * 5 (VMs per cluster) * $0.3528/hour * 365 (days) * 24 (hours) = $31K / year
+
+When mapping to a physical machine, a general estimation is 1 vCPU = 1 physical CPU core. In reality, 1 vCPU is more powerful than a single core.
+
+For on-prem, all of these additional factors come into play:
+
+- What type the physical CPU is and how many cores it has
+- How many CPUs running together on the same box/machine
+- How VMs are set up
+- How hyper-threading / multi-threading is used
+- How memory is shared
+- The OS, etc.
+
+Normally it is not as well tuned as the Azure environment. Considering other overhead, a safe estimation is 10 physical CPU cores = 8 Azure vCPUs, though popular CPUs only have eight cores. With on-prem deployment, the cost will be higher than using Azure VMs. Also, consider the depreciation rate.
+
+Service cost is the same as the online service: $1/hour for speech-to-text. The Speech service cost is:
+
+> $1 * 1000 * 365 = $365K
+
+Maintenance cost paid to Microsoft depends on the service level and content of the service. It varies from $29.99/month for the basic level to hundreds of thousands if onsite service is involved. A rough number is $300/hour for service/maintenance. People cost is not included. Other infrastructure costs (such as storage, networks, and load balancers) are not included.
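+
+To make the arithmetic above easy to rerun with your own numbers, here is a small sketch that reproduces the two estimates (the rates are the ones quoted above and will change over time):
+
+```python
+# Reference design: two clusters of five DS13 v2 VMs at the 1-year reserved rate with AHB.
+clusters, vms_per_cluster = 2, 5
+vm_rate_per_hour = 0.3528
+hardware_per_year = clusters * vms_per_cluster * vm_rate_per_hour * 365 * 24
+print(f"Hardware: ${hardware_per_year:,.0f}/year")  # ~$31K/year
+
+# Service cost: $1 per audio hour, 1,000 hours per day.
+service_per_year = 1 * 1000 * 365
+print(f"Service: ${service_per_year:,}/year")       # $365,000/year
+```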
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Why is punctuation missing from the transcription?</b>
+</summary>
+
+**Answer:** The `speech_recognition_language=<YOUR_LANGUAGE>` should be explicitly configured in the request if you are using the Carbon client (the Speech SDK).
+
+For example:
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# `recognize_once`, `template`, and `audio_config` are defined elsewhere in the sample.
+if not recognize_once(
+ speechsdk.SpeechRecognizer(
+ speech_config=speechsdk.SpeechConfig(
+ endpoint=template.format("interactive"),
+ speech_recognition_language="ja-JP"),
+ audio_config=audio_config)):
+
+ print("Failed interactive endpoint")
+ exit(1)
+```
+Here is the output:
+
+```cmd
+RECOGNIZED: SpeechRecognitionResult(
+ result_id=2111117c8700404a84f521b7b805c4e7,
+ text="まだ早いまだ早いは猫である名前はまだないどこで生まれたかとんと見当を検討をなつかぬ。
+ 何でも薄暗いじめじめした所でながら泣いていた事だけは記憶している。
+ まだは今ここで初めて人間と言うものを見た。
+ しかも後で聞くと、それは書生という人間中で一番同額同額。",
+ reason=ResultReason.RecognizedSpeech)
+```
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Can I use a custom acoustic model and language model with Speech container?</b>
+</summary>
+
+We are currently only able to pass one model ID, either custom language model or custom acoustic model.
+
+**Answer:** The decision was made to *not* support both acoustic and language models concurrently. This will remain in effect until a unified identifier is created to reduce API breaks. So, unfortunately, this is not supported right now.
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Could you explain these errors from the custom speech-to-text container?</b>
+</summary>
+
+**Error 1:**
+
+```cmd
+Failed to fetch manifest: Status: 400 Bad Request Body:
+{
+ "code": "InvalidModel",
+ "message": "The specified model is not supported for endpoint manifests."
+}
+```
+
+**Answer 1:** If you're training with the latest custom model, we currently don't support that. If you train with an older version, it should be possible to use it. We are still working on supporting the latest versions.
+
+Essentially, the custom containers do not support Halide or ONNX-based acoustic models (which is the default in the custom training portal). This is due to custom models not being encrypted, and we don't want to expose ONNX models; however, language models are fine. The customer will need to explicitly select an older non-ONNX model for custom training. Accuracy will not be affected. The model size may be larger (by 100 MB).
+
+> Support model > 20190220 (v4.5 Unified)
+
+**Error 2:**
+
+```cmd
+HTTPAPI result code = HTTPAPI_OK.
+HTTP status code = 400.
+Reason: Synthesis failed.
+StatusCode: InvalidArgument,
+Details: Voice does not match.
+```
+
+**Answer 2:** You need to provide the correct voice name in the request, which is case-sensitive. Refer to the full service name mapping.
+
+**Error 3:**
+
+```json
+{
+ "code": "InvalidProductId",
+ "message": "The subscription SKU \"CognitiveServices.S0\" is not supported in this service instance."
+}
+```
+
+**Answer 3:** You need to create a Speech resource, not a Cognitive Services resource.
++
+<br>
+</details>
+
+<details>
+<summary>
+<b>What API protocols are supported, REST or WS?</b>
+</summary>
+
+**Answer:** For speech-to-text and custom speech-to-text containers, we currently only support the WebSocket-based protocol. The SDK only supports calling over WebSockets, not REST. There's a plan to add REST support, but no ETA for the moment. Always refer to the official documentation; see [query prediction endpoints](speech-container-howto.md#query-the-containers-prediction-endpoint).
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Is CentOS supported for Speech containers?</b>
+</summary>
+
+**Answer:** CentOS 7 is not supported by the Python SDK yet; Ubuntu 19.04 is also not supported.
+
+The Python Speech SDK package is available for these operating systems:
+- **Windows** - x64 and x86
+- **Mac** - macOS X version 10.12 or later
+- **Linux** - Ubuntu 16.04, Ubuntu 18.04, Debian 9 on x64
+
+For more information on environment setup, see [Python platform setup](quickstarts/setup-platform.md?pivots=programming-language-python). For now, Ubuntu 18.04 is the recommended version.
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Why am I getting errors when attempting to call LUIS prediction endpoints?</b>
+</summary>
+
+I am using the LUIS container in an IoT Edge deployment and am attempting to call the LUIS prediction endpoint from another container. The LUIS container is listening on port 5001, and the URL I'm using is this:
+
+```csharp
+var luisEndpoint =
+ $"ws://192.168.1.91:5001/luis/prediction/v3.0/apps/{luisAppId}/slots/production/predict";
+var config = SpeechConfig.FromEndpoint(new Uri(luisEndpoint));
+```
+
+The error I'm getting is:
+
+```cmd
+WebSocket Upgrade failed with HTTP status code: 404 SessionId: 3cfe2509ef4e49919e594abf639ccfeb
+```
+
+I see the request in the LUIS container logs and the message says:
+
+```cmd
+The request path /luis//predict" does not match a supported file type.
+```
+
+What does this mean? What am I missing? I was following the example for the Speech SDK, from [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk). The scenario is that we are detecting the audio directly from the PC microphone and trying to determine the intent, based on the LUIS app we trained. The example I linked to does exactly that. And it works well with the LUIS cloud-based service. Using the Speech SDK seemed to save us from having to make a separate explicit call to the speech-to-text API and then a second call to LUIS.
+
+So, all I am attempting to do is switch from the scenario of using LUIS in the cloud to using the LUIS container. I can't imagine if the Speech SDK works for one, it won't work for the other.
+
+**Answer:**
+The Speech SDK should not be used against a LUIS container. For using the LUIS container, the LUIS SDK or LUIS REST API should be used. Speech SDK should be used against a speech container.
+
+A cloud is different than a container. A cloud can be composed of multiple aggregated containers (sometimes called microservices). So there is a LUIS container and there is a Speech container - two separate containers. The Speech container only does speech. The LUIS container only does LUIS.
+
+In the cloud, because both containers are known to be deployed, and it would be bad performance for a remote client to go to the cloud, do speech, come back, then go to the cloud again and do LUIS, we provide a feature that allows the client to go to Speech, stay in the cloud, go to LUIS, then come back to the client. Thus even in this scenario the Speech SDK goes to the Speech cloud container with audio, and then the Speech cloud container talks to the LUIS cloud container with text. The LUIS container has no concept of accepting audio (it would not make sense for the LUIS container to accept streaming audio - LUIS is a text-based service).
+
+With on-prem, we have no certainty our customer has deployed both containers, and we don't presume to orchestrate between containers in our customers' premises. If both containers are deployed on-prem, given they are more local to the client, it is not a burden to go to speech recognition first, back to the client, and have the client then take that text and go to LUIS.
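+
+A minimal sketch of that on-prem flow, assuming a Speech container on port 5000 and a LUIS container on port 5001 (the app ID, file name, and prediction path are placeholders; check the path against your LUIS container's documentation):
+
+```python
+import requests
+import azure.cognitiveservices.speech as speechsdk
+
+# Step 1: speech-to-text against the local Speech container.
+speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
+audio_config = speechsdk.audio.AudioConfig(filename="command.wav")
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+text = recognizer.recognize_once().text
+
+# Step 2: send the recognized text to the LUIS container's prediction endpoint.
+luis_url = ("http://localhost:5001/luis/prediction/v3.0/apps/"
+            "<your-luis-app-id>/slots/production/predict")
+response = requests.get(luis_url, params={"query": text})
+print(response.json())
+```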
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Why are we getting errors with macOS, Speech container and the Python SDK?</b>
+</summary>
+
+When we send a *.wav* file to be transcribed, the result comes back with:
+
+```cmd
+recognition is running....
+Speech Recognition canceled: CancellationReason.Error
+Error details: Timeout: no recognition result received.
+When creating a websocket connection from the browser as a test, we get:
+wb = new WebSocket("ws://localhost:5000/speech/recognition/dictation/cognitiveservices/v1")
+WebSocket
+{
+ url: "ws://localhost:5000/speech/recognition/dictation/cognitiveservices/v1",
+ readyState: 0,
+ bufferedAmount: 0,
+ onopen: null,
+ onerror: null,
+ ...
+}
+```
+
+We know the websocket is set up correctly.
+
+**Answer:**
+If that is the case, then see [this GitHub issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/310). We have a work-around, [proposed here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/310#issuecomment-527542722).
+
+Carbon fixed this at version 1.8.
++
+<br>
+</details>
+
+<details>
+<summary>
+<b>What are the differences in the Speech container endpoints?</b>
+</summary>
+
+Could you help fill the following test metrics, including what functions to test, and how to test the SDK and REST APIs? Especially, differences in "interactive" and "conversation", which I did not see from existing doc/sample.
+
+| Endpoint | Functional test | SDK | REST API |
+|---|---|---|---|
+| `/speech/synthesize/cognitiveservices/v1` | Synthesize Text (text-to-speech) | | Yes |
+| `/speech/recognition/dictation/cognitiveservices/v1` | Cognitive Services on-prem dictation v1 websocket endpoint | Yes | No |
+| `/speech/recognition/interactive/cognitiveservices/v1` | The Cognitive Services on-prem interactive v1 websocket endpoint | | |
+| `/speech/recognition/conversation/cognitiveservices/v1` | The cognitive services on-prem conversation v1 websocket endpoint | | |
+
+**Answer:**
+This is a fusion of:
+- People trying the dictation endpoint for containers (I'm not sure how they got that URL)
+- The 1<sup>st</sup> party endpoint being the one in a container.
+- The 1<sup>st</sup> party endpoint returning `speech.fragment` messages instead of the `speech.hypothesis` messages the 3<sup>rd</sup> party endpoints return for the dictation endpoint.
+- The Carbon quickstarts all use `RecognizeOnce` (interactive mode)
+- Carbon having an assert that `speech.fragment` messages aren't returned in interactive mode.
+- Carbon having the asserts fire in release builds (killing the process).
+
+The workaround is to either switch to using continuous recognition in your code, or (quicker) connect to either the interactive or continuous endpoints in the container.
+For your code, set the endpoint to `host:port/speech/recognition/interactive/cognitiveservices/v1`.
+
+For the various modes, see the Speech modes section below:
+
+## Speech modes - Interactive, conversation, dictation
++
+The proper fix is coming with SDK 1.8, which has on-prem support (it will pick the right endpoint, so we will be no worse than the online service). In the meantime, there is a sample for continuous recognition:
+
+https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/6805d96bf69d9e95c9137fe129bc5d81e35f6309/samples/python/console/speech_sample.py#L196
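+
+As a quick sketch of the second workaround (connecting directly to the container's interactive endpoint; the host and file name are placeholders, and containers don't validate the key, though the SDK may require a value):
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Point the SDK at the container's interactive endpoint explicitly.
+endpoint = "ws://localhost:5000/speech/recognition/interactive/cognitiveservices/v1"
+speech_config = speechsdk.SpeechConfig(endpoint=endpoint, subscription="<ignored-by-container>")
+audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+print(recognizer.recognize_once().text)
+```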
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Which mode should I use for various audio files?</b>
+</summary>
+
+**Answer:** Here's a [quickstart using Python](./get-started-speech-to-text.md?pivots=programming-language-python). You can find the other languages linked on the docs site.
+
+Just to clarify interactive, conversation, and dictation: this is an advanced way of specifying the particular way in which our service will handle the speech request. Unfortunately, for the on-prem containers we have to specify the full URI (since it includes the local machine), so this information leaked from the abstraction. We are working with the SDK team to make this more usable in the future.
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>How can we benchmark a rough measure of transactions/second/core?</b>
+</summary>
+
+**Answer:** Here are some of the rough numbers to expect from the existing model (they will change for the better in the one we will ship at GA):
+
+- For files, the throttling will be in the Speech SDK, at 2x. The first five seconds of audio are not throttled. The decoder is capable of doing about 3x real time. For this, the overall CPU usage will be close to 2 cores for a single recognition.
+- For mic, it will be at 1x real time. The overall usage should be at about 1 core for a single recognition.
+
+This can all be verified from the docker logs. We actually dump the line with session and phrase/utterance statistics, and that includes the RTF numbers.
++
+<br>
+</details>
+
+<details>
+<summary>
+<b>Is it common to split audio files into chunks for Speech container usage?</b>
+</summary>
+
+My current plan is to take an existing audio file and split it up into 10 second chunks and send those through the container. Is that an acceptable scenario? Is there a better way to process larger audio files with the container?
+
+**Answer:** Just use the Speech SDK and give it the file; it will do the right thing. Why do you need to chunk the file?
++
+<br>
+</details>
+
+<details>
+<summary>
+<b>How do I make multiple containers run on the same host?</b>
+</summary>
+
+The doc says to expose a different port, which I do, but the LUIS container is still listening on port 5000?
+
+**Answer:** Try `-p <outside_unique_port>:5000`. For example, `-p 5001:5000`.
++
+<br>
+</details>
+
+## Technical questions
+
+<details>
+<summary>
+<b>How can I get non-batch APIs to handle audio &gt;15 seconds long?</b>
+</summary>
+
+**Answer:** `RecognizeOnce()` in interactive mode only processes up to 15 seconds of audio, as the mode is intended for Speech Commanding where utterances are expected to be short. If you use `StartContinuousRecognition()` for dictation or conversation, there is no 15 second limit.
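+
+A short sketch of continuous recognition for longer audio (the host and file name are placeholders):
+
+```python
+import time
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
+audio_config = speechsdk.audio.AudioConfig(filename="long_audio.wav")
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+
+done = False
+def stop(evt):
+    global done
+    done = True
+
+# Print each recognized phrase; stop when the session ends or is canceled.
+recognizer.recognized.connect(lambda evt: print(evt.result.text))
+recognizer.session_stopped.connect(stop)
+recognizer.canceled.connect(stop)
+
+recognizer.start_continuous_recognition()
+while not done:
+    time.sleep(0.5)
+recognizer.stop_continuous_recognition()
+```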
++
+<br>
+</details>
+
+<details>
+<summary>
+<b>What are the recommended resources (CPU and RAM) for 50 concurrent requests?</b>
+</summary>
+
+How many concurrent requests will a 4 core, 4 GB RAM machine handle? If we have to serve, for example, 50 concurrent requests, how many cores and how much RAM are recommended?
+
+**Answer:**
+At real time, about 8 concurrent requests with our latest `en-US` model, so we recommend using more Docker containers beyond 6 concurrent requests. It gets crazier beyond 16 cores, and it becomes non-uniform memory access (NUMA) node sensitive. The following table describes the minimum and recommended allocation of resources for each Speech container.
+
+# [Speech-to-text](#tab/stt)
+
+| Container | Minimum | Recommended |
+|---|---|---|
+| Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
+
+# [Custom Speech-to-text](#tab/cstt)
+
+| Container | Minimum | Recommended |
+|---|---|---|
+| Custom Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
+
+# [Text-to-speech](#tab/tts)
+
+| Container | Minimum | Recommended |
+|---|---|---|
+| Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
+
+# [Custom Text-to-speech](#tab/ctts)
+
+| Container | Minimum | Recommended |
+|---|---|---|
+| Custom Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
+
+***
+
+- Each core must be 2.6 GHz or faster.
+- For files, the throttling will be in the Speech SDK, at 2x (first 5 seconds of audio are not throttled).
+- The decoder is capable of doing about 2-3x real time. For this, the overall CPU usage will be close to two cores for a single recognition. That's why we do not recommend keeping more than two active connections per container instance. The extreme side would be to put about 10 decoders at 2x real time in an eight-core machine like `DS13_V2`. For container version 1.3 and later, there's a parameter you could try setting: `DECODER_MAX_COUNT=20`.
+- For microphone, it will be at 1x real time. The overall usage should be at about one core for a single recognition.
+
+Consider the total number of hours of audio you have. If the number is large, to improve reliability/availability, we suggest running more instances of containers, either on a single box or on multiple boxes, behind a load balancer. Orchestration could be done using Kubernetes (K8S) and Helm, or with Docker compose.
+
+As an example, to handle 1,000 hours of audio per 24 hours, we have tried setting up 3-4 VMs, with 10 instances/decoders per VM.
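+
+A back-of-the-envelope capacity check under those assumptions (2-3x real time per decoder) might look like this:
+
+```python
+import math
+
+# Assumptions taken from the guidance above; adjust to your own measurements.
+audio_hours_per_day = 1000
+speedup_per_decoder = 2.5      # each decoder processes ~2-3x real time
+decoders_per_vm = 10
+
+hours_per_vm_per_day = 24 * speedup_per_decoder * decoders_per_vm
+vms_needed = math.ceil(audio_hours_per_day / hours_per_vm_per_day)
+print(f"Each VM processes ~{hours_per_vm_per_day:.0f} audio hours/day")
+print(f"VMs needed with no headroom: {vms_needed}")  # ~2, so 3-4 leaves headroom
+```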
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Does the Speech container support punctuation?</b>
+</summary>
+
+**Answer:** We have capitalization (ITN) available in the on-prem container. Punctuation is language-dependent, and not supported for some languages, including Chinese and Japanese.
+
+We *do* have implicit and basic punctuation support for the existing containers, but it is `off` by default. What that means is that you can get the `.` character in your example, but not the `。` character. To enable this implicit logic, here's an example of how to do so in Python using our Speech SDK (it would be similar in other languages):
+
+```python
+# `speech_config` is the speechsdk.SpeechConfig instance used to build the recognizer.
+speech_config.set_service_property(
+ name='punctuation',
+ value='implicit',
+ channel=speechsdk.ServicePropertyChannel.UriQueryParameter
+)
+```
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Why am I getting 404 errors when attempting to POST data to speech-to-text container?</b>
+</summary>
+
+Here is an example HTTP POST:
+
+```http
+POST /speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
+Accept: application/json;text/xml
+Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
+Transfer-Encoding: chunked
+User-Agent: PostmanRuntime/7.18.0
+Cache-Control: no-cache
+Postman-Token: xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+Host: 10.0.75.2:5000
+Accept-Encoding: gzip, deflate
+Content-Length: 360044
+Connection: keep-alive
+HTTP/1.1 404 Not Found
+Date: Tue, 22 Oct 2019 15:42:56 GMT
+Server: Kestrel
+Content-Length: 0
+```
+
+**Answer:** We do not support the REST API in either speech-to-text container; we only support WebSockets through the Speech SDK. Always refer to the official documentation; see [query prediction endpoints](speech-container-howto.md#query-the-containers-prediction-endpoint).
+
+<br>
+</details>
++
+<details>
+<summary>
+<b> Why is the container running as a non-root user? What issues might occur because of this?</b>
+</summary>
+
+**Answer:** Note that the default user inside the container is a non-root user. This provides protection against processes escaping the container and obtaining escalated permissions on the host node. By default, some platforms like the OpenShift Container Platform already do this by running containers using an arbitrarily assigned user ID. For these platforms, the non-root user will need to have permissions to write to any externally mapped volume that requires writes. For example, a logging folder or a custom model download folder.
+<br>
+</details>
+
+<details>
+<summary>
+<b>When using the speech-to-text service, why am I getting this error?</b>
+</summary>
+
+```cmd
+Error in STT call for file 9136835610040002161_413008000252496:
+{
+ "reason": "ResultReason.Canceled",
+ "error_details": "Due to service inactivity the client buffer size exceeded. Resetting the buffer. SessionId: xxxxx..."
+}
+```
+
+**Answer:** This typically happens when you feed the audio faster than the Speech recognition container can take it. Client buffers fill up, and the cancellation is triggered. You need to control the concurrency and the real-time factor (RTF) at which you send the audio, as in the sketch below.
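+
+One hedged way to control that rate from a file is to push the audio yourself and pace the writes; this sketch assumes 16 kHz, 16-bit mono PCM (the push stream's default format) and placeholder host/file names:
+
+```python
+import time
+import azure.cognitiveservices.speech as speechsdk
+
+stream = speechsdk.audio.PushAudioInputStream()
+audio_config = speechsdk.audio.AudioConfig(stream=stream)
+speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
+recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
+recognizer.start_continuous_recognition()
+
+bytes_per_second = 16000 * 2    # sample rate * 2 bytes per 16-bit sample
+chunk = bytes_per_second // 10  # 100 ms of audio per write
+with open("long_audio.wav", "rb") as f:
+    f.read(44)                  # skip the RIFF/WAV header
+    while True:
+        data = f.read(chunk)
+        if not data:
+            break
+        stream.write(data)
+        time.sleep(0.05)        # pace at roughly 2x real time
+stream.close()
+```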
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>Could you explain these text-to-speech container errors from the C++ examples?</b>
+</summary>
+
+**Answer:** If the container version is older than 1.3, then this code should be used:
+
+```cpp
+const auto endpoint = "http://localhost:5000/speech/synthesize/cognitiveservices/v1";
+auto config = SpeechConfig::FromEndpoint(endpoint);
+auto synthesizer = SpeechSynthesizer::FromConfig(config);
+auto result = synthesizer->SpeakTextAsync("{{{text1}}}").get();
+```
+
+Older containers don't have the required endpoint for Carbon to work with the `FromHost` API. If the container is version 1.3, then this code should be used:
+
+```cpp
+const auto host = "http://localhost:5000";
+auto config = SpeechConfig::FromHost(host);
+config->SetSpeechSynthesisVoiceName(
+ "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)");
+auto synthesizer = SpeechSynthesizer::FromConfig(config);
+auto result = synthesizer->SpeakTextAsync("{{{text1}}}").get();
+```
+
+Below is an example of using the `FromEndpoint` API:
+
+```cpp
+const auto endpoint = "http://localhost:5000/cognitiveservices/v1";
+auto config = SpeechConfig::FromEndpoint(endpoint);
+config->SetSpeechSynthesisVoiceName(
+ "Microsoft Server Speech Text to Speech Voice (en-US, AriaRUS)");
+auto synthesizer = SpeechSynthesizer::FromConfig(config);
+auto result = synthesizer->SpeakTextAsync("{{{text2}}}").get();
+```
+
+The `SetSpeechSynthesisVoiceName` function is called because the containers with an updated text-to-speech engine require the voice name.
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>How can I use v1.7 of the Speech SDK with a Speech container?</b>
+</summary>
+
+**Answer:** There are three endpoints on the Speech container for different usages; they're defined as Speech modes - see below:
+
+## Speech modes
++
+They are for different purposes and are used differently.
+
+Python [samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py):
+- For single recognition (interactive mode) with a custom endpoint (that is, `SpeechConfig` with an endpoint parameter), see `speech_recognize_once_from_file_with_custom_endpoint_parameters()`.
+- For continuous recognition (conversation mode), see `speech_recognize_continuous_from_file()`, and modify it to use a custom endpoint as above.
+- To enable dictation in samples like above (only if you really need it), right after you create `speech_config`, add code `speech_config.enable_dictation()`.
+
+In C# to enable dictation, invoke the `SpeechConfig.EnableDictation()` function.
+
+### `FromEndpoint` APIs
+| Language | API details |
+|-|:|
+| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromendpoint" target="_blank">`SpeechConfig::FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromendpoint" target="_blank">`SpeechConfig.FromEndpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromendpoint" target="_blank">`SpeechConfig.fromendpoint` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithendpoint" target="_blank">`SPXSpeechConfiguration:initWithEndpoint;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig" target="_blank">`SpeechConfig;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| JavaScript | Not currently supported, nor is it planned. |
+
+<br>
+</details>
+
+<details>
+<summary>
+<b>How can I use v1.8 of the Speech SDK with a Speech container?</b>
+</summary>
+
+**Answer:** There's a new `FromHost` API. This does not replace or modify any existing APIs. It just adds an alternative way to create a speech config using a custom host.
+
+### `FromHost` APIs
+
+| Language | API details |
+|--|:-|
+| C# | <a href="https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.fromhost" target="_blank">`SpeechConfig.FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| C++ | <a href="https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#fromhost" target="_blank">`SpeechConfig::FromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Java | <a href="https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechconfig.fromhost" target="_blank">`SpeechConfig.fromHost` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Objective-C | <a href="https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechconfiguration#initwithhost" target="_blank">`SPXSpeechConfiguration:initWithHost;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| Python | <a href="https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig" target="_blank">`SpeechConfig;` <span class="docon docon-navigate-external x-hidden-focus"></span></a> |
+| JavaScript | Not currently supported |
+
+> Parameters: host (mandatory), subscription key (optional; you can omit it if you can use the service without one).
+
+The format for host is `protocol://hostname:port`, where `:port` is optional (see below):
+- If the container is running locally, the hostname is `localhost`.
+- If the container is running on a remote server, use the hostname or IPv4 address of that server.
+
+Host parameter examples for speech-to-text:
+- `ws://localhost:5000` - non-secure connection to a local container using port 5000
+- `ws://some.host.com:5000` - non-secure connection to a container running on a remote server
+
+The Python samples above also work with a container; use the `host` parameter instead of `endpoint`:
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
+```
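+
+A corresponding C# sketch using `SpeechConfig.FromHost` (the host value is an example; pass a subscription key as a second argument only if your setup requires one):
+
+```csharp
+using System;
+using Microsoft.CognitiveServices.Speech;
+
+// Create a speech config from a custom host; no key is needed for a local container.
+var speechConfig = SpeechConfig.FromHost(new Uri("ws://localhost:5000"));
+```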
+
+<br>
+</details>
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Cognitive Services containers](speech-container-howto.md)
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
Using the Azure portal or Azure Resource Manager APIs with Communication Service
### Telephone number management
-Azure Communication Services maintains a directory of phone numbers associated with a Communication Services resource. Use these APIs to retrieve phone numbers and delete them:
+Azure Communication Services maintains a directory of phone numbers associated with a Communication Services resource. Use [Phone Number Administration APIs](/rest/api/communication/phonenumberadministration) to retrieve phone numbers and delete them:
+
+- `Get All Phone Numbers`
- `Release Phone Number`

### Chat
-Chat threads and messages are retained until explicitly deleted. A fully idle thread will be automatically deleted after 30 days. Use [Chat APIs](/rest/api/communication/chat/deletechatmessage/deletechatmessage) to get, list, update, and delete messages.
+Chat threads and messages are retained until explicitly deleted. A fully idle thread will be automatically deleted after 30 days. Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, and delete messages.
- `Get Thread`
- `Get Message`
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Container Instances
+description: Lists Azure Policy built-in policy definitions for Azure Container Instances. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 03/10/2021+++
+# Azure Policy built-in definitions for Azure Container Instances
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Container Instances. For additional Azure Policy built-ins for other services,
+see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Azure Container Instances
++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/pay-bill.md
To pay invoices in the Azure portal, you must have the correct [MCA permissions]
The invoice status shows *paid* within 24 hours.
+## Pay now for customers in India
+
+The Reserve Bank of India issued [new regulations](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=12002&Mode=0) that will take effect on April 1, 2021. After this date, banks in India may start declining automatic recurring payments, and payments will need to be made manually in the Azure portal.
+
+If your bank declines an automatic recurring payment, we'll notify you via email and provide instructions on how to proceed.
+
+Beginning April 1, 2021, you may pay an outstanding balance at any time by following these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/) as the Account Administrator.
+1. Search for **Cost Management + Billing**.
+1. On the Overview page, select the **Pay now** button. (If you don't see the **Pay now** button, you do not have an outstanding balance.)
+
## Check access to a Microsoft Customer Agreement

[!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]

## Next steps

-- To become eligible to pay by check/wire transfer, see [how to pay by invoice](../manage/pay-by-invoice.md)
+- To become eligible to pay by check/wire transfer, see [how to pay by invoice](../manage/pay-by-invoice.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 12/03/2020 Last updated : 03/12/2021 # Troubleshoot CI-CD, Azure DevOps, and GitHub issues in ADF
Azure Resource Manager restricts template size to 4 MB. Limit the size of your
For small to medium solutions, a single template is easier to understand and maintain. You can see all the resources and values in a single file. For advanced scenarios, linked templates enable you to break down the solution into targeted components. Please follow best practice at [Using Linked and Nested Templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell).
-### Cannot connect to GIT Enterprise
+### Cannot connect to GIT Enterprise Cloud
##### Issue
-You cannot connect to GIT Enterprise because of permission issues. You can see error like **422 - Unprocessable Entity.**
+You cannot connect to GIT Enterprise Cloud because of permission issues. You may see an error like **422 - Unprocessable Entity**.
#### Cause
-You have not configured Oauth for ADF. Your URL is misconfigured.
+* You are using an on-premises Git Enterprise server.
+* You have not configured OAuth for ADF.
+* Your URL is misconfigured.
##### Resolution
-You grant Oauth access to ADF at first. Then, you have to use correct URL to connect to GIT Enterprise. The configuration must be set to the customer organization(s). For example, ADF will first try *https://hostname/api/v3/search/repositories?q=user%3<customer credential>....* and fail. Then, it will try *https://hostname/api/v3/orgs/<org>/<repo>...*, and succeed.
+First, grant OAuth access to ADF. Then, use the correct URL to connect to GIT Enterprise. The configuration must be set to the customer organization(s). For example, ADF will first try *https://hostname/api/v3/search/repositories?q=user%3<customer credential>....* and fail. Then, it will try *https://hostname/api/v3/orgs/<org>/<repo>...*, and succeed.
### Recover from a deleted data factory
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-database.md
Previously updated : 01/11/2021 Last updated : 03/12/2021 # Copy and transform data in Azure SQL Database by using Azure Data Factory
More specifically:
Driver={ODBC Driver 17 for SQL Server};Server=<serverName>;Database=<databaseName>;ColumnEncryption=Enabled;KeyStoreAuthentication=KeyVaultClientSecret;KeyStorePrincipalId=<servicePrincipalKey>;KeyStoreSecret=<servicePrincipalKey> ```
- - To use **Data Factory Managed Identity authentication**:
+ - If you run the Self-hosted Integration Runtime on an Azure Virtual Machine, you can use **Managed Identity authentication** with the VM's identity:
   1. Follow the same [prerequisites](#managed-identity) to create a database user for the managed identity and grant the proper role in your database.
   2. In the linked service, specify the ODBC connection string as shown below, and select **Anonymous** authentication because the connection string itself indicates `Authentication=ActiveDirectoryMsi`.
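
   For illustration, a minimal sketch of such an ODBC connection string (the server and database names are placeholders):

   ```
   Driver={ODBC Driver 17 for SQL Server};Server=<serverName>;Database=<databaseName>;Authentication=ActiveDirectoryMsi;
   ```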
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 12/18/2020 Last updated : 03/12/2021 # Copy and transform data in Azure SQL Managed Instance by using Azure Data Factory
More specifically:
Driver={ODBC Driver 17 for SQL Server};Server=<serverName>;Database=<databaseName>;ColumnEncryption=Enabled;KeyStoreAuthentication=KeyVaultClientSecret;KeyStorePrincipalId=<servicePrincipalKey>;KeyStoreSecret=<servicePrincipalKey> ```
- - To use **Data Factory Managed Identity authentication**:
+ - If you run the Self-hosted Integration Runtime on an Azure Virtual Machine, you can use **Managed Identity authentication** with the VM's identity:
   1. Follow the same [prerequisites](#managed-identity) to create a database user for the managed identity and grant the proper role in your database.
   2. In the linked service, specify the ODBC connection string as shown below, and select **Anonymous** authentication because the connection string itself indicates `Authentication=ActiveDirectoryMsi`.
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-sap-table.md
Previously updated : 03/03/2021 Last updated : 03/12/2021 # Copy data from an SAP table by using Azure Data Factory
To copy data from an SAP table, the following properties are supported:
<br/> >To load data partitions in parallel to speed up copy, the parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, Data Factory concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your SAP table. We strongly recommend making `maxPartitionsNumber` a multiple of the value of the `parallelCopies` property. When copying data into a file-based data store, it's also recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.
+
+>[!TIP]
+> `BASXML` is enabled by default for this SAP table connector on the Azure Data Factory side.
+
In `rfcTableOptions`, you can use the following common SAP query operators to filter the rows:

| Operator | Description |
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
Previously updated : 10/18/2018 Last updated : 03/11/2021 # Create a trigger that runs a pipeline in response to a storage event
For a ten-minute introduction and demonstration of this feature, watch the follo
This section shows you how to create a storage event trigger within the Azure Data Factory User Interface.
-1. Switch to the **Edit** tab, shown with a pencil symbol.
+1. Switch to the **Edit** tab, shown with a pencil symbol.
-1. Select **Trigger** on the menu, then select **New/Edit**.
+1. Select **Trigger** on the menu, then select **New/Edit**.
-1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
+1. On the **Add Triggers** page, select **Choose trigger...**, then select **+New**.
1. Select trigger type **Storage Event**
- ![Create new storage event trigger](media/how-to-create-event-trigger/event-based-trigger-image1.png)
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image1.png" alt-text="Screenshot of Author page to create a new storage event trigger in Data Factory UI.":::
1. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is optional, but be mindful that selecting all containers can lead to a large number of events.
This section shows you how to create a storage event trigger within the Azure Da
   > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account.

   > [!NOTE]
- > To create and modify a new Storage Event Trigger, the Azure account used to log into Data Factory must have at least *Owner* permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory does _not_ need special permission to either the Storage account or Event Grid.
+ > To create and modify a new Storage Event Trigger, the Azure account used to log into Data Factory and publish the storage event trigger must have appropriate role-based access control (Azure RBAC) permission on the storage account. No additional permission is required: the Service Principal for the Azure Data Factory does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see the [Role-based access control](#role-based-access-control) section.
1. The **Blob path begins with** and **Blob path ends with** properties allow you to specify the containers, folders, and blob names for which you want to receive events. Your storage event trigger requires at least one of these properties to be defined. You can use a variety of patterns for both **Blob path begins with** and **Blob path ends with** properties, as shown in the examples later in this article.

   * **Blob path begins with:** The blob path must start with a folder path. Valid values include `2018/` and `2018/april/shoes.csv`. This field can't be selected if a container isn't selected.
   * **Blob path ends with:** The blob path must end with a file name or extension. Valid values include `shoes.csv` and `.csv`. Container and folder name are optional but, when specified, they must be separated by a `/blobs/` segment. For example, a container named 'orders' can have a value of `/orders/blobs/2018/april/shoes.csv`. To specify a folder in any container, omit the leading '/' character. For example, `april/shoes.csv` will trigger an event on any file named `shoes.csv` in a folder called 'april' in any container.
- * Note: Blob path **begins with** and **ends with** are the only pattern matching allowed in Storage Event Trigger. Other types of wildcard matching aren't supported for the trigger type.
+ * Note that Blob path **begins with** and **ends with** are the only pattern matching allowed in Storage Event Trigger. Other types of wildcard matching aren't supported for the trigger type.
1. Select whether your trigger will respond to a **Blob created** event, **Blob deleted** event, or both. In your specified storage location, each event will trigger the Data Factory pipelines associated with the trigger.
- ![Configure the storage event trigger](media/how-to-create-event-trigger/event-based-trigger-image2.png)
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image2.png" alt-text="Screenshot of storage event trigger creation page.":::
-1. Select whether or not your trigger ignore blobs with zero bytes.
+1. Select whether or not your trigger ignores blobs with zero bytes.
-1. Once you've configured you trigger, click on **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure you've specific filters. Configuring filters that are too broad can match a large number of files created/deleted and may significantly impact your cost. Once your filter conditions have been verified, click **Finish**.
+1. After you configure your trigger, click **Next: Data preview**. This screen shows the existing blobs matched by your storage event trigger configuration. Make sure your filters are specific. Configuring filters that are too broad can match a large number of created or deleted files and may significantly impact your cost. After your filter conditions have been verified, click **Finish**.
- ![Storage Event trigger data preview](media/how-to-create-event-trigger/event-based-trigger-image3.png)
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image3.png" alt-text="Screenshot of storage event trigger preview page.":::
1. To attach a pipeline to this trigger, go to the pipeline canvas and click **Add trigger** and select **New/Edit**. When the side nav appears, click on the **Choose trigger...** dropdown and select the trigger you created. Click **Next: Data preview** to confirm the configuration is correct and then **Next** to validate the Data preview is correct.
-1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. Click **Finish** once you are done.
+1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For a detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md).
- ![Mapping properties to pipeline parameters](media/how-to-create-event-trigger/event-based-trigger-image4.png)
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters.":::
+
+ In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively.
-In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder event-testing in the container sample-data. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped in the example to the pipeline parameters `sourceFolder` and `sourceFile` which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively.
+1. Click **Finish** once you are done.
## JSON schema
The following table provides an overview of the schema elements that are related
| --- | --- | --- | --- | --- |
| **scope** | The Azure Resource Manager resource ID of the Storage Account. | String | Azure Resource Manager ID | Yes |
| **events** | The type of events that cause this trigger to fire. | Array | Microsoft.Storage.BlobCreated, Microsoft.Storage.BlobDeleted | Yes, any combination of these values. |
-| **blobPathBeginsWith** | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | You have to provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. |
+| **blobPathBeginsWith** | The blob path must begin with the pattern provided for the trigger to fire. For example, `/records/blobs/december/` only fires the trigger for blobs in the `december` folder under the `records` container. | String | | Provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. |
| **blobPathEndsWith** | The blob path must end with the pattern provided for the trigger to fire. For example, `december/boxes.csv` only fires the trigger for blobs named `boxes` in a `december` folder. | String | | You have to provide a value for at least one of these properties: `blobPathBeginsWith` or `blobPathEndsWith`. |
| **ignoreEmptyBlobs** | Whether or not zero-byte blobs will trigger a pipeline run. By default, this is set to true. | Boolean | true or false | No |
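
Putting these schema elements together, a minimal sketch of a storage event trigger definition might look like the following (the subscription, resource group, storage account, and pipeline names are placeholders):

```json
{
    "name": "StorageEventTrigger",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
            "events": [ "Microsoft.Storage.BlobCreated" ],
            "blobPathBeginsWith": "/sample-data/blobs/event-testing/",
            "blobPathEndsWith": ".csv",
            "ignoreEmptyBlobs": true
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "<pipeline-name>",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "sourceFolder": "@triggerBody().folderPath",
                    "sourceFile": "@triggerBody().fileName"
                }
            }
        ]
    }
}
```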
This section provides examples of storage event trigger settings.
| **Blob path ends with** | `/containername/blobs/file.txt` | Receives events for a blob named `file.txt` under container `containername`. |
| **Blob path ends with** | `foldername/file.txt` | Receives events for a blob named `file.txt` in `foldername` folder under any container. |
+## Role-based access control
+
+Azure Data Factory uses Azure role-based access control (Azure RBAC) to ensure that unauthorized users cannot listen to blob events, subscribe to updates from them, or trigger pipelines linked to them.
+
+* To successfully create a new or update an existing Storage Event Trigger, the Azure account signed into the Data Factory needs to have appropriate access to the relevant storage account. Otherwise, the operation will fail with _Access Denied_.
+* Data Factory needs no special permission to your Event Grid, and you do _not_ need to assign special RBAC permission to Data Factory service principal for the operation.
+
+Any of the following RBAC settings works for the storage event trigger; a CLI sketch for granting one of these roles follows the list:
+
+* Owner role to the storage account
+* Contributor role to the storage account
+* _Microsoft.EventGrid/EventSubscriptions/Write_ permission to storage account _/subscriptions/####/resourceGroups/####/providers/Microsoft.Storage/storageAccounts/storageAccountName_
+
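+For example, a minimal Azure CLI sketch that grants the Contributor role on the storage account (all IDs are placeholders):
+
+```azurecli
+az role assignment create \
+  --assignee "<user-or-service-principal-object-id>" \
+  --role "Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+```
+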
+To understand how Azure Data Factory delivers these two promises, let's take a step back and peek behind the scenes. Here are the high-level workflows for integration among Data Factory, Storage, and Event Grid.
+
+### Create a new Storage Event Trigger
+
+This high-level workflow describes how Azure Data Factory interacts with Event Grid to create a Storage Event Trigger:
++
+Two noticeable callouts from the workflow:
+
+* Azure Data Factory makes _no_ direct contact with the Storage account. The request to create a subscription is instead relayed to and processed by Event Grid. Hence, Data Factory needs no permission to the Storage account for this step.
+
+* Access control and permission checking happen on the Azure Data Factory side. Before ADF sends a request to subscribe to the storage event, it checks the permission for the user. More specifically, it checks whether the Azure account that signed in and is attempting to create the Storage Event Trigger has appropriate access to the relevant storage account. If the permission check fails, trigger creation also fails.
+
+### Storage event trigger Data Factory pipeline run
+
+This high-level workflow describes how a storage event triggers a pipeline run through Event Grid:
++
+When it comes to an event triggering a pipeline in Data Factory, there are three noticeable callouts in the workflow:
+
+* Event Grid uses a Push model: it relays the message as soon as possible after storage drops the message into the system. This differs from a messaging system such as Kafka, where a Pull model is used.
+* The Event Trigger in Azure Data Factory serves as an active listener for the incoming message, and it triggers the associated pipeline.
+* The Storage Event Trigger itself makes no direct contact with the Storage account.
+
+ * That said, if you have a Copy or other activity inside the pipeline to process the data in the Storage account, Data Factory will make direct contact with Storage using the credentials stored in the Linked Service. Ensure that the Linked Service is set up appropriately.
+ * However, if you make no reference to the Storage account in the pipeline, you do not need to grant Data Factory permission to access the Storage account.
+
## Next steps

* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution).
digital-twins How To Use Postman https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-postman.md
Otherwise, you can open an [Azure Cloud Shell](https://shell.azure.com) window i
az login ```
-1. Next, use the [az account get-access-token](/cli/azure/account#az_account_get_access_token) command to get a bearer token with access to the Azure Digital Twins service.
+1. Next, use the [az account get-access-token](/cli/azure/account#az_account_get_access_token) command to get a bearer token with access to the Azure Digital Twins service. In this command, you'll pass in the resource ID for the Azure Digital Twins service endpoint (a static value of `0b07f429-9f4b-4714-9392-cc5e8e80c8b0`), in order to get an access token that can access Azure Digital Twins resources.
```azurecli-interactive az account get-access-token --resource 0b07f429-9f4b-4714-9392-cc5e8e80c8b0
digital-twins Quickstart Adt Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/quickstart-adt-explorer.md
Open a console window to the folder location **Azure_Digital_Twins__ADT__explore
1. Enter the Azure Digital Twins instance URL that you gathered earlier in the [Set up an Azure Digital Twins instance](#set-up-an-azure-digital-twins-instance) section, in the format *https://{instance host name}*.
->[!NOTE]
-> You can revisit or edit this information at any time by selecting the same icon to open the **Sign In** box again. It will keep the values that you passed in.
-
> [!TIP]
> If a `SignalRService.subscribe` error message appears when you connect, make sure that your Azure Digital Twins URL begins with *https://*.
+> [!TIP]
+> If an authentication error appears, you may want to check your environment variables to make sure any credentials included there are valid for Azure Digital Twins. The DefaultAzureCredential attempts to authenticate against [Credential Types](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) in a specific order, and environment variables are evaluated first.
+
If you see a **Permissions requested** pop-up window from Microsoft, grant consent for this application and accept to continue.
+>[!NOTE]
+> You can revisit or edit this information at any time by selecting the same icon to open the **Sign In** box again. It will keep the values that you passed in.
+
## Add the sample data

Next, you'll import the sample scenario and graph into Azure Digital Twins Explorer. The sample scenario is also located in the **Azure_Digital_Twins__ADT__explorer** folder you downloaded earlier.
You may also want to delete the project folder from your local machine.
Next, continue on to the Azure Digital Twins tutorials to build out your own Azure Digital Twins scenario and interaction tools. > [!div class="nextstepaction"]
-> [Tutorial: Code a client app](tutorial-code.md)
+> [Tutorial: Code a client app](tutorial-code.md)
event-grid Event Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-filtering.md
FOR_EACH filter IN (a, b, c)
    IF key CONTAINS filter
        FAIL_MATCH
```
+See the [Limitations](#limitations) section for the current limitations of this operator.
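+
+For illustration, a hedged sketch of how this operator might appear in an event subscription's filter (the key name is an example):
+
+```json
+"filter": {
+    "advancedFilters": [
+        {
+            "operatorType": "StringNotContains",
+            "key": "data.key1",
+            "values": [ "a", "b", "c" ]
+        }
+    ]
+}
+```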
## StringBeginsWith

The **StringBeginsWith** operator evaluates to true if the **key** value **begins with** any of the specified **filter** values. In the following example, it checks whether the value of the `key1` attribute in the `data` section begins with `event` or `grid`. For example, `event hubs` begins with `event`.
Advanced filtering has the following limitations:
* 5 advanced filters and 25 filter values across all the filters per event grid subscription
* 512 characters per string value
* Five values for **in** and **not in** operators
+* The `StringNotContains` operator is currently not available in the portal.
* Keys with **`.` (dot)** character in them. For example: `http://schemas.microsoft.com/claims/authnclassreference` or `john.doe@contoso.com`. Currently, there's no support for escape characters in keys.

The same key can be used in more than one filter.
event-hubs Event Hubs Amqp Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-hubs/event-hubs-amqp-troubleshoot.md
+
+ Title: Troubleshoot AMQP errors in Azure Event Hubs | Microsoft Docs
+description: Provides a list of AMQP errors you may receive when using Azure Event Hubs, and the causes of those errors.
+ Last updated : 06/23/2020+++
+# AMQP errors in Azure Event Hubs
+This article lists some of the errors you may receive when using AMQP with Azure Event Hubs. They are all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically re-creates the connection/link; a minimal keep-alive sketch follows.
+
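+As an illustration (a minimal sketch assuming the `azure-eventhub` Python package; the connection string and hub name are placeholders), sending periodically keeps the link active so the idle timers described below never expire:
+
+```python
+import time
+from azure.eventhub import EventHubProducerClient, EventData
+
+producer = EventHubProducerClient.from_connection_string(
+    conn_str="<connection-string>", eventhub_name="<event-hub-name>")
+
+# Sending well within the 10-minute link idle timeout keeps the link active;
+# even if the link was already detached, the send re-creates it.
+while True:
+    batch = producer.create_batch()
+    batch.add(EventData("heartbeat"))
+    producer.send_batch(batch)
+    time.sleep(300)
+```
+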
+## Link is closed
+You see the following error when the AMQP connection and link are active but no calls (for example, send or receive) are made using the link for 10 minutes, so the link is closed. The connection remains open.
+
+"
+AMQP:link: detach-forced: The link 'G2:7223832:user.tenant0.cud_00000000000-0000-0000-0000-00000000000000' is force detached by the broker because of errors occurred in publisher(link164614). Detach origin: AmqpMessagePublisher.IdleTimerExpired: Idle timeout: 00:10:00. TrackingId:00000000000000000000000000000000000000_G2_B3, SystemTracker:mynamespace:Topic:MyTopic, Timestamp: 2/16/2018 11:10:40 PM
+"
+
+## Connection is closed
+You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes.
+
+"
+Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null}
+"
+
+## Link isn't created
+You see this error when a new AMQP connection is created but a link isn't created within 1 minute of the creation of the AMQP Connection.
+
+"
+Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:0000000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T18:41:51', info=null}
+"
+
+## Next steps
+To learn more about AMQP, see [AMQP 1.0 protocol guide](../service-bus-messaging/service-bus-amqp-protocol-guide.md).
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
Previously updated : 03/07/2021 Last updated : 03/12/2021
If your ExpressRoute circuit is enabled for Azure Microsoft peering, you can acc
* [Windows Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) * Multi-factor Authentication Server (legacy) * Traffic Manager
+* Logic Apps
### Public peering
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 03/08/2021 Last updated : 03/12/2021 + # Azure Firewall Premium Preview features
or
You're welcome to submit a request at [https://aka.ms/azfw-webcategories-request](https://aka.ms/azfw-webcategories-request).
+## Supported regions
+
+Azure Firewall Premium Preview is supported in the following regions:
+
+- West Europe (Public / Europe)
+- East US (Public / United States)
+- Australia East (Public / Australia)
+- Southeast Asia (Public / Asia Pacific)
+- UK South (Public / United Kingdom)
+- North Europe (Public / Europe)
+- East US 2 (Public / United States)
+- South Central US (Public / United States)
+- West US 2 (Public / United States)
+- West US (Public / United States)
+- Central US (Public / United States)
+- North Central US (Public / United States)
+- Japan East (Public / Japan)
+- East Asia (Public / Asia Pacific)
+- Canada Central (Public / Canada)
+- France Central (Public / France)
+- South Africa North (Public / South Africa)
+- UAE North (Public / UAE)
+- Switzerland North (Public / Switzerland)
+- Brazil South (Public / Brazil)
+- Norway East (Public / Norway)
+- Australia Central (Public / Australia)
+- Australia Central 2 (Public / Australia)
+- Australia Southeast (Public / Australia)
+- Canada East (Public / Canada)
+- Central US EUAP (Public / Canary (US))
+- France South (Public / France)
+- Japan West (Public / Japan)
+- Korea South (Public / Korea)
+- UAE Central (Public / UAE)
+- UK West (Public / United Kingdom)
+- West Central US (Public / United States)
+- West India (Public / India)
+ ## Known issues
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
Title: Deploy Azure Security Benchmark Foundation blueprint sample description: Deploy steps for the Azure Security Benchmark Foundation blueprint sample including blueprint artifact parameter details. Previously updated : 02/18/2020 Last updated : 03/12/2021 # Deploy the Azure Security Benchmark Foundation blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/index.md
Title: Azure Security Benchmark Foundation blueprint sample overview description: Overview and architecture of the Azure Security Benchmark Foundation blueprint sample. Previously updated : 02/17/2020 Last updated : 03/12/2021 # Overview of the Azure Security Benchmark Foundation blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-foundation/deploy.md
Title: Deploy CAF Foundation blueprint sample description: Deploy steps for the CAF Foundation blueprint sample including blueprint artifact parameter details. Previously updated : 05/06/2020 Last updated : 03/12/2021 # Deploy the Microsoft Cloud Adoption Framework for Azure Foundation blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-foundation/index.md
Title: CAF Foundation blueprint sample overview description: Overview and architecture of the Cloud Adoption Framework (CAF) for Azure Foundation blueprint sample. Previously updated : 09/14/2020 Last updated : 03/12/2021 # Overview of the Microsoft Cloud Adoption Framework for Azure Foundation blueprint sample
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/deploy.md
Title: Deploy CAF Migration landing zone blueprint sample description: Deploy steps for the CAF Migration landing zone blueprint sample including blueprint artifact parameter details. Previously updated : 05/06/2020 Last updated : 03/12/2021 # Deploy the Microsoft Cloud Adoption Framework for Azure migrate landing zone blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/index.md
Title: CAF Migration landing zone blueprint sample overview description: Overview and architecture of the Cloud Adoption Framework (CAF) for Azure Migration landing zone blueprint sample. Previously updated : 09/14/2020 Last updated : 03/12/2021 # Overview of the Microsoft Cloud Adoption Framework for Azure Migration landing zone blueprint sample
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-linux.md
+
+ Title: Reference - Azure Policy Guest Configuration baseline for Linux
+description: Details of the Linux baseline on Azure implemented through Azure Policy Guest Configuration.
Last updated : 03/12/2021+++
+# Azure Policy Guest Configuration baseline for Linux
+
+The following article details what the **\[Preview\] Linux machines should meet requirements for the
+Azure security baseline** Guest Configuration policy definition audits. For more information, see
+[Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
+[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+
+## General security controls
+
+|Name<br /><sub>(ID)</sub> |Details |Remediation check |
+||||
+|Ensure nodev option set on /home partition.<br /><sub>(1.1.4)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /home partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /home partition. For more information, see the fstab(5) manual pages. |
+|Ensure nodev option set on /tmp partition.<br /><sub>(1.1.5)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nodev option set on /var/tmp partition.<br /><sub>(1.1.6)</sub> |Description: An attacker could mount a special device (for example, block or character device) on the /var/tmp partition. |Edit the /etc/fstab file and add nodev to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /tmp partition.<br /><sub>(1.1.7)</sub> |Description: Since the /tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure nosuid option set on /var/tmp partition.<br /><sub>(1.1.8)</sub> |Description: Since the /var/tmp filesystem is only intended for temporary file storage, set this option to ensure that users cannot create setuid files in /var/tmp. |Edit the /etc/fstab file and add nosuid to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /var/tmp partition.<br /><sub>(1.1.9)</sub> |Description: Since the `/var/tmp` filesystem is only intended for temporary file storage, set this option to ensure that users cannot run executable binaries from `/var/tmp` . |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /var/tmp partition. For more information, see the fstab(5) manual pages. |
+|Ensure noexec option set on /dev/shm partition.<br /><sub>(1.1.16)</sub> |Description: Setting this option on a file system prevents users from executing programs from shared memory. This deters users from introducing potentially malicious software on the system. |Edit the /etc/fstab file and add noexec to the fourth field (mounting options) for the /dev/shm partition. For more information, see the fstab(5) manual pages. |
+|Disable automounting<br /><sub>(1.1.21)</sub> |Description: With automounting enabled anyone with physical access could attach a USB drive or disc and have its contents available in system even if they lacked permissions to mount it themselves. |Disable the autofs service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-autofs' |
+|Ensure mounting of USB storage devices is disabled<br /><sub>(1.1.21.1)</sub> |Description: Removing support for USB storage devices reduces the local attack surface of the server. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install usb-storage /bin/true` then unload the usb-storage module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure core dumps are restricted.<br /><sub>(1.5.1)</sub> |Description: Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see `limits.conf(5)` ). In addition, setting the `fs.suid_dumpable` variable to 0 will prevent setuid programs from dumping core. |Add `hard core 0` to /etc/security/limits.conf or a file in the limits.d directory and set `fs.suid_dumpable = 0` in sysctl or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-core-dumps' |
+|Ensure prelink is disabled.<br /><sub>(1.5.4)</sub> |Description: The prelinking feature can interfere with the operation of AIDE, because it changes binaries. Prelinking can also increase the vulnerability of the system if a malicious user is able to compromise a common library such as libc. |uninstall `prelink` using your package manager or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-prelink' |
+|Ensure permissions on /etc/motd are configured.<br /><sub>(1.7.1.4)</sub> |Description: If the `/etc/motd` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/motd to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/issue are configured.<br /><sub>(1.7.1.5)</sub> |Description: If the `/etc/issue` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/issue.net are configured.<br /><sub>(1.7.1.6)</sub> |Description: If the `/etc/issue.net` file does not have the correct ownership, it could be modified by unauthorized users with incorrect or misleading information. |Set the owner and group of /etc/issue.net to root and set permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|The nodev option should be enabled for all removable media.<br /><sub>(2.1)</sub> |Description: An attacker could mount a special device (for example, block or character device) via removable media |Add the nodev option to the fourth field (mounting options) in /etc/fstab |
+|The noexec option should be enabled for all removable media.<br /><sub>(2.2)</sub> |Description: An attacker could load executable file via removable media |Add the noexec option to the fourth field (mounting options) in /etc/fstab |
+|The nosuid option should be enabled for all removable media.<br /><sub>(2.3)</sub> |Description: An attacker could load files that run with an elevated security context via removable media |Add the nosuid option to the fourth field (mounting options) in /etc/fstab |
+|Ensure talk client is not installed.<br /><sub>(2.3.3)</sub> |Description: The software presents a security risk as it uses unencrypted protocols for communication. |Uninstall `talk` or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-talk' |
+|Ensure permissions on /etc/hosts.allow are configured.<br /><sub>(3.4.4)</sub> |Description: It is critical to ensure that the `/etc/hosts.allow` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.allow to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure permissions on /etc/hosts.deny are configured.<br /><sub>(3.4.5)</sub> |Description: It is critical to ensure that the `/etc/hosts.deny` file is protected from unauthorized write access. Although it is protected by default, the file permissions could be changed either inadvertently or through malicious actions. |Set the owner and group of /etc/hosts.deny to root and the permissions to 0644 or run '/opt/microsoft/omsagent/plugin/omsremediate -r file-permissions' |
+|Ensure default deny firewall policy<br /><sub>(3.6.2)</sub> |Description: With a default accept policy the firewall will accept any packet that is not configured to be denied. It is easier to maintain a secure firewall with a default DROP policy than it is with a default ALLOW policy. |Set the default policy for incoming, outgoing and routed traffic to `deny` or `reject` as appropriate using your firewall software |
+|The nodev/nosuid option should be enabled for all NFS mounts.<br /><sub>(5)</sub> |Description: An attacker could load files that run with an elevated security context or special devices via remote file system |Add the nosuid and nodev options to the fourth field (mounting options) in /etc/fstab |
+|Ensure password creation requirements are configured.<br /><sub>(5.3.1)</sub> |Description: Strong passwords protect systems from being hacked through brute force methods. |Set the following key/value pairs in the appropriate PAM for your distro: minlen=14, minclass = 4, dcredit = -1, ucredit = -1, ocredit = -1, lcredit = -1, or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-password-requirements' |
+|Ensure lockout for failed password attempts is configured.<br /><sub>(5.3.2)</sub> |Description: Locking out user IDs after `n` unsuccessful consecutive login attempts mitigates brute force password attacks against your systems. |For Ubuntu and Debian, add the pam_tally and pam_deny modules as appropriate. For all other distros, refer to your distro's documentation. |
+|Disable the installation and use of file systems that are not required (cramfs)<br /><sub>(6.1)</sub> |Description: An attacker could use a vulnerability in cramfs to elevate privileges |Add a file to the /etc/modprobe.d directory that disables cramfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that are not required (freevxfs)<br /><sub>(6.2)</sub> |Description: An attacker could use a vulnerability in freevxfs to elevate privileges |Add a file to the /etc/modprobe.d directory that disables freevxfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure all users' home directories exist<br /><sub>(6.2.7)</sub> |Description: If the user's home directory does not exist or is unassigned, the user will be placed in '/' and will not be able to write any files or have local environment variables set. |If any users' home directories do not exist, create them and make sure the respective user owns the directory. Users without an assigned home directory should be removed or assigned a home directory as appropriate. |
+|Ensure users own their home directories<br /><sub>(6.2.9)</sub> |Description: Since the user is accountable for files stored in the user home directory, the user must be the owner of the directory. |Change the ownership of any home directories that are not owned by the defined user to the correct user. |
+|Ensure users' dot files are not group or world writable.<br /><sub>(6.2.10)</sub> |Description: Group or world-writable user configuration files may enable malicious users to steal or modify other users' data or to gain another user's system privileges. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user dot file permissions and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .forward files<br /><sub>(6.2.11)</sub> |Description: Use of the `.forward` file poses a security risk in that sensitive data may be inadvertently transferred outside the organization. The `.forward` file also poses a risk as it can be used to execute commands that may perform unintended actions. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.forward` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .netrc files<br /><sub>(6.2.12)</sub> |Description: The `.netrc` file presents a significant security risk since it stores passwords in unencrypted form. Even if FTP is disabled, user accounts may have brought over `.netrc` files from other systems which could pose a risk to those systems |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.netrc` files and determine the action to be taken in accordance with site policy. |
+|Ensure no users have .rhosts files<br /><sub>(6.2.14)</sub> |Description: This action is only meaningful if `.rhosts` support is permitted in the file `/etc/pam.conf` . Even though the `.rhosts` files are ineffective if support is disabled in `/etc/pam.conf` , they may have been brought over from other systems and could contain information useful to an attacker for those other systems. |Making global modifications to users' files without alerting the user community can result in unexpected outages and unhappy users. Therefore, it is recommended that a monitoring policy be established to report user `.rhosts` files and determine the action to be taken in accordance with site policy. |
+|Ensure all groups in /etc/passwd exist in /etc/group<br /><sub>(6.2.15)</sub> |Description: Groups which are defined in the /etc/passwd file but not in the /etc/group file pose a threat to system security since group permissions are not properly managed. |For each group defined in /etc/passwd, ensure there is a corresponding group in /etc/group |
+|Ensure no duplicate UIDs exist<br /><sub>(6.2.16)</sub> |Description: Users must be assigned unique UIDs for accountability and to ensure appropriate access protections. |Establish unique UIDs and review all files owned by the shared UIDs to determine which UID they are supposed to belong to. |
+|Ensure no duplicate GIDs exist<br /><sub>(6.2.17)</sub> |Description: Groups must be assigned unique GIDs for accountability and to ensure appropriate access protections. |Establish unique GIDs and review all files owned by the shared GIDs to determine which GID they are supposed to belong to. |
+|Ensure no duplicate user names exist<br /><sub>(6.2.18)</sub> |Description: If a user is assigned a duplicate user name, it will create and have access to files with the first UID for that username in `/etc/passwd` . For example, if 'test4' has a UID of 1000 and a subsequent 'test4' entry has a UID of 2000, logging in as 'test4' will use UID 1000. Effectively, the UID is shared, which is a security problem. |Establish unique user names for the users. File ownerships will automatically reflect the change as long as the users have unique UIDs. |
+|Ensure no duplicate groups exist<br /><sub>(6.2.19)</sub> |Description: If a group is assigned a duplicate group name, it will create and have access to files with the first GID for that group in `/etc/group` . Effectively, the GID is shared, which is a security problem. |Establish unique names for the user groups. File group ownerships will automatically reflect the change as long as the groups have unique GIDs. |
+|Ensure shadow group is empty<br /><sub>(6.2.20)</sub> |Description: Any users assigned to the shadow group would be granted read access to the /etc/shadow file. If attackers can gain read access to the `/etc/shadow` file, they can easily run a password cracking program against the hashed passwords to break them. Other security information that is stored in the `/etc/shadow` file (such as expiration) could also be useful to subvert additional user accounts. |Remove all users from the shadow group |
+|Disable the installation and use of file systems that are not required (hfs)<br /><sub>(6.3)</sub> |Description: An attacker could use a vulnerability in hfs to elevate privileges |Add a file to the /etc/modprobe.d directory that disables hfs or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that are not required (hfsplus)<br /><sub>(6.4)</sub> |Description: An attacker could use a vulnerability in hfsplus to elevate privileges |Add a file to the /etc/modprobe.d directory that disables hfsplus or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable the installation and use of file systems that are not required (jffs2)<br /><sub>(6.5)</sub> |Description: An attacker could use a vulnerability in jffs2 to elevate privileges |Add a file to the /etc/modprobe.d directory that disables jffs2 or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Kernels should only be compiled from approved sources.<br /><sub>(10)</sub> |Description: A kernel from an unapproved source could contain vulnerabilities or backdoors to grant access to an attacker. |Install the kernel that is provided by your distro vendor. |
+|/etc/shadow file permissions should be set to 0400<br /><sub>(11.1)</sub> |Description: An attacker could retrieve or manipulate hashed passwords from /etc/shadow if it is not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
+|/etc/shadow- file permissions should be set to 0400<br /><sub>(11.2)</sub> |Description: An attacker could retrieve or manipulate hashed passwords from /etc/shadow- if it is not correctly secured. |Set the permissions and ownership of /etc/shadow* or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-shadow-perms' |
+|/etc/gshadow file permissions should be set to 0400<br /><sub>(11.3)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/gshadow- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
+|/etc/gshadow- file permissions should be set to 0400<br /><sub>(11.4)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/gshadow or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-gshadow-perms' |
+|/etc/passwd file permissions should be 0644<br /><sub>(12.1)</sub> |Description: An attacker could modify userIDs and login shells |Set the permissions and ownership of /etc/passwd or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-passwd-perms' |
+|/etc/group file permissions should be 0644<br /><sub>(12.2)</sub> |Description: An attacker could elevate privileges by modifying group membership |Set the permissions and ownership of /etc/group or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-group-perms' |
+|/etc/passwd- file permissions should be set to 0600<br /><sub>(12.3)</sub> |Description: An attacker could join security groups if this file is not properly secured |Set the permissions and ownership of /etc/passwd- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-passwd-perms' |
+|/etc/group- file permissions should be 0644<br /><sub>(12.4)</sub> |Description: An attacker could elevate privileges by modifying group membership |Set the permissions and ownership of /etc/group- or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-etc-group-perms' |
+|Access to the root account via su should be restricted to the 'root' group<br /><sub>(21)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r fix-su-permissions'. This will add the line 'auth required pam_wheel.so use_uid' to the file '/etc/pam.d/su' |
+|The 'root' group should exist, and contain all members who can su to root<br /><sub>(22)</sub> |Description: An attacker could escalate permissions by password guessing if su is not restricted to users in the root group. |Create the root group via the command 'groupadd -g 0 root' |
+|There are no accounts without passwords<br /><sub>(23.2)</sub> |Description: An attacker can log in to accounts with no password and execute arbitrary commands. |Use the passwd command to set passwords for all accounts |
+|Accounts other than root must have unique UIDs greater than zero (0)<br /><sub>(24)</sub> |Description: If an account other than root has UID zero, an attacker could compromise the account and gain root privileges. |Assign unique, non-zero UIDs to all non-root accounts using 'usermod -u' |
+|Randomized placement of virtual memory regions should be enabled<br /><sub>(25)</sub> |Description: An attacker could write executable code to known regions in memory resulting in elevation of privilege |Add the value '1' or '2' to the file '/proc/sys/kernel/randomize_va_space' |
+|Kernel support for the XD/NX processor feature should be enabled<br /><sub>(26)</sub> |Description: An attacker could cause a system to execute code from data regions in memory, resulting in elevation of privilege. |Confirm the file '/proc/cpuinfo' contains the flag 'nx' |
+|The '.' should not appear in root's $PATH<br /><sub>(27.1)</sub> |Description: An attacker could elevate privileges by placing a malicious file in root's $PATH |Modify the 'export PATH=' line in /root/.profile |
+|User home directories should be mode 750 or more restrictive<br /><sub>(28)</sub> |Description: An attacker could retrieve sensitive information from the home folders of other users. |Set home folder permissions to 750 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-home-dir-permissions' |
+|The default umask for all users should be set to 077 in login.defs<br /><sub>(29)</sub> |Description: An attacker could retrieve sensitive information from files owned by other users. |Run the command '/opt/microsoft/omsagent/plugin/omsremediate -r set-default-user-umask'. This will add the line 'UMASK 077' to the file '/etc/login.defs' |
+|All bootloaders should have password protection enabled.<br /><sub>(31)</sub> |Description: An attacker with physical access could modify bootloader options, yielding unrestricted system access |Add a boot loader password to the file '/boot/grub/grub.cfg' |
+|Ensure permissions on bootloader config are configured<br /><sub>(31.1)</sub> |Description: Setting the permissions to read and write for root only prevents non-root users from seeing the boot parameters or changing them. Non-root users who read the boot parameters may be able to identify weaknesses in security upon boot and be able to exploit them. |Set the owner and group of your bootloader to root:root and permissions to 0400 or run '/opt/microsoft/omsagent/plugin/omsremediate -r bootloader-permissions' |
+|Ensure authentication required for single user mode.<br /><sub>(33)</sub> |Description: Requiring authentication in single user mode prevents an unauthorized user from rebooting the system into single user mode to gain root privileges without credentials. |Run the following command to set a password for the root user: `passwd root` |
+|Ensure packet redirect sending is disabled.<br /><sub>(38.3)</sub> |Description: An attacker could use a compromised host to send invalid ICMP redirects to other router devices in an attempt to corrupt routing and have users access a system set up by the attacker as opposed to a valid system. |Set the following parameters in /etc/sysctl.conf: 'net.ipv4.conf.all.send_redirects = 0' and 'net.ipv4.conf.default.send_redirects = 0' (see the sysctl sketch after this table) or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-send-redirects' |
+|Accepting ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.accept_redirects = 0)<br /><sub>(38.4)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-redirects'. |
+|Accepting secure ICMP redirects should be disabled for all interfaces. (net.ipv4.conf.default.secure_redirects = 0)<br /><sub>(38.5)</sub> |Description: An attacker could alter this system's routing table, redirecting traffic to an alternate destination |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-secure-redirects' |
+|Accepting source routed packets should be disabled for all interfaces. (net.ipv4.conf.all.accept_source_route = 0)<br /><sub>(40.1)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-source-route' |
+|Accepting source routed packets should be disabled for all interfaces. (net.ipv6.conf.all.accept_source_route = 0)<br /><sub>(40.2)</sub> |Description: An attacker could redirect traffic for malicious purposes. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-accept-source-route'. |
+|Ignoring bogus ICMP responses to broadcasts should be enabled. (net.ipv4.icmp_ignore_bogus_error_responses = 1)<br /><sub>(43)</sub> |Description: An attacker could perform an ICMP attack resulting in DoS |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-icmp-ignore-bogus-error-responses' |
+|Ignoring ICMP echo requests (pings) sent to broadcast / multicast addresses should be enabled. (net.ipv4.icmp_echo_ignore_broadcasts = 1)<br /><sub>(44)</sub> |Description: An attacker could perform an ICMP attack resulting in DoS |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-icmp-echo-ignore-broadcasts' |
+|Logging of martian packets (those with impossible addresses) should be enabled for all interfaces. (net.ipv4.conf.all.log_martians = 1)<br /><sub>(45.1)</sub> |Description: An attacker could send traffic from spoofed addresses without being detected |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-log-martians' |
+|Performing source validation by reverse path should be enabled for all interfaces. (net.ipv4.conf.all.rp_filter = 1)<br /><sub>(46.1)</sub> |Description: The system will accept traffic from addresses that are unroutable. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rp-filter' |
+|Performing source validation by reverse path should be enabled for all interfaces. (net.ipv4.conf.default.rp_filter = 1)<br /><sub>(46.2)</sub> |Description: The system will accept traffic from addresses that are unroutable. |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rp-filter' |
+|TCP syncookies should be enabled. (net.ipv4.tcp_syncookies = 1)<br /><sub>(47)</sub> |Description: An attacker could perform a DoS over TCP |Run `sysctl -w key=value` and set to a compliant value or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-tcp-syncookies' |
+|The system should not act as a network sniffer.<br /><sub>(48)</sub> |Description: An attacker may use promiscuous interfaces to sniff network traffic |Promiscuous mode is enabled via a 'promisc' entry in '/etc/network/interfaces' or '/etc/rc.local'. Check both files and remove this entry. |
+|All wireless interfaces should be disabled.<br /><sub>(49)</sub> |Description: An attacker could create a fake AP to intercept transmissions. |Confirm all wireless interfaces are disabled in '/etc/network/interfaces' |
+|The IPv6 protocol should be enabled.<br /><sub>(50)</sub> |Description: This is necessary for communication on modern networks. |Open /etc/sysctl.conf and confirm that 'net.ipv6.conf.all.disable_ipv6' and 'net.ipv6.conf.default.disable_ipv6' are set to 0 |
+|Ensure DCCP is disabled<br /><sub>(54)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install dccp /bin/true` then unload the dccp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure SCTP is disabled<br /><sub>(55)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install sctp /bin/true` then unload the sctp module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Disable support for RDS.<br /><sub>(56)</sub> |Description: An attacker could use a vulnerability in RDS to compromise the system |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install rds /bin/true` then unload the rds module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure TIPC is disabled<br /><sub>(57)</sub> |Description: If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface. |Edit or create a file in the `/etc/modprobe.d/` directory ending in .conf and add `install tipc /bin/true` then unload the tipc module or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-unnecessary-kernel-mods' |
+|Ensure logging is configured<br /><sub>(60)</sub> |Description: A great deal of important security-related information is sent via `rsyslog` (for example, successful and failed su attempts, failed login attempts, root login attempts, etc.). |Configure syslog, rsyslog or syslog-ng as appropriate |
+|The syslog, rsyslog, or syslog-ng package should be installed.<br /><sub>(61)</sub> |Description: Reliability and security issues will not be logged, preventing proper diagnosis. |Install the rsyslog package, or run '/opt/microsoft/omsagent/plugin/omsremediate -r install-rsyslog' |
+|Ensure a logging service is enabled<br /><sub>(62)</sub> |Description: It is imperative to have the ability to log events on a node. |Enable the rsyslog package or run '/opt/microsoft/omsagent/plugin/omsremediate -r enable-rsyslog' |
+|File permissions for all rsyslog log files should be set to 640 or 600.<br /><sub>(63)</sub> |Description: An attacker could cover up activity by manipulating logs |Add the line '$FileCreateMode 0640' to the file '/etc/rsyslog.conf' |
+|Ensure logger configuration files are restricted.<br /><sub>(63.1)</sub> |Description: It is important to ensure that log files exist and have the correct permissions to ensure that sensitive syslog data is archived and protected. |Set your logger's configuration files to 0640 or run '/opt/microsoft/omsagent/plugin/omsremediate -r logger-config-file-permissions' |
+|All rsyslog log files should be owned by the adm group.<br /><sub>(64)</sub> |Description: An attacker could cover up activity by manipulating logs |Add the line '$FileGroup adm' to the file '/etc/rsyslog.conf' |
+|All rsyslog log files should be owned by the syslog user.<br /><sub>(65)</sub> |Description: An attacker could cover up activity by manipulating logs |Add the line '$FileOwner syslog' to the file '/etc/rsyslog.conf' or run '/opt/microsoft/omsagent/plugin/omsremediate -r syslog-owner' |
+|Rsyslog should not accept remote messages.<br /><sub>(67)</sub> |Description: An attacker could inject messages into syslog, causing a DoS or a distraction from other activity |Remove the lines '$ModLoad imudp' and '$ModLoad imtcp' from the file '/etc/rsyslog.conf' |
+|The logrotate (syslog rotator) service should be enabled.<br /><sub>(68)</sub> |Description: Logfiles could grow unbounded and consume all disk space |Install the logrotate package and confirm the logrotate cron entry is active (chmod 755 /etc/cron.daily/logrotate; chown root:root /etc/cron.daily/logrotate) |
+|The rlogin service should be disabled.<br /><sub>(69)</sub> |Description: An attacker could gain access, bypassing strict authentication requirements |Remove the inetd service. |
+|Disable inetd unless required. (inetd)<br /><sub>(70.1)</sub> |Description: An attacker could exploit a vulnerability in an inetd service to gain access |Uninstall the inetd service (apt-get remove inetd) |
+|Disable xinetd unless required. (xinetd)<br /><sub>(70.2)</sub> |Description: An attacker could exploit a vulnerability in an xinetd service to gain access |Uninstall the xinetd service (apt-get remove xinetd) |
+|Install inetd only if appropriate and required by your distro. Secure according to current hardening standards. (if required)<br /><sub>(71.1)</sub> |Description: An attacker could exploit a vulnerability in an inetd service to gain access |Uninstall the inetd service (apt-get remove inetd) |
+|Install xinetd only if appropriate and required by your distro. Secure according to current hardening standards. (if required)<br /><sub>(71.2)</sub> |Description: An attacker could exploit a vulnerability in an xinetd service to gain access |Uninstall the xinetd service (apt-get remove xinetd) |
+|The telnet service should be disabled.<br /><sub>(72)</sub> |Description: An attacker could eavesdrop or hijack unencrypted telnet sessions |Remove or comment out the telnet entry in the file '/etc/inetd.conf' |
+|All telnetd packages should be uninstalled.<br /><sub>(73)</sub> |Description: An attacker could eavesdrop or hijack unencrypted telnet sessions |Uninstall any telnetd packages |
+|The rcp/rsh service should be disabled.<br /><sub>(74)</sub> |Description: An attacker could eavesdrop or hijack unencrypted sessions |Remove or comment out the shell entry in the file '/etc/inetd.conf' |
+|The rsh-server package should be uninstalled.<br /><sub>(77)</sub> |Description: An attacker could eavesdrop or hijack unencrypted rsh sessions |Uninstall the rsh-server package (apt-get remove rsh-server) |
+|The ypbind service should be disabled.<br /><sub>(78)</sub> |Description: An attacker could retrieve sensitive information from the ypbind service |Uninstall the nis package (apt-get remove nis) |
+|The nis package should be uninstalled.<br /><sub>(79)</sub> |Description: An attacker could retrieve sensitive information from the NIS service |Uninstall the nis package (apt-get remove nis) |
+|The tftp service should be disabled.<br /><sub>(80)</sub> |Description: An attacker could eavesdrop or hijack an unencrypted session |Remove the tftp entry from the file '/etc/inetd.conf' |
+|The tftpd package should be uninstalled.<br /><sub>(81)</sub> |Description: An attacker could eavesdrop or hijack an unencrypted session |Uninstall the tftpd package (apt-get remove tftpd) |
+|The readahead-fedora package should be uninstalled.<br /><sub>(82)</sub> |Description: No substantial exposure, but also no substantial benefit |Uninstall the readahead-fedora package (apt-get remove readahead-fedora) |
+|The bluetooth/hidd service should be disabled.<br /><sub>(84)</sub> |Description: An attacker could intercept or manipulate wireless communications. |Uninstall the bluetooth package (apt-get remove bluetooth) |
+|The isdn service should be disabled.<br /><sub>(86)</sub> |Description: An attacker could use a modem to gain unauthorized access |Uninstall the isdnutils-base package (apt-get remove isdnutils-base) |
+|The isdnutils-base package should be uninstalled.<br /><sub>(87)</sub> |Description: An attacker could use a modem to gain unauthorized access |Uninstall the isdnutils-base package (apt-get remove isdnutils-base) |
+|The kdump service should be disabled.<br /><sub>(88)</sub> |Description: An attacker could analyze a previous system crash to retrieve sensitive information |Uninstall the kdump-tools package (apt-get remove kdump-tools) |
+|Zeroconf networking should be disabled.<br /><sub>(89)</sub> |Description: An attacker could abuse this to gain information on network systems, or spoof DNS requests due to flaws in its trust model |For RedHat, CentOS, and Oracle: Add `NOZEROCONF=yes` to /etc/sysconfig/network. For all other distros: Remove any 'ipv4ll' entries in the file '/etc/network/interfaces' or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-zeroconf' |
+|The crond service should be enabled.<br /><sub>(90)</sub> |Description: Cron is required by almost all systems for regular maintenance tasks |Install the cron package (apt-get install -y cron) and confirm the file '/etc/init/cron.conf' contains the line 'start on runlevel [2345]' |
+|File permissions for /etc/anacrontab should be set to root:root 600.<br /><sub>(91)</sub> |Description: An attacker could manipulate this file to prevent scheduled tasks or execute malicious tasks |Set the ownership and permissions on /etc/anacrontab or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-anacrontab-perms' |
+|Ensure permissions on /etc/cron.d are configured.<br /><sub>(93)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.d to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
+|Ensure permissions on /etc/cron.daily are configured.<br /><sub>(94)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.daily to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
+|Ensure permissions on /etc/cron.hourly are configured.<br /><sub>(95)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.hourly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
+|Ensure permissions on /etc/cron.monthly are configured.<br /><sub>(96)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.monthly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
+|Ensure permissions on /etc/cron.weekly are configured.<br /><sub>(97)</sub> |Description: Granting write access to this directory for non-privileged users could provide them the means for gaining unauthorized elevated privileges. Granting read access to this directory could give an unprivileged user insight in how to gain elevated privileges or circumvent auditing controls. |Set the owner and group of /etc/cron.weekly to root and permissions to 0700 or run '/opt/microsoft/omsagent/plugin/omsremediate -r fix-cron-file-perms' |
+|Ensure at/cron is restricted to authorized users<br /><sub>(98)</sub> |Description: On many systems, only the system administrator is authorized to schedule `cron` jobs. Using the `cron.allow` file to control who can run `cron` jobs enforces this policy. It is easier to manage an allow list than a deny list. In a deny list, you could potentially add a user ID to the system and forget to add it to the deny files. |Replace /etc/cron.deny and /etc/at.deny with their respective `allow` files |
+|Ensure remote login warning banner is configured properly.<br /><sub>(111)</sub> |Description: Warning messages inform users who are attempting to log in to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a` command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue.net file |
+|Ensure local login warning banner is configured properly.<br /><sub>(111.1)</sub> |Description: Warning messages inform users who are attempting to log in to the system of their legal status regarding the system and must include the name of the organization that owns the system and any monitoring policies that are in place. Displaying OS and patch level information in login banners also has the side effect of providing detailed system information to attackers attempting to target specific exploits of a system. Authorized users can easily get this information by running the `uname -a` command once they have logged in. |Remove any instances of \m \r \s and \v from the /etc/issue file |
+|The avahi-daemon service should be disabled.<br /><sub>(114)</sub> |Description: An attacker could use a vulnerability in the avahi daemon to gain access |Disable the avahi-daemon service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-avahi-daemon' |
+|The cups service should be disabled.<br /><sub>(115)</sub> |Description: An attacker could use a flaw in the cups service to elevate privileges |Disable the cups service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-cups' |
+|The isc-dhcpd service should be disabled.<br /><sub>(116)</sub> |Description: An attacker could use dhcpd to provide faulty information to clients, interfering with normal operation. |Remove the isc-dhcp-server package (apt-get remove isc-dhcp-server) |
+|The isc-dhcp-server package should be uninstalled.<br /><sub>(117)</sub> |Description: An attacker could use dhcpd to provide faulty information to clients, interfering with normal operation. |Remove the isc-dhcp-server package (apt-get remove isc-dhcp-server) |
+|The sendmail package should be uninstalled.<br /><sub>(120)</sub> |Description: An attacker could use this system to send emails with malicious content to other users |Uninstall the sendmail package (apt-get remove sendmail) |
+|The postfix package should be uninstalled.<br /><sub>(121)</sub> |Description: An attacker could use this system to send emails with malicious content to other users |Uninstall the postfix package (apt-get remove postfix) or run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-postfix' |
+|Postfix network listening should be disabled as appropriate.<br /><sub>(122)</sub> |Description: An attacker could use this system to send emails with malicious content to other users |Add the line 'inet_interfaces localhost' to the file '/etc/postfix/main.cf' |
+|The ldap service should be disabled.<br /><sub>(124)</sub> |Description: An attacker could manipulate the LDAP service on this host to distribute false data to LDAP clients |Uninstall the slapd package (apt-get remove slapd) |
+|The rpcgssd service should be disabled.<br /><sub>(126)</sub> |Description: An attacker could use a flaw in rpcgssd/nfs to gain access |Disable the rpcgssd service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcgssd' |
+|The rpcidmapd service should be disabled.<br /><sub>(127)</sub> |Description: An attacker could use a flaw in idmapd/nfs to gain access |Disable the rpcidmapd service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcidmapd' |
+|The portmap service should be disabled.<br /><sub>(129)</sub> |Description: An attacker could use a flaw in portmap to gain access |Disable the rpcbind service or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rpcbind' |
+|The rpcsvcgssd service should be disabled.<br /><sub>(130)</sub> |Description: An attacker could use a flaw in rpcsvcgssd to gain access |Remove the line 'NEED_SVCGSSD = yes' from the file '/etc/inetd.conf' |
+|The named service should be disabled.<br /><sub>(131)</sub> |Description: An attacker could use the DNS service to distribute false data to clients |Uninstall the bind9 package (apt-get remove bind9) |
+|The bind package should be uninstalled.<br /><sub>(132)</sub> |Description: An attacker could use the DNS service to distribute false data to clients |Uninstall the bind9 package (apt-get remove bind9) |
+|The dovecot service should be disabled.<br /><sub>(137)</sub> |Description: The system could be used as an IMAP/POP3 server |Uninstall the dovecot-core package (apt-get remove dovecot-core) |
+|The dovecot package should be uninstalled.<br /><sub>(138)</sub> |Description: The system could be used as an IMAP/POP3 server |Uninstall the dovecot-core package (apt-get remove dovecot-core) |
+|Ensure no legacy `+` entries exist in /etc/passwd<br /><sub>(156.1)</sub> |Description: An attacker could gain access by using the username '+' with no password |Remove any entries in /etc/passwd that begin with '+:' |
+|Ensure no legacy `+` entries exist in /etc/shadow<br /><sub>(156.2)</sub> |Description: An attacker could gain access by using the username '+' with no password |Remove any entries in /etc/shadow that begin with '+:' |
+|Ensure no legacy `+` entries exist in /etc/group<br /><sub>(156.3)</sub> |Description: An attacker could gain access by using the username '+' with no password |Remove any entries in /etc/group that begin with '+:' |
+|Ensure password expiration is 365 days or less.<br /><sub>(157.1)</sub> |Description: The window of opportunity for an attacker to leverage compromised credentials or successfully compromise credentials via an online brute force attack is limited by the age of the password. Therefore, reducing the maximum age of a password also reduces an attacker's window of opportunity. |Set the `PASS_MAX_DAYS` parameter to no more than 365 in `/etc/login.defs` or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-max-days' |
+|Ensure password expiration warning days is 7 or more.<br /><sub>(157.2)</sub> |Description: Providing an advance warning that a password will be expiring gives users time to think of a secure password. Users caught unaware may choose a simple password or write it down where it may be discovered. |Set the `PASS_WARN_AGE` parameter to 7 in `/etc/login.defs` or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-warn-age' |
+|Ensure password reuse is limited.<br /><sub>(157.5)</sub> |Description: Forcing users not to reuse their past 5 passwords makes it less likely that an attacker will be able to guess the password. |Ensure the 'remember' option is set to at least 5 in either /etc/pam.d/common-password or both /etc/pam.d/password_auth and /etc/pam.d/system_auth or run '/opt/microsoft/omsagent/plugin/omsremediate -r configure-password-policy-history' |
+|Ensure password hashing algorithm is SHA-512<br /><sub>(157.11)</sub> |Description: The SHA-512 algorithm provides much stronger hashing than MD5, thus providing additional protection to the system by increasing the level of effort for an attacker to successfully determine passwords. Note: These changes only apply to accounts configured on the local system. |Set the password hashing algorithm to sha512. Many distributions provide tools for updating PAM configuration; consult your documentation for details. If no tooling is provided, edit the appropriate `/etc/pam.d/` configuration file and add or modify the `pam_unix.so` lines to include the sha512 option: `password sufficient pam_unix.so sha512` |
+|Ensure minimum days between password changes is 7 or more.<br /><sub>(157.12)</sub> |Description: By restricting the frequency of password changes, an administrator can prevent users from repeatedly changing their password in an attempt to circumvent password reuse controls. |Set the `PASS_MIN_DAYS` parameter to 7 in `/etc/login.defs`: `PASS_MIN_DAYS 7`. Modify user parameters for all users with a password set to match: `chage --mindays 7` or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-pass-min-days' |
+|Ensure all users' last password change date is in the past<br /><sub>(157.14)</sub> |Description: If a user's recorded password change date is in the future, they could bypass any set password expiration. |Ensure the inactive password lock is 30 days or less. Run the following command to set the default password inactivity period to 30 days: `useradd -D -f 30`. Modify user parameters for all users with a password set to match: `chage --inactive 30` |
+|Ensure system accounts are non-login<br /><sub>(157.15)</sub> |Description: It is important to make sure that accounts that are not being used by regular users are prevented from being used to provide an interactive shell. By default, Ubuntu sets the password field for these accounts to an invalid string, but it is also recommended that the shell field in the password file be set to `/usr/sbin/nologin`. This prevents the account from potentially being used to run any commands. |Set the shell for any accounts returned by the audit script to `/usr/sbin/nologin` (or `/sbin/nologin`, depending on your distribution) |
+|Ensure default group for the root account is GID 0<br /><sub>(157.16)</sub> |Description: Using GID 0 for the `root` account helps prevent `root`-owned files from accidentally becoming accessible to non-privileged users. |Run the following command to set the `root` user default group to GID `0`: `usermod -g 0 root` |
+|Ensure root is the only UID 0 account<br /><sub>(157.18)</sub> |Description: This access must be limited to only the default `root` account and only from the system console. Administrative access must be through an unprivileged account using an approved mechanism. |Remove any users other than `root` with UID `0` or assign them a new UID if appropriate. |
+|Remove unnecessary packages<br /><sub>(158)</sub> |Description: Packages that are not needed increase the system's attack surface. |Run '/opt/microsoft/omsagent/plugin/omsremediate -r remove-landscape-common' |
+|Remove unnecessary accounts<br /><sub>(159)</sub> |Description: Accounts that are no longer needed can be leveraged by an attacker and should be removed. |Remove the unnecessary accounts |
+|Ensure auditd service is enabled<br /><sub>(162)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Install the audit package and enable the auditd service (systemctl enable auditd) |
+|Run AuditD service<br /><sub>(163)</sub> |Description: The capturing of system events provides system administrators with information to allow them to determine if unauthorized access to their system is occurring. |Run AuditD service (systemctl start auditd) |
+|Ensure SNMP Server is not enabled<br /><sub>(179)</sub> |Description: The SNMP server can communicate using SNMP v1, which transmits data in the clear and does not require authentication to execute commands. Unless absolutely necessary, it is recommended that the SNMP service not be used. If SNMP is required, the server should be configured to disallow SNMP v1. |Run one of the following commands to disable `snmpd`: `chkconfig snmpd off`, `systemctl disable snmpd`, or `update-rc.d snmpd disable` |
+|Ensure rsync service is not enabled<br /><sub>(181)</sub> |Description: The `rsyncd` service presents a security risk as it uses unencrypted protocols for communication. |Run one of the following commands to disable `rsyncd` : `chkconfig rsyncd off`, `systemctl disable rsyncd`, `update-rc.d rsyncd disable` or run '/opt/microsoft/omsagent/plugin/omsremediate -r disable-rsysnc' |
+|Ensure NIS server is not enabled<br /><sub>(182)</sub> |Description: The NIS service is an inherently insecure system that is vulnerable to DoS attacks and buffer overflows, and has poor authentication for querying NIS maps. NIS has generally been replaced by protocols such as the Lightweight Directory Access Protocol (LDAP). It is recommended that the service be disabled and other, more secure services be used. |Run one of the following commands to disable `ypserv`: `chkconfig ypserv off`, `systemctl disable ypserv`, or `update-rc.d ypserv disable` |
+|Ensure rsh client is not installed<br /><sub>(183)</sub> |Description: These legacy clients contain numerous security exposures and have been replaced with the more secure SSH package. Even if the server is removed, it is best to ensure the clients are also removed to prevent users from inadvertently attempting to use these commands and therefore exposing their credentials. Note that removing the `rsh` package removes the clients for `rsh`, `rcp`, and `rlogin`. |Uninstall `rsh` using the appropriate package manager: `yum remove rsh`, `apt-get remove rsh`, or `zypper remove rsh` |
+|Disable SMB V1 with Samba<br /><sub>(185)</sub> |Description: SMB v1 has well-known, serious vulnerabilities and does not encrypt data in transit. If it must be used for overriding business reasons, it is strongly recommended that other mitigations be identified to compensate for the use of this protocol. |If Samba is not running, remove the package; otherwise, ensure the [global] section of /etc/samba/smb.conf contains the line 'min protocol = SMB2', or run '/opt/microsoft/omsagent/plugin/omsremediate -r set-smb-min-version' |
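+
+The `sysctl` remediations in the preceding table all follow the same pattern: write the compliant value into the running kernel, then persist it so it survives a reboot. The following Bash sketch illustrates that pattern only; it is not the exact procedure `omsremediate` performs, and the key, value, and drop-in file name are illustrative.
+
+```bash
+#!/usr/bin/env bash
+# Minimal sketch: apply and persist one sysctl value from the table above (run as root).
+set -euo pipefail
+
+KEY="net.ipv4.tcp_syncookies"   # control (47): TCP syncookies should be enabled
+VALUE="1"
+
+# Apply immediately for the running kernel.
+sysctl -w "${KEY}=${VALUE}"
+
+# Persist across reboots; files in /etc/sysctl.d are read at boot. The file name is illustrative.
+echo "${KEY} = ${VALUE}" > /etc/sysctl.d/60-azure-baseline.conf
+
+# Reload persisted settings to confirm the file parses cleanly.
+sysctl --system
+```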
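+
+The kernel-module controls (6.3 through 6.5 and 54 through 57) are remediated by making any attempt to load the module a no-op. A minimal sketch of the manual approach, using `dccp` as the example module; the configuration file name is illustrative:
+
+```bash
+#!/usr/bin/env bash
+# Minimal sketch: prevent an unneeded kernel module from loading (run as root).
+set -euo pipefail
+
+MODULE="dccp"   # substitute sctp, rds, tipc, hfs, hfsplus, or jffs2 as required
+
+# 'install <module> /bin/true' causes any load attempt to succeed without loading the module.
+echo "install ${MODULE} /bin/true" > "/etc/modprobe.d/${MODULE}.conf"
+
+# Unload the module if it is currently loaded; ignore the error if it is not.
+modprobe -r "${MODULE}" 2>/dev/null || true
+```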
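+
+The file-permission controls (11.1 through 12.4) reduce to `chown` and `chmod` on the account databases and their backup copies. A sketch of the manual fix, under the assumption that the backup files may not exist on every system:
+
+```bash
+#!/usr/bin/env bash
+# Minimal sketch of the manual fix for controls 11.1-12.4 (run as root).
+set -euo pipefail
+
+# Hashed credentials (and their backups): readable by root only, per controls 11.1-11.4.
+for f in /etc/shadow /etc/shadow- /etc/gshadow /etc/gshadow-; do
+  if [ -e "$f" ]; then
+    chown root:root "$f"
+    chmod 0400 "$f"
+  fi
+done
+
+# Account metadata stays world-readable, per controls 12.1 and 12.2.
+chown root:root /etc/passwd /etc/group
+chmod 0644 /etc/passwd /etc/group
+
+# Backup copies: control 12.3 requires 0600 for passwd-, control 12.4 allows 0644 for group-.
+if [ -e /etc/passwd- ]; then chown root:root /etc/passwd-; chmod 0600 /etc/passwd-; fi
+if [ -e /etc/group-  ]; then chown root:root /etc/group-;  chmod 0644 /etc/group-;  fi
+```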
+
+> [!NOTE]
+> Availability of specific Azure Policy Guest Configuration settings may vary in Azure Government
+> and other national clouds.
+
+## Next steps
+
+Additional articles about Azure Policy and Guest Configuration:
+
+- [Azure Policy Guest Configuration](../concepts/guest-configuration.md).
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Guest Configuration Baseline Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/guest-configuration-baseline-windows.md
+
+ Title: Reference - Azure Policy Guest Configuration baseline for Windows
+description: Details of the Windows baseline on Azure implemented through Azure Policy Guest Configuration.
Last updated : 03/11/2021
+# Azure Policy Guest Configuration baseline for Windows
+
+The following article details what the **\[Preview\] Windows machines should meet requirements for
+the Azure security baseline** Guest Configuration policy initiative audits. For more information,
+see [Azure Policy Guest Configuration](../concepts/guest-configuration.md) and
+[Overview of the Azure Security Benchmark (V2)](../../../security/benchmarks/overview.md).
+
+## Administrative Templates - Control Panel
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Allow Input Personalization<br /><sub>(AZ-WIN-00168)</sub> |**Description**: This policy enables the automatic learning component of input personalization that includes speech, inking, and typing. Automatic learning enables the collection of speech and handwriting patterns, typing history, contacts, and recent calendar information. It is required for the use of Cortana. Some of this collected information may be stored on the user's OneDrive, in the case of inking and typing; some of the information will be uploaded to Microsoft to personalize speech. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\InputPersonalization\AllowInputPersonalization<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Prevent enabling lock screen camera<br /><sub>(CCE-38347-1)</sub> |**Description**: Disables the lock screen camera toggle switch in PC Settings and prevents a camera from being invoked on the lock screen. By default, users can enable invocation of an available camera on the lock screen. If you enable this setting, users will no longer be able to enable or disable lock screen camera access in PC Settings, and the camera cannot be invoked on the lock screen.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenCamera<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Prevent enabling lock screen slide show<br /><sub>(CCE-38348-9)</sub> |**Description**: Disables the lock screen slide show settings in PC Settings and prevents a slide show from playing on the lock screen. By default, users can enable a slide show that will run after they lock the machine. If you enable this setting, users will no longer be able to modify slide show settings in PC Settings, and no slide show will ever start.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Personalization\NoLockScreenSlideshow<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
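+
+Each row in this article audits a registry value under `HKEY_LOCAL_MACHINE`. The following PowerShell sketch shows how one such check can be reproduced locally, using the lock screen camera setting (CCE-38347-1) as the example; it illustrates the comparison only and is not how the Guest Configuration agent implements it.
+
+```powershell
+# Minimal sketch: reproduce one registry audit from the table above.
+$keyPath  = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Personalization'
+$name     = 'NoLockScreenCamera'
+$expected = 1
+
+# Read the value; this yields $null when the key or value is missing.
+$actual = (Get-ItemProperty -Path $keyPath -Name $name -ErrorAction SilentlyContinue).$name
+
+if ($actual -eq $expected) {
+    Write-Output "Compliant: $name = $actual"
+} else {
+    Write-Output "Non-compliant: $name is '$actual', expected $expected"
+}
+```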
+
+## Administrative Templates - Network
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Enable insecure guest logons<br /><sub>(AZ-WIN-00171)</sub> |**Description**: This policy setting determines if the SMB client will allow insecure guest logons to an SMB server. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation\AllowInsecureGuestAuth<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Minimize the number of simultaneous connections to the Internet or a Windows Domain<br /><sub>(CCE-38338-0)</sub> |**Description**: This policy setting prevents computers from connecting to both a domain based network and a non-domain based network at the same time. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WcmSvc\GroupPolicy\fMinimizeConnections<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Prohibit installation and configuration of Network Bridge on your DNS domain network<br /><sub>(CCE-38002-2)</sub> |**Description**: This policy setting controls the user's ability to install and configure a Network Bridge. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_AllowNetBridge_NLA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Prohibit use of Internet Connection Sharing on your DNS domain network<br /><sub>(AZ-WIN-00172)</sub> |**Description**: Although this "legacy" setting traditionally applied to the use of Internet Connection Sharing (ICS) in Windows 2000, Windows XP & Server 2003, it now also applies to the Mobile Hotspot feature in Windows 10 & Server 2016. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Network Connections\NC_PersonalFirewallConfig<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn off multicast name resolution<br /><sub>(AZ-WIN-00145)</sub> |**Description**: LLMNR is a secondary name resolution protocol. With LLMNR, queries are sent using multicast over a local network link on a single subnet from a client computer to another client computer on the same subnet that also has LLMNR enabled. LLMNR does not require a DNS server or DNS client configuration, and provides name resolution in scenarios in which conventional DNS name resolution is not possible. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\DNSClient\EnableMulticast<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Administrative Templates - System
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Block user from showing account details on sign-in<br /><sub>(AZ-WIN-00138)</sub> |**Description**: This policy prevents the user from showing account details (email address or user name) on the sign-in screen. If you enable this policy setting, the user cannot choose to show account details on the sign-in screen. If you disable or do not configure this policy setting, the user may choose to show account details on the sign-in screen.<br />**Key Path**: Software\Policies\Microsoft\Windows\System\BlockUserFromShowingAccountDetailsOnSignin<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Boot-Start Driver Initialization Policy<br /><sub>(CCE-37912-3)</sub> |**Description**: This policy setting allows you to specify which boot-start drivers are initialized based on a classification determined by an Early Launch Antimalware boot-start driver. The Early Launch Antimalware boot-start driver can return the following classifications for each boot-start driver: - Good: The driver has been signed and has not been tampered with. - Bad: The driver has been identified as malware. It is recommended that you do not allow known bad drivers to be initialized. - Bad, but required for boot: The driver has been identified as malware, but the computer cannot successfully boot without loading this driver. - Unknown: This driver has not been attested to by your malware detection application and has not been classified by the Early Launch Antimalware boot-start driver. If you enable this policy setting you will be able to choose which boot-start drivers to initialize the next time the computer is started. If you disable or do not configure this policy setting, the boot start drivers determined to be Good, Unknown or Bad but Boot Critical are initialized and the initialization of drivers determined to be Bad is skipped. If your malware detection application does not include an Early Launch Antimalware boot-start driver or if your Early Launch Antimalware boot-start driver has been disabled, this setting has no effect and all boot-start drivers are initialized.<br />**Key Path**: SYSTEM\CurrentControlSet\Policies\EarlyLaunch\DriverLoadPolicy<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3<br /><sub>(Registry)</sub> |Warning |
+|Configure Offer Remote Assistance<br /><sub>(CCE-36388-7)</sub> |**Description**: This policy setting allows you to turn on or turn off Offer (Unsolicited) Remote Assistance on this computer. Help desk and support personnel will not be able to proactively offer assistance, although they can still respond to user assistance requests. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowUnsolicited<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure Solicited Remote Assistance<br /><sub>(CCE-37281-3)</sub> |**Description**: This policy setting allows you to turn on or turn off Solicited (Ask for) Remote Assistance on this computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fAllowToGetHelp<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Do not display network selection UI<br /><sub>(CCE-38353-9)</sub> |**Description**: This policy setting allows you to control whether anyone can interact with available networks UI on the logon screen. If you enable this policy setting, the PC's network connectivity state cannot be changed without signing into Windows. If you disable or don't configure this policy setting, any user can disconnect the PC from the network or can connect the PC to other available networks without signing into Windows.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DontDisplayNetworkSelectionUI<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Enable RPC Endpoint Mapper Client Authentication<br /><sub>(CCE-37346-4)</sub> |**Description**: This policy setting controls whether RPC clients authenticate with the Endpoint Mapper Service when the call they are making contains authentication information. The Endpoint Mapper Service on computers running Windows NT4 (all service packs) cannot process authentication information supplied in this manner. If you disable this policy setting, RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Endpoint Mapper Service on Windows NT4 Server. If you enable this policy setting, RPC clients will authenticate to the Endpoint Mapper Service for calls that contain authentication information. Clients making such calls will not be able to communicate with the Windows NT4 Server Endpoint Mapper Service. If you do not configure this policy setting, it remains disabled. RPC clients will not authenticate to the Endpoint Mapper Service, but they will be able to communicate with the Windows NT4 Server Endpoint Mapper Service. Note: This policy will not be applied until the system is rebooted.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Rpc\EnableAuthEpResolution<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Enable Windows NTP Client<br /><sub>(CCE-37843-0)</sub> |**Description**: This policy setting specifies whether the Windows NTP Client is enabled. Enabling the Windows NTP Client allows your computer to synchronize its computer clock with other NTP servers. You might want to disable this service if you decide to use a third-party time provider. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\Enabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Ensure 'Continue experiences on this device' is set to 'Disabled'<br /><sub>(AZ-WIN-00170)</sub> |**Description**: This policy setting determines whether the Windows device is allowed to participate in cross-device experiences (continue experiences). The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableCdp<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Include command line in process creation events<br /><sub>(CCE-36925-6)</sub> |**Description**: This policy setting determines what information is logged in security audit events when a new process has been created. This setting only applies when the Audit Process Creation policy is enabled. If you enable this policy setting the command line information for every process will be logged in plain text in the security event log as part of the Audit Process Creation event 4688, "a new process has been created," on the workstations and servers on which this policy setting is applied. If you disable or do not configure this policy setting, the process's command line information will not be included in Audit Process Creation events. Default: Not configured Note: When this policy setting is enabled, any user with access to read the security events will be able to read the command line arguments for any successfully created process. Command line arguments can contain sensitive or private information such as passwords or user data.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit\ProcessCreationIncludeCmdLine_Enabled<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Turn off app notifications on the lock screen<br /><sub>(CCE-35893-7)</sub> |**Description**: This policy setting allows you to prevent app notifications from appearing on the lock screen. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\DisableLockScreenAppNotifications<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off downloading of print drivers over HTTP<br /><sub>(CCE-36625-2)</sub> |**Description**: This policy setting controls whether the computer can download print driver packages over HTTP. To set up HTTP printing, printer drivers that are not available in the standard operating system installation might need to be downloaded over HTTP. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Printers\DisableWebPnPDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off Internet Connection Wizard if URL connection is referring to Microsoft.com<br /><sub>(CCE-37163-3)</sub> |**Description**: This policy setting specifies whether the Internet Connection Wizard can connect to Microsoft to download a list of Internet Service Providers (ISPs). The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Internet Connection Wizard\ExitOnMSICW<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn on convenience PIN sign-in<br /><sub>(CCE-37528-7)</sub> |**Description**: This policy setting allows you to control whether a domain user can sign in using a convenience PIN. In Windows 10, convenience PIN was replaced with Passport, which has stronger security properties. To configure Passport for domain users, use the policies under Computer configuration\\Administrative Templates\\Windows Components\\Microsoft Passport for Work. **Note:** The user's domain password will be cached in the system vault when using this feature. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\AllowDomainPINLogon<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
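+
+Remediating a non-compliant setting from these tables means creating the policy key if it is absent and writing the expected value. Prefer Group Policy where it is available; the direct registry write below is only a minimal sketch of the underlying mechanism, using the process creation auditing setting (CCE-36925-6) as the example.
+
+```powershell
+# Minimal sketch: write a policy registry value from the table above (run elevated).
+$keyPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit'
+$name    = 'ProcessCreationIncludeCmdLine_Enabled'
+$value   = 1
+
+# Create the key path if it does not exist yet.
+if (-not (Test-Path $keyPath)) {
+    New-Item -Path $keyPath -Force | Out-Null
+}
+
+# Write (or overwrite) the DWORD value the baseline audits.
+Set-ItemProperty -Path $keyPath -Name $name -Value $value -Type DWord
+```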
+
+## Security Options - Accounts
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Accounts: Guest account status<br /><sub>(CCE-37432-2)</sub> |**Description**: This policy setting determines whether the Guest account is enabled or disabled. The Guest account allows unauthenticated network users to gain access to the system. The recommended state for this setting is: `Disabled`. **Note:** This setting will have no impact when applied to the domain controller organizational unit via group policy because domain controllers have no local account database. It can be configured at the domain level via group policy, similar to account lockout and password policy settings.<br />**Key Path**: [System Access]EnableGuestAccount<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
+|Accounts: Limit local account use of blank passwords to console logon only<br /><sub>(CCE-37615-2)</sub> |**Description**: This policy setting determines whether local accounts that are not password protected can be used to log on from locations other than the physical computer console. If you enable this policy setting, local accounts that have blank passwords will not be able to log on to the network from remote client computers. Such accounts will only be able to log on at the keyboard of the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LimitBlankPasswordUse<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - Audit
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings<br /><sub>(CCE-37850-5)</sub> |**Description**: This policy setting allows administrators to enable the more precise auditing capabilities present in Windows Vista. The Audit Policy settings available in Windows Server 2003 Active Directory do not yet contain settings for managing the new auditing subcategories. To properly apply the auditing policies prescribed in this baseline, the Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings setting needs to be configured to Enabled.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\SCENoApplyLegacyAuditPolicy<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Audit: Shut down system immediately if unable to log security audits<br /><sub>(CCE-35907-5)</sub> |**Description**: This policy setting determines whether the system shuts down if it is unable to log Security events. It is a requirement for Trusted Computer System Evaluation Criteria (TCSEC)-C2 and Common Criteria certification to prevent auditable events from occurring if the audit system is unable to log them. Microsoft has chosen to meet this requirement by halting the system and displaying a stop message if the auditing system experiences a failure. When this policy setting is enabled, the system will be shut down if a security audit cannot be logged for any reason. If the Audit: Shut down system immediately if unable to log security audits setting is enabled, unplanned system failures can occur. The administrative burden can be significant, especially if you also configure the Retention method for the Security log to Do not overwrite events (clear log manually). This configuration causes a repudiation threat (a backup operator could deny that they backed up or restored data) to become a denial of service (DoS) vulnerability, because a server could be forced to shut down if it is overwhelmed with logon events and other security events that are written to the Security log. Also, because the shutdown is not graceful, it is possible that irreparable damage to the operating system, applications, or data could result. Although the NTFS file system guarantees its integrity when an ungraceful computer shutdown occurs, it cannot guarantee that every data file for every application will still be in a usable form when the computer restarts. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\CrashOnAuditFail<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
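+
+Both audit settings above are compliant when the value either matches the expected data or does not exist at all. The following sketch shows an audit that honors that "Doesn't exist or" semantic, using CCE-35907-5 as the example; it is illustrative only.
+
+```powershell
+# Minimal sketch: audit a setting whose compliant state is "Doesn't exist or = 0".
+$keyPath  = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa'
+$name     = 'CrashOnAuditFail'
+$expected = 0
+
+# Suppress the error when the value is missing; a missing value is compliant here.
+$item = Get-ItemProperty -Path $keyPath -Name $name -ErrorAction SilentlyContinue
+
+if ($null -eq $item -or $item.$name -eq $expected) {
+    Write-Output 'Compliant: value is absent or matches the expected data.'
+} else {
+    Write-Output "Non-compliant: $name = $($item.$name)"
+}
+```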
+
+## Security Options - Devices
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Devices: Allow undock without having to log on<br /><sub>(AZ-WIN-00120)</sub> |**Description**: This policy setting determines whether a portable computer can be undocked if the user does not log on to the system. Enable this policy setting to eliminate a Logon requirement and allow use of an external hardware eject button to undock the computer. If you disable this policy setting, a user must log on and have been assigned the Remove computer from docking station user right to undock the computer.<br />**Key Path**: Software\Microsoft\Windows\CurrentVersion\Policies\System\UndockWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Informational |
+|Devices: Allowed to format and eject removable media<br /><sub>(CCE-37701-0)</sub> |**Description**: This policy setting determines who is allowed to format and eject removable media. You can use this policy setting to prevent unauthorized users from removing data on one computer to access it on another computer on which they have local administrator privileges.<br />**Key Path**: SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AllocateDASD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Devices: Prevent users from installing printer drivers<br /><sub>(CCE-37942-0)</sub> |**Description**: For a computer to print to a shared printer, the driver for that shared printer must be installed on the local computer. This security setting determines who is allowed to install a printer driver as part of connecting to a shared printer. The recommended state for this setting is: `Enabled`. **Note:** This setting does not affect the ability to add a local printer. This setting does not affect Administrators.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Print\Providers\LanMan Print Services\Servers\AddPrinterDrivers<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Security Options - Interactive Logon
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Interactive logon: Do not display last user name<br /><sub>(CCE-36056-0)</sub> |**Description**: This policy setting determines whether the account name of the last user to log on to the client computers in your organization will be displayed in each computer's respective Windows logon screen. Enable this policy setting to prevent intruders from collecting account names visually from the screens of desktop or laptop computers in your organization. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DontDisplayLastUserName<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Interactive logon: Do not require CTRL+ALT+DEL<br /><sub>(CCE-37637-6)</sub> |**Description**: This policy setting determines whether users must press CTRL+ALT+DEL before they log on. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableCAD<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - Microsoft Network Client
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Microsoft network client: Digitally sign communications (always)<br /><sub>(CCE-36325-9)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB client component. **Note:** When Windows Vista-based computers have this policy setting enabled and they connect to file or print shares on remote servers, it is important that the setting is synchronized with its companion setting, **Microsoft network server: Digitally sign communications (always)**, on those servers. For more information about these settings, see the "Microsoft network client and server: Digitally sign communications (four related settings)" section in Chapter 5 of the Threats and Countermeasures guide. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network client: Digitally sign communications (if server agrees)<br /><sub>(CCE-36269-9)</sub> |**Description**: This policy setting determines whether the SMB client will attempt to negotiate SMB packet signing. **Note:** Enabling this policy setting on SMB clients on your network makes them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network client: Send unencrypted password to third-party SMB servers<br /><sub>(CCE-37863-8)</sub> |**Description**: This policy setting determines whether the SMB redirector will send plaintext passwords during authentication to third-party SMB servers that do not support password encryption. It is recommended that you disable this policy setting unless there is a strong business case to enable it. If this policy setting is enabled, unencrypted passwords will be allowed across the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\EnablePlainTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Amount of idle time required before suspending session<br /><sub>(CCE-38046-9)</sub> |**Description**: This policy setting allows you to specify the amount of continuous idle time that must pass in an SMB session before the session is suspended because of inactivity. Administrators can use this policy setting to control when a computer suspends an inactive SMB session. If client activity resumes, the session is automatically reestablished. A value of 0 appears to allow sessions to persist indefinitely. The maximum value is 99999, which is over 69 days; in effect, this value disables the setting. The recommended state for this setting is: `15 or fewer minute(s), but not 0`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\AutoDisconnect<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-15<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Digitally sign communications (always)<br /><sub>(CCE-37864-6)</sub> |**Description**: This policy setting determines whether packet signing is required by the SMB server component. Enable this policy setting in a mixed environment to prevent downstream clients from using the workstation as a network server. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RequireSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Digitally sign communications (if client agrees)<br /><sub>(CCE-35988-5)</sub> |**Description**: This policy setting determines whether the SMB server will negotiate SMB packet signing with clients that request it. If no signing request comes from the client, a connection will be allowed without a signature if the **Microsoft network server: Digitally sign communications (always)** setting is not enabled. **Note:** Enable this policy setting on SMB clients on your network to make them fully effective for packet signing with all clients and servers in your environment. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableSecuritySignature<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Microsoft network server: Disconnect clients when logon hours expire<br /><sub>(CCE-37972-7)</sub> |**Description**: This security setting determines whether to disconnect users who are connected to the local computer outside their user account's valid logon hours. This setting affects the Server Message Block (SMB) component. If you enable this policy setting, you should also enable **Network security: Force logoff when logon hours expire** (Rule 2.3.11.6). If your organization configures logon hours for users, this policy setting is necessary to ensure they are effective. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\EnableForcedLogoff<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - Microsoft Network Server
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Disable SMB v1 server<br /><sub>(AZ-WIN-00175)</sub> |**Description**: Disabling this setting disables server-side processing of the SMBv1 protocol (recommended). Enabling this setting enables server-side processing of the SMBv1 protocol (the default). Changes to this setting require a reboot to take effect. For more information, see https://support.microsoft.com/kb/2696547<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\SMB1<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - Network Access
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Network access: Do not allow anonymous enumeration of SAM accounts<br /><sub>(CCE-36316-8)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate the accounts in the Security Accounts Manager (SAM). If you enable this policy setting, users with anonymous connections will not be able to enumerate domain account user names on the systems in your environment. This policy setting also allows additional restrictions on anonymous connections. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymousSAM<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Do not allow anonymous enumeration of SAM accounts and shares<br /><sub>(CCE-36077-6)</sub> |**Description**: This policy setting controls the ability of anonymous users to enumerate SAM accounts as well as shares. If you enable this policy setting, anonymous users will not be able to enumerate domain account user names and network share names on the systems in your environment. The recommended state for this setting is: `Enabled`. **Note:** This policy has no effect on domain controllers.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Let Everyone permissions apply to anonymous users<br /><sub>(CCE-36148-5)</sub> |**Description**: This policy setting determines what additional permissions are assigned for anonymous connections to the computer. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\EveryoneIncludesAnonymous<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Remotely accessible registry paths<br /><sub>(CCE-37194-8)</sub> |**Description**: This policy setting determines which registry paths will be accessible after referencing the WinReg key to determine access permissions to the paths. Note: This setting does not exist in Windows XP. There was a setting with that name in Windows XP, but it is called "Network access: Remotely accessible registry paths and subpaths" in Windows Server 2003, Windows Vista, and Windows Server 2008. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object in the list, press Enter, type the next object, press Enter again, and so on. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedExactPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\ProductOptions\0System\CurrentControlSet\Control\Server Applications\0Software\Microsoft\Windows NT\CurrentVersion\0\0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Remotely accessible registry paths and sub-paths<br /><sub>(CCE-36347-3)</sub> |**Description**: This policy setting determines which registry paths and sub-paths will be accessible when an application or process references the WinReg key to determine access permissions. Note: In Windows XP this setting is called "Network access: Remotely accessible registry paths"; the setting with that same name in Windows Vista, Windows Server 2008, and Windows Server 2003 does not exist in Windows XP. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object in the list, press Enter, type the next object, press Enter again, and so on. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedPaths\Machine<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= System\CurrentControlSet\Control\Print\Printers\0System\CurrentControlSet\Services\Eventlog\0Software\Microsoft\OLAP Server\0Software\Microsoft\Windows NT\CurrentVersion\Print\0Software\Microsoft\Windows NT\CurrentVersion\Windows\0System\CurrentControlSet\Control\ContentIndex\0System\CurrentControlSet\Control\Terminal Server\0System\CurrentControlSet\Control\Terminal Server\UserConfig\0System\CurrentControlSet\Control\Terminal Server\DefaultUserConfiguration\0Software\Microsoft\Windows NT\CurrentVersion\Perflib\0System\CurrentControlSet\Services\SysmonLog\0\0<br /><sub>(Registry)</sub> |Critical |
+|Network access: Restrict anonymous access to Named Pipes and Shares<br /><sub>(CCE-36021-4)</sub> |**Description**: When enabled, this policy setting restricts anonymous access to only those shares and pipes that are named in the `Network access: Named pipes that can be accessed anonymously` and `Network access: Shares that can be accessed anonymously` settings. This policy setting controls null session access to shares on your computers by adding `RestrictNullSessAccess` with the value `1` in the `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanManServer\Parameters` registry key. This registry value toggles null session shares on or off to control whether the server service restricts unauthenticated clients' access to named resources. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\RestrictNullSessAccess<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network access: Restrict clients allowed to make remote calls to SAM<br /><sub>(AZ-WIN-00142)</sub> |**Description**: This policy setting allows you to restrict remote RPC connections to SAM. If not selected, the default security descriptor will be used. This policy is supported on at least Windows Server 2016.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\RestrictRemoteSAM<br />**OS**: WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= O:BAG:BAD:(A;;RC;;;BA)<br /><sub>(Registry)</sub> |Critical |
+|Network access: Shares that can be accessed anonymously<br /><sub>(CCE-38095-6)</sub> |**Description**: This policy setting determines which network shares can be accessed by anonymous users. The default configuration for this policy setting has little effect because all users have to be authenticated before they can access shared resources on the server. Note: It can be very dangerous to add other shares to this Group Policy setting. Any network user can access any shares that are listed, which could expose or corrupt sensitive data. Note: When you configure this setting, you specify a list of one or more objects. The delimiter used when entering the list is a line feed or carriage return, that is, type the first object in the list, press Enter, type the next object, press Enter again, and so on. The setting value is stored as a comma-delimited list in group policy security templates. It is also rendered as a comma-delimited list in Group Policy Editor's display pane and the Resultant Set of Policy console. It is recorded in the registry as a line-feed delimited list in a REG_MULTI_SZ value.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LanManServer\Parameters\NullSessionShares<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= <br /><sub>(Registry)</sub> |Critical |
+|Network access: Sharing and security model for local accounts<br /><sub>(CCE-37623-6)</sub> |**Description**: This policy setting determines how network logons that use local accounts are authenticated. The Classic option allows precise control over access to resources, including the ability to assign different types of access to different users for the same resource. The Guest only option allows you to treat all users equally. In this context, all users authenticate as Guest only to receive the same access level to a given resource. The recommended state for this setting is: `Classic - local users authenticate as themselves`. **Note:** This setting does not affect interactive logons that are performed remotely by using such services as Telnet or Remote Desktop Services (formerly called Terminal Services).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\ForceGuest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
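+
+Several rows above note that their value is recorded as a line-feed delimited list in a REG_MULTI_SZ value, which the "Expected value" column renders with `\0` separators. A sketch of how such a list could be read and compared, assuming `winreg` on a Windows host (it returns REG_MULTI_SZ as a Python list of strings) and treating the case-insensitive comparison as an illustrative assumption:
+
+```python
+import winreg
+
+# Expected entries for "Network access: Remotely accessible registry paths"
+# (CCE-37194-8), taken from the row above.
+EXPECTED = {
+    r"System\CurrentControlSet\Control\ProductOptions",
+    r"System\CurrentControlSet\Control\Server Applications",
+    r"Software\Microsoft\Windows NT\CurrentVersion",
+}
+
+key_path = r"SYSTEM\CurrentControlSet\Control\SecurePipeServers\Winreg\AllowedExactPaths"
+with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
+    values, value_type = winreg.QueryValueEx(key, "Machine")
+
+assert value_type == winreg.REG_MULTI_SZ
+print({v.lower() for v in values} == {e.lower() for e in EXPECTED})
+```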
+
+## Security Options - Network Security
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Network security: Allow Local System to use computer identity for NTLM<br /><sub>(CCE-38341-4)</sub> |**Description**: When enabled, this policy setting causes Local System services that use Negotiate to use the computer identity when NTLM authentication is selected by the negotiation. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\UseMachineId<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: Allow LocalSystem NULL session fallback<br /><sub>(CCE-37035-3)</sub> |**Description**: This policy setting determines whether NTLM is allowed to fall back to a NULL session when used with LocalSystem. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\AllowNullSessionFallback<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Network Security: Allow PKU2U authentication requests to this computer to use online identities<br /><sub>(CCE-38047-7)</sub> |**Description**: This setting determines whether online identities are able to authenticate to this computer. The Public Key Cryptography Based User-to-User (PKU2U) protocol introduced in Windows 7 and Windows Server 2008 R2 is implemented as a security support provider (SSP). The SSP enables peer-to-peer authentication, particularly through the Windows 7 media and file sharing feature called Homegroup, which permits sharing between computers that are not members of a domain. With PKU2U, a new extension was introduced to the Negotiate authentication package, `Spnego.dll`. In previous versions of Windows, Negotiate decided whether to use Kerberos or NTLM for authentication. The extension SSP for Negotiate, `Negoexts.dll`, which is treated as an authentication protocol by Windows, supports Microsoft SSPs including PKU2U. When computers are configured to accept authentication requests by using online IDs, `Negoexts.dll` calls the PKU2U SSP on the computer that is used to log on. The PKU2U SSP obtains a local certificate and exchanges the policy between the peer computers. When validated on the peer computer, the certificate within the metadata is sent to the logon peer for validation, the user's certificate is associated with a security token, and the logon process completes. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\pku2u\AllowOnlineID<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Network Security: Configure encryption types allowed for Kerberos<br /><sub>(CCE-37755-6)</sub> |**Description**: This policy setting allows you to set the encryption types that Kerberos is allowed to use. This policy is supported on at least Windows 7 or Windows Server 2008 R2.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters\SupportedEncryptionTypes<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 2147483644<br /><sub>(Registry)</sub> |Critical |
+|Network security: Do not store LAN Manager hash value on next password change<br /><sub>(CCE-36326-7)</sub> |**Description**: This policy setting determines whether the LAN Manager (LM) hash value for the new password is stored when the password is changed. The LM hash is relatively weak and prone to attack compared to the cryptographically stronger Microsoft Windows NT hash. Since LM hashes are stored on the local computer in the security database, passwords can then be easily compromised if the database is attacked. **Note:** Older operating systems and some third-party applications may fail when this policy setting is enabled. Also, note that the password will need to be changed on all accounts after you enable this setting to gain the proper benefit. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\NoLMHash<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: LAN Manager authentication level<br /><sub>(CCE-36173-3)</sub> |**Description**: LAN Manager (LM) is a family of early Microsoft client/server software that allows users to link personal computers together on a single network. Network capabilities include transparent file and print sharing, user security features, and network administration tools. In Active Directory domains, the Kerberos protocol is the default authentication protocol. However, if the Kerberos protocol is not negotiated for some reason, Active Directory will use LM, NTLM, or NTLMv2. LAN Manager authentication includes the LM, NTLM, and NTLM version 2 (NTLMv2) variants, and is the protocol that is used to authenticate all Windows clients when they perform the following operations: - Join a domain - Authenticate between Active Directory forests - Authenticate to down-level domains - Authenticate to computers that do not run Windows 2000, Windows Server 2003, or Windows XP - Authenticate to computers that are not in the domain The possible values for the Network security: LAN Manager authentication level setting are: - Send LM & NTLM responses - Send LM & NTLM - use NTLMv2 session security if negotiated - Send NTLM responses only - Send NTLMv2 responses only - Send NTLMv2 responses only\refuse LM - Send NTLMv2 responses only\refuse LM & NTLM - Not Defined The Network security: LAN Manager authentication level setting determines which challenge/response authentication protocol is used for network logons. This choice affects the authentication protocol level that clients use, the session security level that the computers negotiate, and the authentication level that servers accept as follows: - Send LM & NTLM responses. Clients use LM and NTLM authentication and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send LM & NTLM - use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLM response only. Clients use NTLM authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Send NTLMv2 response only\refuse LM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM (accept only NTLM and NTLMv2 authentication). - Send NTLMv2 response only\refuse LM & NTLM. Clients use NTLMv2 authentication only and use NTLMv2 session security if the server supports it. Domain controllers refuse LM and NTLM (accept only NTLMv2 authentication). These settings correspond to the levels discussed in other Microsoft documents as follows: - Level 0 - Send LM and NTLM response; never use NTLMv2 session security. Clients use LM and NTLM authentication, and never use NTLMv2 session security. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 1 - Use NTLMv2 session security if negotiated. Clients use LM and NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 2 - Send NTLM response only. Clients use only NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 3 - Send NTLMv2 response only. Clients use NTLMv2 authentication, and use NTLMv2 session security if the server supports it. Domain controllers accept LM, NTLM, and NTLMv2 authentication. - Level 4 - Domain controllers refuse LM responses. Clients use NTLM authentication, and use NTLMv2 session security if the server supports it. Domain controllers refuse LM authentication, that is, they accept NTLM and NTLMv2. - Level 5 - Domain controllers refuse LM and NTLM responses (accept only NTLMv2). Clients use NTLMv2 authentication, and use NTLMv2 session security if the server supports it. Domain controllers refuse NTLM and LM authentication (they accept only NTLMv2).<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\LmCompatibilityLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 5<br /><sub>(Registry)</sub> |Critical |
+|Network security: LDAP client signing requirements<br /><sub>(CCE-36858-9)</sub> |**Description**: This policy setting determines the level of data signing that is requested on behalf of clients that issue LDAP BIND requests. **Note:** This policy setting does not have any impact on LDAP simple bind (`ldap_simple_bind`) or LDAP simple bind through SSL (`ldap_simple_bind_s`). No Microsoft LDAP clients that are included with Windows XP Professional use ldap_simple_bind or ldap_simple_bind_s to communicate with a domain controller. The recommended state for this setting is: `Negotiate signing`. Configuring this setting to `Require signing` also conforms with the benchmark.<br />**Key Path**: SYSTEM\CurrentControlSet\Services\LDAP\LDAPClientIntegrity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Network security: Minimum session security for NTLM SSP based (including secure RPC) clients<br /><sub>(CCE-37553-5)</sub> |**Description**: This policy setting determines which behaviors are allowed by clients for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinClientSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
+|Network security: Minimum session security for NTLM SSP based (including secure RPC) servers<br /><sub>(CCE-37835-6)</sub> |**Description**: This policy setting determines which behaviors are allowed by servers for applications using the NTLM Security Support Provider (SSP). The SSP Interface (SSPI) is used by applications that need authentication services. The setting does not modify how the authentication sequence works but instead requires certain behaviors in applications that use the SSPI. The recommended state for this setting is: `Require NTLMv2 session security, Require 128-bit encryption`. **Note:** These values are dependent on the _Network security: LAN Manager Authentication Level_ security setting value.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0\NTLMMinServerSec<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 537395200<br /><sub>(Registry)</sub> |Critical |
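+
+The value `537395200` in the last two rows is not arbitrary: it is the bitwise OR of the two documented session-security flags that the recommended state names. A short sketch of the arithmetic (the flag constants follow the documented NTLM SSP values):
+
+```python
+REQUIRE_NTLMV2_SESSION_SECURITY = 0x00080000  # "Require NTLMv2 session security"
+REQUIRE_128_BIT_ENCRYPTION      = 0x20000000  # "Require 128-bit encryption"
+
+required = REQUIRE_NTLMV2_SESSION_SECURITY | REQUIRE_128_BIT_ENCRYPTION
+assert required == 537395200  # the expected registry value above
+
+def meets_minimum(configured: int) -> bool:
+    """True when the configured mask includes both required flags."""
+    return configured & required == required
+
+print(meets_minimum(537395200), meets_minimum(0x20000000))  # True False
+```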
+
+## Security Options - Recovery console
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Recovery console: Allow floppy copy and access to all drives and all folders<br /><sub>(AZ-WIN-00180)</sub> |**Description**: This policy setting makes the Recovery Console SET command available, which allows you to set the following recovery console environment variables: - AllowWildCards. Enables wildcard support for some commands (such as the DEL command). - AllowAllPaths. Allows access to all files and folders on the computer. - AllowRemovableMedia. Allows files to be copied to removable media, such as a floppy disk. - NoCopyPrompt. Does not prompt when overwriting an existing file.<br />**Key Path**: Software\Microsoft\Windows NT\CurrentVersion\Setup\RecoveryConsole\setcommand<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+
+## Security Options - Shutdown
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Shutdown: Allow system to be shut down without having to log on<br /><sub>(CCE-36788-8)</sub> |**Description**: This policy setting determines whether a computer can be shut down when a user is not logged on. If this policy setting is enabled, the shutdown command is available on the Windows logon screen. It is recommended to disable this policy setting to restrict the ability to shut down the computer to users with credentials on the system. The recommended state for this setting is: `Disabled`. **Note:** In Server 2008 R2 and older versions, this setting had no impact on Remote Desktop (RDP) / Terminal Services sessions - it only affected the local console. However, Microsoft changed the behavior in Windows Server 2012 (non-R2) and above, where if set to Enabled, RDP sessions are also allowed to shut down or restart the server.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ShutdownWithoutLogon<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Shutdown: Clear virtual memory pagefile<br /><sub>(AZ-WIN-00181)</sub> |**Description**: This policy setting determines whether the virtual memory pagefile is cleared when the system is shut down. When this policy setting is enabled, the system pagefile is cleared each time that the system shuts down properly. If you enable this security setting, the hibernation file (Hiberfil.sys) is zeroed out when hibernation is disabled on a portable computer system. Shutdown and restart will take longer, and the delay will be especially noticeable on computers with large paging files.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - System objects
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|System objects: Require case insensitivity for non-Windows subsystems<br /><sub>(CCE-37885-1)</sub> |**Description**: This policy setting determines whether case insensitivity is enforced for all subsystems. The Microsoft Win32 subsystem is case insensitive. However, the kernel supports case sensitivity for other subsystems, such as the Portable Operating System Interface for UNIX (POSIX). Because Windows is case insensitive (but the POSIX subsystem will support case sensitivity), failure to enforce this policy setting makes it possible for a user of the POSIX subsystem to create a file with the same name as another file by using mixed case to label it. Such a situation can block access to these files by another user who uses typical Win32 tools, because only one of the files will be available. The recommended state for this setting is: `Enabled`.<br />**Key Path**: System\CurrentControlSet\Control\Session Manager\Kernel\ObCaseInsensitive<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|System objects: Strengthen default permissions of internal system objects (e.g. Symbolic Links)<br /><sub>(CCE-37644-2)</sub> |**Description**: This policy setting determines the strength of the default discretionary access control list (DACL) for objects. Active Directory maintains a global list of shared system resources, such as DOS device names, mutexes, and semaphores. In this way, objects can be located and shared among processes. Each type of object is created with a default DACL that specifies who can access the objects and what permissions are granted. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SYSTEM\CurrentControlSet\Control\Session Manager\ProtectionMode<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+
+## Security Options - System settings
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|System settings: Use Certificate Rules on Windows Executables for Software Restriction Policies<br /><sub>(AZ-WIN-00155)</sub> |**Description**: This policy setting determines whether digital certificates are processed when software restriction policies are enabled and a user or process attempts to run software with an .exe file name extension. It enables or disables certificate rules (a type of software restriction policy rule). With software restriction policies, you can create a certificate rule that will allow or disallow the execution of Authenticode®-signed software, based on the digital certificate that is associated with the software. For certificate rules to take effect in software restriction policies, you must enable this policy setting.<br />**Key Path**: Software\Policies\Microsoft\Windows\Safer\CodeIdentifiers\AuthenticodeEnabled<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
+## Security Options - User Account Control
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|User Account Control: Admin Approval Mode for the Built-in Administrator account<br /><sub>(CCE-36494-3)</sub> |**Description**: This policy setting controls the behavior of Admin Approval Mode for the built-in Administrator account. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Allow UIAccess applications to prompt for elevation without using the secure desktop<br /><sub>(CCE-36863-9)</sub> |**Description**: This policy setting controls whether User Interface Accessibility (UIAccess or UIA) programs can automatically disable the secure desktop for elevation prompts used by a standard user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableUIADesktopToggle<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode<br /><sub>(CCE-37029-6)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for administrators. The recommended state for this setting is: `Prompt for consent on the secure desktop`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorAdmin<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 2<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Behavior of the elevation prompt for standard users<br /><sub>(CCE-36864-7)</sub> |**Description**: This policy setting controls the behavior of the elevation prompt for standard users. The recommended state for this setting is: `Automatically deny elevation requests`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\ConsentPromptBehaviorUser<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Detect application installations and prompt for elevation<br /><sub>(CCE-36533-8)</sub> |**Description**: This policy setting controls the behavior of application installation detection for the computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableInstallerDetection<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Only elevate UIAccess applications that are installed in secure locations<br /><sub>(CCE-37057-7)</sub> |**Description**: This policy setting controls whether applications that request to run with a User Interface Accessibility (UIAccess) integrity level must reside in a secure location in the file system. Secure locations are limited to the following: - `…\Program Files\`, including subfolders - `…\Windows\system32\` - `…\Program Files (x86)\`, including subfolders for 64-bit versions of Windows **Note:** Windows enforces a public key infrastructure (PKI) signature check on any interactive application that requests to run with a UIAccess integrity level regardless of the state of this security setting. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableSecureUIAPaths<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Run all administrators in Admin Approval Mode<br /><sub>(CCE-36869-6)</sub> |**Description**: This policy setting controls the behavior of all User Account Control (UAC) policy settings for the computer. If you change this policy setting, you must restart your computer. The recommended state for this setting is: `Enabled`. **Note:** If this policy setting is disabled, the Security Center notifies you that the overall security of the operating system has been reduced.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableLUA<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Switch to the secure desktop when prompting for elevation<br /><sub>(CCE-36866-2)</sub> |**Description**: This policy setting controls whether the elevation request prompt is displayed on the interactive user's desktop or the secure desktop. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\PromptOnSecureDesktop<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|User Account Control: Virtualize file and registry write failures to per-user locations<br /><sub>(CCE-37064-3)</sub> |**Description**: This policy setting controls whether application write failures are redirected to defined registry and file system locations. This policy setting mitigates applications that run as administrator and write run-time application data to: - `%ProgramFiles%`, - `%Windir%`, - `%Windir%\system32`, or - `HKEY_LOCAL_MACHINE\Software`. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\EnableVirtualization<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
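+
+The UAC rows encode friendly option names as small DWORD values; for example, `ConsentPromptBehaviorAdmin` stores "Prompt for consent on the secure desktop" as `2`, which is the expected value above. The lookup below reflects the documented value-to-option mapping and is included purely as a reference sketch:
+
+```python
+# Documented ConsentPromptBehaviorAdmin values and the UI options they encode.
+CONSENT_PROMPT_BEHAVIOR_ADMIN = {
+    0: "Elevate without prompting",
+    1: "Prompt for credentials on the secure desktop",
+    2: "Prompt for consent on the secure desktop",   # recommended (CCE-37029-6)
+    3: "Prompt for credentials",
+    4: "Prompt for consent",
+    5: "Prompt for consent for non-Windows binaries",
+}
+
+def describe_admin_prompt(value: int) -> str:
+    return CONSENT_PROMPT_BEHAVIOR_ADMIN.get(value, "Unknown value")
+
+print(describe_admin_prompt(2))  # Prompt for consent on the secure desktop
+```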
+
+## Security Settings - Account Policies
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Enforce password history<br /><sub>(CCE-37166-6)</sub> |**Description**: This policy setting determines the number of renewed, unique passwords that have to be associated with a user account before you can reuse an old password. The value for this policy setting must be between 0 and 24 passwords. The default value for Windows Vista is 0 passwords, but the default setting in a domain is 24 passwords. To maintain the effectiveness of this policy setting, use the Minimum password age setting to prevent users from repeatedly changing their password. The recommended state for this setting is: `24 or more password(s)`.<br />**Key Path**: [System Access]PasswordHistorySize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 24<br /><sub>(Policy)</sub> |Critical |
+|Maximum password age<br /><sub>(CCE-37167-4)</sub> |**Description**: This policy setting defines how long a user can use their password before it expires. Values for this policy setting range from 0 to 999 days. If you set the value to 0, the password will never expire. Because attackers can crack passwords, the more frequently you change the password, the less opportunity an attacker has to use a cracked password. However, the lower this value is set, the higher the potential for an increase in calls to help desk support due to users having to change their password or forgetting which password is current. The recommended state for this setting is `60 or fewer days, but not 0`.<br />**Key Path**: [System Access]MaximumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-70<br /><sub>(Policy)</sub> |Critical |
+|Minimum password age<br /><sub>(CCE-37073-4)</sub> |**Description**: This policy setting determines the number of days that you must use a password before you can change it. The range of values for this policy setting is between 1 and 999 days. (You may also set the value to 0 to allow immediate password changes.) The default value for this setting is 0 days. The recommended state for this setting is: `1 or more day(s)`.<br />**Key Path**: [System Access]MinimumPasswordAge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 1<br /><sub>(Policy)</sub> |Critical |
+|Minimum password length<br /><sub>(CCE-36534-6)</sub> |**Description**: This policy setting determines the least number of characters that make up a password for a user account. There are many different theories about how to determine the best password length for an organization, but perhaps "pass phrase" is a better term than "password." In Microsoft Windows 2000 or later, pass phrases can be quite long and can include spaces. Therefore, a phrase such as "I want to drink a $5 milkshake" is a valid pass phrase; it is a considerably stronger password than an 8 or 10 character string of random numbers and letters, and yet is easier to remember. Users must be educated about the proper selection and maintenance of passwords, especially with regard to password length. In enterprise environments, the ideal value for the Minimum password length setting is 14 characters, however you should adjust this value to meet your organization's business requirements. The recommended state for this setting is: `14 or more character(s)`.<br />**Key Path**: [System Access]MinimumPasswordLength<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 14<br /><sub>(Policy)</sub> |Critical |
+|Password must meet complexity requirements<br /><sub>(CCE-37063-5)</sub> |**Description**: This policy setting checks all new passwords to ensure that they meet basic requirements for strong passwords. When this policy is enabled, passwords must meet the following minimum requirements: - Not contain the user's account name or parts of the user's full name that exceed two consecutive characters - Be at least six characters in length - Contain characters from three of the following four categories: - English uppercase characters (A through Z) - English lowercase characters (a through z) - Base 10 digits (0 through 9) - Non-alphabetic characters (for example, !, $, #, %) - A catch-all category of any Unicode character that does not fall under the previous four categories. This fifth category can be regionally specific. Each additional character in a password increases its complexity exponentially. For instance, a seven-character, all lower-case alphabetic password would have 26<sup>7</sup> (approximately 8 x 10<sup>9</sup>, or 8 billion) possible combinations. At 1,000,000 attempts per second (a capability of many password-cracking utilities), it would only take 133 minutes to crack. A seven-character alphabetic password with case sensitivity has 52<sup>7</sup> combinations. A seven-character case-sensitive alphanumeric password without punctuation has 62<sup>7</sup> combinations. An eight-character password has 26<sup>8</sup> (or 2 x 10<sup>11</sup>) possible combinations. Although this might seem to be a large number, at 1,000,000 attempts per second it would take only 59 hours to try all possible passwords. Remember, these times will significantly increase for passwords that use ALT characters and other special keyboard characters such as "!" or "@". Proper use of the password settings can help make it difficult to mount a brute force attack. The recommended state for this setting is: `Enabled`.<br />**Key Path**: [System Access]PasswordComplexity<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= true<br /><sub>(Policy)</sub> |Critical |
+|Store passwords using reversible encryption<br /><sub>(CCE-36286-3)</sub> |**Description**: This policy setting determines whether the operating system stores passwords in a way that uses reversible encryption, which provides support for application protocols that require knowledge of the user's password for authentication purposes. Passwords that are stored with reversible encryption are essentially the same as plaintext versions of the passwords. The recommended state for this setting is: `Disabled`.<br />**Key Path**: [System Access]ClearTextPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Policy)</sub> |Critical |
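+
+Unlike the registry-backed rows, these Account Policies rows are evaluated against `[System Access]` policy entries rather than registry values. On a Windows host they can be inspected by exporting the local security policy with `secedit /export` and parsing the INF-style output; the sketch below assumes an elevated prompt and that the export file is UTF-16 encoded, which is secedit's default:
+
+```python
+import configparser
+import subprocess
+import tempfile
+
+def system_access_policy() -> dict:
+    """Export local security policy and return its [System Access] section."""
+    with tempfile.NamedTemporaryFile(suffix=".inf", delete=False) as tmp:
+        path = tmp.name
+    subprocess.run(["secedit", "/export", "/cfg", path], check=True)
+    parser = configparser.ConfigParser()
+    parser.read(path, encoding="utf-16")
+    return dict(parser["System Access"])  # keys are lowercased by configparser
+
+policy = system_access_policy()
+# "Enforce password history" (CCE-37166-6): expected >= 24.
+print(int(policy["passwordhistorysize"]) >= 24)
+```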
+
+## System Audit Policies - Account Logon
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Credential Validation<br /><sub>(CCE-37741-6)</sub> |**Description**: This subcategory reports the results of validation tests on credentials submitted for a user account logon request. These events occur on the computer that is authoritative for the credentials. For domain accounts, the domain controller is authoritative, whereas for local accounts, the local computer is authoritative. In domain environments, most of the Account Logon events occur in the Security log of the domain controllers that are authoritative for the domain accounts. However, these events can occur on other computers in the organization when local accounts are used to log on. Events for this subcategory include: - 4774: An account was mapped for logon. - 4775: An account could not be mapped for logon. - 4776: The domain controller attempted to validate the credentials for an account. - 4777: The domain controller failed to validate the credentials for an account. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE923F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
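+
+The audit rows are keyed by subcategory GUID rather than by a registry path. The effective setting can be read with `auditpol /get`, which accepts a subcategory name or GUID; `/r` switches the output to CSV, which includes an `Inclusion Setting` column on current Windows releases. The parsing details below are assumptions based on that CSV format:
+
+```python
+import csv
+import io
+import subprocess
+
+def audit_inclusion_setting(guid: str) -> str:
+    """Return the 'Inclusion Setting' reported for one audit subcategory."""
+    out = subprocess.run(
+        ["auditpol", "/get", "/subcategory:" + guid, "/r"],
+        check=True, capture_output=True, text=True,
+    ).stdout
+    rows = list(csv.DictReader(io.StringIO(out.strip())))
+    return rows[0]["Inclusion Setting"]
+
+# "Audit Credential Validation" (CCE-37741-6): expected "Success and Failure".
+guid = "{0CCE923F-69AE-11D9-BED3-505054503030}"
+print(audit_inclusion_setting(guid) == "Success and Failure")
+```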
+
+## System Audit Policies - Account Management
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Other Account Management Events<br /><sub>(CCE-37855-4)</sub> |**Description**: This subcategory reports other account management events. Events for this subcategory include: - 4782: The password hash of an account was accessed. - 4793: The Password Policy Checking API was called. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE923A-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Security Group Management<br /><sub>(CCE-38034-5)</sub> |**Description**: This subcategory reports each event of security group management, such as when a security group is created, changed, or deleted or when a member is added to or removed from a security group. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of security group accounts. Events for this subcategory include: - 4727: A security-enabled global group was created. - 4728: A member was added to a security-enabled global group. - 4729: A member was removed from a security-enabled global group. - 4730: A security-enabled global group was deleted. - 4731: A security-enabled local group was created. - 4732: A member was added to a security-enabled local group. - 4733: A member was removed from a security-enabled local group. - 4734: A security-enabled local group was deleted. - 4735: A security-enabled local group was changed. - 4737: A security-enabled global group was changed. - 4754: A security-enabled universal group was created. - 4755: A security-enabled universal group was changed. - 4756: A member was added to a security-enabled universal group. - 4757: A member was removed from a security-enabled universal group. - 4758: A security-enabled universal group was deleted. - 4764: A group's type was changed. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9237-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit User Account Management<br /><sub>(CCE-37856-2)</sub> |**Description**: This subcategory reports each event of user account management, such as when a user account is created, changed, or deleted; a user account is renamed, disabled, or enabled; or a password is set or changed. If you enable this Audit policy setting, administrators can track events to detect malicious, accidental, and authorized creation of user accounts. Events for this subcategory include: - 4720: A user account was created. - 4722: A user account was enabled. - 4723: An attempt was made to change an account's password. - 4724: An attempt was made to reset an account's password. - 4725: A user account was disabled. - 4726: A user account was deleted. - 4738: A user account was changed. - 4740: A user account was locked out. - 4765: SID History was added to an account. - 4766: An attempt to add SID History to an account failed. - 4767: A user account was unlocked. - 4780: The ACL was set on accounts which are members of administrators groups. - 4781: The name of an account was changed. - 4794: An attempt was made to set the Directory Services Restore Mode. - 5376: Credential Manager credentials were backed up. - 5377: Credential Manager credentials were restored from a backup. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9235-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+
+## System Audit Policies - Detailed Tracking
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit PNP Activity<br /><sub>(AZ-WIN-00182)</sub> |**Description**: This policy setting allows you to audit when plug and play detects an external device. The recommended state for this setting is: `Success`. **Note:** A Windows 10, Server 2016 or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9248-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Process Creation<br /><sub>(CCE-36059-4)</sub> |**Description**: This subcategory reports the creation of a process and the name of the program or user that created it. Events for this subcategory include: - 4688: A new process has been created. - 4696: A primary token was assigned to a process. Refer to Microsoft Knowledge Base article 947226: [Description of security events in Windows Vista and in Windows Server 2008](https://support.microsoft.com/en-us/kb/947226) for the most recent information about this setting. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE922B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
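+
+Remediation uses the same tool with `/set`. A hedged sketch that turns on Success auditing for Process Creation so that event 4688 starts flowing (again assuming an elevated administrator context):
+
+```python
+import subprocess
+
+# Enable Success auditing for Process Creation so event 4688 is logged;
+# changing audit policy requires an elevated (administrator) context.
+subprocess.run(
+    ["auditpol", "/set", "/subcategory:Process Creation", "/success:enable"],
+    check=True,
+)
+```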
+
+## System Audit Policies - Logon-Logoff
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Account Lockout<br /><sub>(CCE-37133-6)</sub> |**Description**: This subcategory reports when a user's account is locked out as a result of too many failed logon attempts. Events for this subcategory include: - 4625: An account failed to log on. Refer to the Microsoft Knowledge Base article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9217-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Group Membership<br /><sub>(AZ-WIN-00026)</sub> |**Description**: Audit Group Membership enables you to audit group memberships when they are enumerated on the client computer. This policy allows you to audit the group membership information in the user's logon token. Events in this subcategory are generated on the computer on which a logon session is created. For an interactive logon, the security audit event is generated on the computer that the user logged on to. For a network logon, such as accessing a shared folder on the network, the security audit event is generated on the computer hosting the resource. You must also enable the Audit Logon subcategory. Multiple events are generated if the group membership information cannot fit in a single security audit event. The events that are audited include the following: - 4627(S): Group membership information.<br />**Key Path**: {0CCE9249-69AE-11D9-BED3-505054503030}<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Logoff<br /><sub>(CCE-38237-4)</sub> |**Description**: This subcategory reports when a user logs off from the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4634: An account was logged off. - 4647: User initiated logoff. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE9216-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Logon<br /><sub>(CCE-38036-0)</sub> |**Description**: This subcategory reports when a user attempts to log on to the system. These events occur on the accessed computer. For interactive logons, the generation of these events occurs on the computer that is logged on to. If a network logon takes place to access a share, these events generate on the computer that hosts the accessed resource. If you configure this setting to No auditing, it is difficult or impossible to determine which user has accessed or attempted to access organization computers. Events for this subcategory include: - 4624: An account was successfully logged on. - 4625: An account failed to log on. - 4648: A logon was attempted using explicit credentials. - 4675: SIDs were filtered. The recommended state for this setting is: `Success and Failure`.<br />**Key Path**: {0CCE9215-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Other Logon/Logoff Events<br /><sub>(CCE-36322-6)</sub> |**Description**: This subcategory reports other logon/logoff-related events, such as Terminal Services session disconnects and reconnects, using RunAs to run processes under a different account, and locking and unlocking a workstation. Events for this subcategory include: - 4649: A replay attack was detected. - 4778: A session was reconnected to a Window Station. - 4779: A session was disconnected from a Window Station. - 4800: The workstation was locked. - 4801: The workstation was unlocked. - 4802: The screen saver was invoked. - 4803: The screen saver was dismissed. - 5378: The requested credentials delegation was disallowed by policy. - 5632: A request was made to authenticate to a wireless network. - 5633: A request was made to authenticate to a wired network. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE921C-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Special Logon<br /><sub>(CCE-36266-5)</sub> |**Description**: This subcategory reports when a special logon is used. A special logon is a logon that has administrator-equivalent privileges and can be used to elevate a process to a higher level. Events for this subcategory include: - 4964: Special groups have been assigned to a new logon. The recommended state for this setting is: `Success`.<br />**Key Path**: {0CCE921B-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
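+
+Checking this many subcategories one at a time is tedious; `auditpol` can emit a whole category as CSV with `/r`. A sketch under the assumption that the report carries `Subcategory` and `Inclusion Setting` columns, as observed output suggests (the column names are not a documented contract):
+
+```python
+import csv
+import subprocess
+
+# Dump the whole Logon/Logoff category as a CSV report and print each
+# subcategory's effective setting (e.g. "Success and Failure").
+out = subprocess.run(
+    ["auditpol", "/get", "/category:Logon/Logoff", "/r"],
+    capture_output=True, text=True, check=True,
+).stdout
+rows = [line for line in out.splitlines() if line.strip()]  # drop blank lines
+for row in csv.DictReader(rows):
+    print(row["Subcategory"], "->", row["Inclusion Setting"])
+```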
+
+## System Audit Policies - Object Access
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Other Object Access Events<br /><sub>(AZ-WIN-00113)</sub> |**Description**: This subcategory reports other object access-related events such as Task Scheduler jobs and COM+ objects. Events for this subcategory include: - 4671: An application attempted to access a blocked ordinal through the TBS. - 4691: Indirect access to an object was requested. - 4698: A scheduled task was created. - 4699: A scheduled task was deleted. - 4700: A scheduled task was enabled. - 4701: A scheduled task was disabled. - 4702: A scheduled task was updated. - 5888: An object in the COM+ Catalog was modified. - 5889: An object was deleted from the COM+ Catalog. - 5890: An object was added to the COM+ Catalog. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9227-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Removable Storage<br /><sub>(CCE-37617-8)</sub> |**Description**: This policy setting allows you to audit user attempts to access file system objects on a removable storage device. A security audit event is generated for all objects and for all types of access requested. If you configure this policy setting, an audit event is generated each time an account accesses a file system object on removable storage. Success audits record successful attempts and Failure audits record unsuccessful attempts. If you do not configure this policy setting, no audit event is generated when an account accesses a file system object on removable storage. The recommended state for this setting is: `Success and Failure`. **Note:** A Windows 8, Server 2012 (non-R2) or higher OS is required to access and set this value in Group Policy.<br />**Key Path**: {0CCE9245-69AE-11D9-BED3-505054503030}<br />**OS**: WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
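+
+Note how the **Expected value** column in these audit tables uses two comparators: `= Success and Failure` requires an exact match, while `>= Success` is met by either `Success` or `Success and Failure`. One way to encode that reading (our interpretation of the comparators, not an official definition) is to treat a setting as a set of flags:
+
+```python
+def flags(setting: str) -> set[str]:
+    # "Success and Failure" -> {"Success", "Failure"}; "No Auditing" -> empty set
+    s = setting.strip()
+    return set() if s == "No Auditing" else set(s.split(" and "))
+
+def meets(actual: str, expected: str, at_least: bool) -> bool:
+    # at_least=True models ">=" rows; False models "=" rows
+    return flags(actual) >= flags(expected) if at_least else flags(actual) == flags(expected)
+
+assert meets("Success and Failure", "Success", at_least=True)       # ">=" satisfied
+assert not meets("Failure", "Success", at_least=True)               # Failure alone is not enough
+assert not meets("Success", "Success and Failure", at_least=False)  # "=" needs both
+```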
+
+## System Audit Policies - Policy Change
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Authentication Policy Change<br /><sub>(CCE-38327-3)</sub> |**Description**: This subcategory reports changes in authentication policy. Events for this subcategory include: - 4706: A new trust was created to a domain. - 4707: A trust to a domain was removed. - 4713: Kerberos policy was changed. - 4716: Trusted domain information was modified. - 4717: System security access was granted to an account. - 4718: System security access was removed from an account. - 4739: Domain Policy was changed. - 4864: A namespace collision was detected. - 4865: A trusted forest information entry was added. - 4866: A trusted forest information entry was removed. - 4867: A trusted forest information entry was modified. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9230-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit MPSSVC Rule-Level Policy Change<br /><sub>(AZ-WIN-00111)</sub> |**Description**: This subcategory reports changes in policy rules used by the Microsoft Protection Service (MPSSVC.exe). This service is used by Windows Firewall and by Microsoft OneCare. Events for this subcategory include: - 4944: The following policy was active when the Windows Firewall started. - 4945: A rule was listed when the Windows Firewall started. - 4946: A change has been made to Windows Firewall exception list. A rule was added. - 4947: A change has been made to Windows Firewall exception list. A rule was modified. - 4948: A change has been made to Windows Firewall exception list. A rule was deleted. - 4949: Windows Firewall settings were restored to the default values. - 4950: A Windows Firewall setting has changed. - 4951: A rule has been ignored because its major version number was not recognized by Windows Firewall. - 4952: Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall. The other parts of the rule will be enforced. - 4953: A rule has been ignored by Windows Firewall because it could not parse the rule. - 4954: Windows Firewall Group Policy settings have changed. The new settings have been applied. - 4956: Windows Firewall has changed the active profile. - 4957: Windows Firewall did not apply the following rule: - 4958: Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer: Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9232-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+|Audit Policy Change<br /><sub>(CCE-38028-7)</sub> |**Description**: This subcategory reports changes in audit policy including SACL changes. Events for this subcategory include: - 4715: The audit policy (SACL) on an object was changed. - 4719: System audit policy was changed. - 4902: The Per-user audit policy table was created. - 4904: An attempt was made to register a security event source. - 4905: An attempt was made to unregister a security event source. - 4906: The CrashOnAuditFail value has changed. - 4907: Auditing settings on object were changed. - 4908: Special Groups Logon table modified. - 4912: Per User Audit Policy was changed. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE922F-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
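+
+Because every row pairs a subcategory GUID with a target setting, remediation can be table-driven. A sketch covering the three Policy Change rows above; the GUIDs come from the **Key Path** column, elevation is required, and the `>=` rows get only `/success:enable` so any stricter Failure auditing already in place is left alone:
+
+```python
+import subprocess
+
+# GUID -> auditpol switches, transcribed from the Policy Change rows above.
+DESIRED = {
+    # ">= Success" rows: only Success is required, so Failure is left as-is.
+    "{0CCE9230-69AE-11D9-BED3-505054503030}": ["/success:enable"],  # Authentication Policy Change
+    "{0CCE922F-69AE-11D9-BED3-505054503030}": ["/success:enable"],  # Audit Policy Change
+    # "= Success and Failure" row: both flags must be enabled.
+    "{0CCE9232-69AE-11D9-BED3-505054503030}": ["/success:enable", "/failure:enable"],  # MPSSVC
+}
+
+for guid, switches in DESIRED.items():
+    subprocess.run(["auditpol", "/set", f"/subcategory:{guid}", *switches], check=True)
+```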
+
+## System Audit Policies - Privilege Use
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Sensitive Privilege Use<br /><sub>(CCE-36267-3)</sub> |**Description**: This subcategory reports when a user account or service uses a sensitive privilege. A sensitive privilege includes the following user rights: Act as part of the operating system, Back up files and directories, Create a token object, Debug programs, Enable computer and user accounts to be trusted for delegation, Generate security audits, Impersonate a client after authentication, Load and unload device drivers, Manage auditing and security log, Modify firmware environment values, Replace a process-level token, Restore files and directories, and Take ownership of files or other objects. Auditing this subcategory will create a high volume of events. Events for this subcategory include: - 4672: Special privileges assigned to new logon. - 4673: A privileged service was called. - 4674: An operation was attempted on a privileged object. Refer to the Microsoft Knowledge Base article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9228-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
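+
+The description warns that this subcategory is high-volume. One rough way to gauge the rate before relying on it is to count recent 4672-4674 events with `wevtutil`; an assumption-laden sketch: it presumes the subcategory is already emitting events and that you can read the Security log, which needs administrative rights:
+
+```python
+import subprocess
+
+# Pull up to 200 of the newest sensitive-privilege events (4672-4674)
+# from the Security log and report how many came back.
+query = "*[System[(EventID=4672 or EventID=4673 or EventID=4674)]]"
+out = subprocess.run(
+    ["wevtutil", "qe", "Security", f"/q:{query}", "/c:200", "/rd:true", "/f:xml"],
+    capture_output=True, text=True, check=True,
+).stdout
+print("events returned:", out.count("<Event "))
+```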
+
+## System Audit Policies - System
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Audit Security State Change<br /><sub>(CCE-38114-5)</sub> |**Description**: This subcategory reports changes in security state of the system, such as when the security subsystem starts and stops. Events for this subcategory include: - 4608: Windows is starting up. - 4609: Windows is shutting down. - 4616: The system time was changed. - 4621: Administrator recovered system from CrashOnAuditFail. Users who are not administrators will now be allowed to log on. Some auditable activity might not have been recorded. Refer to the Microsoft Knowledge Base article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9210-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit Security System Extension<br /><sub>(CCE-36144-4)</sub> |**Description**: This subcategory reports the loading of extension code such as authentication packages by the security subsystem. Events for this subcategory include: - 4610: An authentication package has been loaded by the Local Security Authority. - 4611: A trusted logon process has been registered with the Local Security Authority. - 4614: A notification package has been loaded by the Security Account Manager. - 4622: A security package has been loaded by the Local Security Authority. - 4697: A service was installed in the system. Refer to the Microsoft Knowledge Base article "Description of security events in Windows Vista and in Windows Server 2008" for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9211-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Success<br /><sub>(Audit)</sub> |Critical |
+|Audit System Integrity<br /><sub>(CCE-37132-8)</sub> |**Description**: This subcategory reports on violations of integrity of the security subsystem. Events for this subcategory include: - 4612: Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits. - 4615: Invalid use of LPC port. - 4618: A monitored security event pattern has occurred. - 4816: RPC detected an integrity violation while decrypting an incoming message. - 5038: Code integrity determined that the image hash of a file is not valid. The file could be corrupt due to unauthorized modification or the invalid hash could indicate a potential disk device error. - 5056: A cryptographic self test was performed. - 5057: A cryptographic primitive operation failed. - 5060: Verification operation failed. - 5061: Cryptographic operation. - 5062: A kernel-mode cryptographic self test was performed. Refer to the Microsoft Knowledge Base article 'Description of security events in Windows Vista and in Windows Server 2008' for the most recent information about this setting: https://support.microsoft.com/kb/947226.<br />**Key Path**: {0CCE9212-69AE-11D9-BED3-505054503030}<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Success and Failure<br /><sub>(Audit)</sub> |Critical |
+
+## User Rights Assignment
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Access Credential Manager as a trusted caller<br /><sub>(CCE-37056-9)</sub> |**Description**: This security setting is used by Credential Manager during Backup and Restore. No accounts should have this user right, as it is only assigned to Winlogon. Users' saved credentials might be compromised if this user right is assigned to other entities. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTrustedCredManAccessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Access this computer from the network<br /><sub>(CCE-35818-4)</sub> |**Description**: This policy setting allows other users on the network to connect to the computer and is required by various network protocols that include Server Message Block (SMB) based protocols, NetBIOS, Common Internet File System (CIFS), and Component Object Model Plus (COM+). - *Level 1 - Domain Controller.* The recommended state for this setting is: 'Administrators, Authenticated Users, ENTERPRISE DOMAIN CONTROLLERS'. - *Level 1 - Member Server.* The recommended state for this setting is: 'Administrators, Authenticated Users'.<br />**Key Path**: [Privilege Rights]SeNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users<br /><sub>(Policy)</sub> |Critical |
+|Act as part of the operating system<br /><sub>(CCE-36876-1)</sub> |**Description**: This policy setting allows a process to assume the identity of any user and thus gain access to the resources that the user is authorized to access. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeTcbPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
+|Allow log on locally<br /><sub>(CCE-37659-0)</sub> |**Description**: This policy setting determines which users can interactively log on to computers in your environment. Logons that are initiated by pressing the CTRL+ALT+DEL key sequence on the client computer keyboard require this user right. Users who attempt to log on through Terminal Services or IIS also require this user right. The Guest account is assigned this user right by default. Although this account is disabled by default, Microsoft recommends that you enable this setting through Group Policy. However, this user right should generally be restricted to the Administrators and Users groups. Assign this user right to the Backup Operators group if your organization requires that they have this capability. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Allow log on through Remote Desktop Services<br /><sub>(CCE-37072-6)</sub> |**Description**: This policy setting determines which users or groups have the right to log on as a Terminal Services client. Remote desktop users require this user right. If your organization uses Remote Assistance as part of its help desk strategy, create a group and assign it this user right through Group Policy. If the help desk in your organization does not use Remote Assistance, assign this user right only to the Administrators group or use the restricted groups feature to ensure that no user accounts are part of the Remote Desktop Users group. Restrict this user right to the Administrators group, and possibly the Remote Desktop Users group, to prevent unwanted users from gaining access to computers on your network by means of the Remote Assistance feature. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators, Remote Desktop Users'. **Note:** A Member Server that holds the _Remote Desktop Services_ Role with _Remote Desktop Connection Broker_ Role Service will require a special exception to this recommendation, to allow the 'Authenticated Users' group to be granted this user right. **Note 2:** The above lists are to be treated as allow lists, which implies that the above principals need not be present for assessment of this recommendation to pass.<br />**Key Path**: [Privilege Rights]SeRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Remote Desktop Users<br /><sub>(Policy)</sub> |Critical |
+|Back up files and directories<br /><sub>(CCE-35912-5)</sub> |**Description**: This policy setting allows users to circumvent file and directory permissions to back up the system. This user right is enabled only when an application (such as NTBACKUP) attempts to access a file or directory through the NTFS file system backup application programming interface (API). Otherwise, the assigned file and directory permissions apply. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeBackupPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators, Server Operators<br /><sub>(Policy)</sub> |Critical |
+|Bypass traverse checking<br /><sub>(AZ-WIN-00184)</sub> |**Description**: This policy setting allows users who do not have the Traverse Folder access permission to pass through folders when they browse an object path in the NTFS file system or the registry. This user right does not allow users to list the contents of a folder. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeChangeNotifyPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Authenticated Users, Backup Operators, Local Service, Network Service<br /><sub>(Policy)</sub> |Critical |
+|Change the system time<br /><sub>(CCE-37452-0)</sub> |**Description**: This policy setting determines which users and groups can change the time and date on the internal clock of the computers in your environment. Users who are assigned this user right can affect the appearance of event logs. When a computer's time setting is changed, logged events reflect the new time, not the actual time that the events occurred. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers. **Note:** Discrepancies between the time on the local computer and on the domain controllers in your environment may cause problems for the Kerberos authentication protocol, which could make it impossible for users to log on to the domain or obtain authorization to access domain resources after they are logged on. Also, problems will occur when Group Policy is applied to client computers if the system time is not synchronized with the domain controllers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeSystemtimePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Server Operators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
+|Change the time zone<br /><sub>(CCE-37700-2)</sub> |**Description**: This setting determines which users can change the time zone of the computer. This ability holds no great danger for the computer and may be useful for mobile workers. The recommended state for this setting is: `Administrators, LOCAL SERVICE`.<br />**Key Path**: [Privilege Rights]SeTimeZonePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, LOCAL SERVICE<br /><sub>(Policy)</sub> |Critical |
+|Create a pagefile<br /><sub>(CCE-35821-8)</sub> |**Description**: This policy setting allows users to change the size of the pagefile. By making the pagefile extremely large or extremely small, an attacker could easily affect the performance of a compromised computer. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeCreatePagefilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Create a token object<br /><sub>(CCE-36861-3)</sub> |**Description**: This policy setting allows a process to create an access token, which may provide elevated rights to access sensitive data. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreateTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Create global objects<br /><sub>(CCE-37453-8)</sub> |**Description**: This policy setting determines whether users can create global objects that are available to all sessions. Users can still create objects that are specific to their own session if they do not have this user right. Users who can create global objects could affect processes that run under other users' sessions. This capability could lead to a variety of problems, such as application failure or data corruption. The recommended state for this setting is: `Administrators, LOCAL SERVICE, NETWORK SERVICE, SERVICE`. **Note:** A Member Server with Microsoft SQL Server _and_ its optional "Integration Services" component installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeCreateGlobalPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, SERVICE, LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
+|Create permanent shared objects<br /><sub>(CCE-36532-0)</sub> |**Description**: This user right is useful to kernel-mode components that extend the object namespace. However, components that run in kernel mode have this user right inherently. Therefore, it is typically not necessary to specifically assign this user right. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeCreatePermanentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Create symbolic links<br /><sub>(CCE-35823-4)</sub> |**Description**: This policy setting determines which users can create symbolic links. In Windows Vista, existing NTFS file system objects, such as files and folders, can be accessed by referring to a new kind of file system object called a symbolic link. A symbolic link is a pointer (much like a shortcut or .lnk file) to another file system object, which can be a file, folder, shortcut or another symbolic link. The difference between a shortcut and a symbolic link is that a shortcut only works from within the Windows shell. To other programs and applications, shortcuts are just another file, whereas with symbolic links, the concept of a shortcut is implemented as a feature of the NTFS file system. Symbolic links can potentially expose security vulnerabilities in applications that are not designed to use them. For this reason, the privilege for creating symbolic links should only be assigned to trusted users. By default, only Administrators can create symbolic links. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators' and (when the _Hyper-V_ Role is installed) 'NT VIRTUAL MACHINE\Virtual Machines'.<br />**Key Path**: [Privilege Rights]SeCreateSymbolicLinkPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT VIRTUAL MACHINE\Virtual Machines<br /><sub>(Policy)</sub> |Critical |
+|Deny access to this computer from the network<br /><sub>(CCE-37954-5)</sub> |**Description**: This policy setting prohibits users from connecting to a computer from across the network, which would allow users to access and potentially modify data remotely. In high security environments, there should be no need for remote users to access data on a computer. Instead, file sharing should be accomplished through the use of network servers. - **Level 1 - Domain Controller.** The recommended state for this setting is to include: 'Guests, Local account'. - **Level 1 - Member Server.** The recommended state for this setting is to include: 'Guests, Local account and member of Administrators group'. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server. **Note:** Configuring a member server or standalone server as described above may adversely affect applications that create a local service account and place it in the Administrators group - in which case you must either convert the application to use a domain-hosted service account, or remove Local account and member of Administrators group from this User Right Assignment. Using a domain-hosted service account is strongly preferred over making an exception to this rule, where possible.<br />**Key Path**: [Privilege Rights]SeDenyNetworkLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on as a batch job<br /><sub>(CCE-36923-1)</sub> |**Description**: This policy setting determines which accounts will not be able to log on to the computer as a batch job. A batch job is not a batch (.bat) file, but rather a batch-queue facility. Accounts that use the Task Scheduler to schedule jobs need this user right. The **Deny log on as a batch job** user right overrides the **Log on as a batch job** user right, which could be used to allow accounts to schedule jobs that consume excessive system resources. Such an occurrence could cause a DoS condition. Failure to assign this user right to the recommended accounts can be a security risk. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyBatchLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on as a service<br /><sub>(CCE-36877-9)</sub> |**Description**: This security setting determines which service accounts are prevented from registering a process as a service. This policy setting supersedes the **Log on as a service** policy setting if an account is subject to both policies. The recommended state for this setting is to include: `Guests`. **Note:** This security setting does not apply to the System, Local Service, or Network Service accounts.<br />**Key Path**: [Privilege Rights]SeDenyServiceLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on locally<br /><sub>(CCE-37146-8)</sub> |**Description**: This security setting determines which users are prevented from logging on at the computer. This policy setting supersedes the **Allow log on locally** policy setting if an account is subject to both policies. **Important:** If you apply this security policy to the Everyone group, no one will be able to log on locally. The recommended state for this setting is to include: `Guests`.<br />**Key Path**: [Privilege Rights]SeDenyInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Deny log on through Remote Desktop Services<br /><sub>(CCE-36867-0)</sub> |**Description**: This policy setting determines whether users can log on as Terminal Services clients. After the baseline member server is joined to a domain environment, there is no need to use local accounts to access the server from the network. Domain accounts can access the server for administration and end-user processing. The recommended state for this setting is to include: `Guests, Local account`. **Caution:** Configuring a standalone (non-domain-joined) server as described above may result in an inability to remotely administer the server.<br />**Key Path**: [Privilege Rights]SeDenyRemoteInteractiveLogonRight<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Workgroup Member |\>\= Guests<br /><sub>(Policy)</sub> |Critical |
+|Enable computer and user accounts to be trusted for delegation<br /><sub>(CCE-36860-5)</sub> |**Description**: This policy setting allows users to change the Trusted for Delegation setting on a computer object in Active Directory. Abuse of this privilege could allow unauthorized users to impersonate other users on the network. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators'. - **Level 1 - Member Server.** The recommended state for this setting is: 'No One'.<br />**Key Path**: [Privilege Rights]SeEnableDelegationPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Critical |
+|Force shutdown from a remote system<br /><sub>(CCE-37877-8)</sub> |**Description**: This policy setting allows users to shut down Windows Vista-based computers from remote locations on the network. Anyone who has been assigned this user right can cause a denial of service (DoS) condition, which would make the computer unavailable to service user requests. Therefore, it is recommended that only highly trusted administrators be assigned this user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRemoteShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Generate security audits<br /><sub>(CCE-37639-2)</sub> |**Description**: This policy setting determines which users or processes can generate audit records in the Security log. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server that holds the _Active Directory Federation Services_ Role will require a special exception to this recommendation, to allow the `NT SERVICE\ADFSSrv` and `NT SERVICE\DRS` services, as well as the associated Active Directory Federation Services service account, to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAuditPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Local Service, Network Service, IIS APPPOOL\DefaultAppPool<br /><sub>(Policy)</sub> |Critical |
+|Increase a process working set<br /><sub>(AZ-WIN-00185)</sub> |**Description**: This privilege determines which user accounts can increase or decrease the size of a process's working set. The working set of a process is the set of memory pages currently visible to the process in physical RAM memory. These pages are resident and available for an application to use without triggering a page fault. The minimum and maximum working set sizes affect the virtual memory paging behavior of a process. When configuring a user right in the SCM, enter a comma-delimited list of accounts. Accounts can be either local or located in Active Directory; they can be groups, users, or computers.<br />**Key Path**: [Privilege Rights]SeIncreaseWorkingSetPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Local Service<br /><sub>(Policy)</sub> |Warning |
+|Increase scheduling priority<br /><sub>(CCE-38326-5)</sub> |**Description**: This policy setting determines whether users can increase the base priority class of a process. (It is not a privileged operation to increase relative priority within a priority class.) This user right is not required by administrative tools that are supplied with the operating system but might be required by software development tools. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeIncreaseBasePriorityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Load and unload device drivers<br /><sub>(CCE-36318-4)</sub> |**Description**: This policy setting allows users to dynamically load a new device driver on a system. An attacker could potentially use this capability to install malicious code that appears to be a device driver. This user right is required for users to add local printers or printer drivers in Windows Vista. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeLoadDriverPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, Print Operators<br /><sub>(Policy)</sub> |Warning |
+|Lock pages in memory<br /><sub>(CCE-36495-0)</sub> |**Description**: This policy setting allows a process to keep data in physical memory, which prevents the system from paging the data to virtual memory on disk. If this user right is assigned, significant degradation of system performance can occur. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeLockMemoryPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Manage auditing and security log<br /><sub>(CCE-35906-7)</sub> |**Description**: This policy setting determines which users can change the auditing options for files and directories and clear the Security log. For environments running Microsoft Exchange Server, the 'Exchange Servers' group must possess this privilege on Domain Controllers to properly function. Given this, DCs granting the 'Exchange Servers' group this privilege do conform with this benchmark. If the environment does not use Microsoft Exchange Server, then this privilege should be limited to only 'Administrators' on DCs. - **Level 1 - Domain Controller.** The recommended state for this setting is: 'Administrators' and (when Exchange is running in the environment) 'Exchange Servers'. - **Level 1 - Member Server.** The recommended state for this setting is: 'Administrators'.<br />**Key Path**: [Privilege Rights]SeSecurityPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
+|Modify an object label<br /><sub>(CCE-36054-5)</sub> |**Description**: This privilege determines which user accounts can modify the integrity label of objects, such as files, registry keys, or processes owned by other users. Processes running under a user account can modify the label of an object owned by that user to a lower level without this privilege. The recommended state for this setting is: `No One`.<br />**Key Path**: [Privilege Rights]SeRelabelPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= No One<br /><sub>(Policy)</sub> |Warning |
+|Modify firmware environment values<br /><sub>(CCE-38113-7)</sub> |**Description**: This policy setting allows users to configure the system-wide environment variables that affect hardware configuration. This information is typically stored in the Last Known Good Configuration. Modification of these values could lead to a hardware failure that would result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeSystemEnvironmentPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Perform volume maintenance tasks<br /><sub>(CCE-36143-6)</sub> |**Description**: This policy setting allows users to manage the system's volume or disk configuration, which could allow a user to delete a volume and cause data loss as well as a denial-of-service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeManageVolumePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Profile single process<br /><sub>(CCE-37131-0)</sub> |**Description**: This policy setting determines which users can use tools to monitor the performance of non-system processes. Typically, you do not need to configure this user right to use the Microsoft Management Console (MMC) Performance snap-in. However, you do need this user right if System Monitor is configured to collect data using Windows Management Instrumentation (WMI). Restricting the Profile single process user right prevents intruders from gaining additional information that could be used to mount an attack on the system. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeProfileSingleProcessPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Profile system performance<br /><sub>(CCE-36052-9)</sub> |**Description**: This policy setting allows users to use tools to view the performance of different system processes, which could be abused to allow attackers to determine a system's active processes and provide insight into the potential attack surface of the computer. The recommended state for this setting is: `Administrators, NT SERVICE\WdiServiceHost`.<br />**Key Path**: [Privilege Rights]SeSystemProfilePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | Administrators, NT SERVICE\WdiServiceHost<br /><sub>(Policy)</sub> |Warning |
+|Replace a process level token<br /><sub>(CCE-37430-6)</sub> |**Description**: This policy setting allows one process or service to start another service or process with a different security access token, which can be used to modify the security access token of that sub-process and result in the escalation of privileges. The recommended state for this setting is: `LOCAL SERVICE, NETWORK SERVICE`. **Note:** A Member Server that holds the _Web Server (IIS)_ Role with _Web Server_ Role Service will require a special exception to this recommendation, to allow IIS application pool(s) to be granted this user right. **Note #2:** A Member Server with Microsoft SQL Server installed will require a special exception to this recommendation for additional SQL-generated entries to be granted this user right.<br />**Key Path**: [Privilege Rights]SeAssignPrimaryTokenPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member | LOCAL SERVICE, NETWORK SERVICE<br /><sub>(Policy)</sub> |Warning |
+|Restore files and directories<br /><sub>(CCE-37613-7)</sub> |**Description**: This policy setting determines which users can bypass file, directory, registry, and other persistent object permissions when restoring backed up files and directories on computers that run Windows Vista in your environment. This user right also determines which users can set valid security principals as object owners; it is similar to the Back up files and directories user right. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeRestorePrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member | Administrators, Backup Operators<br /><sub>(Policy)</sub> |Warning |
+|Shut down the system<br /><sub>(CCE-38328-1)</sub> |**Description**: This policy setting determines which users who are logged on locally to the computers in your environment can shut down the operating system with the Shut Down command. Misuse of this user right can result in a denial of service condition. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeShutdownPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Warning |
+|Take ownership of files or other objects<br /><sub>(CCE-38325-7)</sub> |**Description**: This policy setting allows users to take ownership of files, folders, registry keys, processes, or threads. This user right bypasses any permissions that are in place to protect objects to give ownership to the specified user. The recommended state for this setting is: `Administrators`.<br />**Key Path**: [Privilege Rights]SeTakeOwnershipPrivilege<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= Administrators<br /><sub>(Policy)</sub> |Critical |
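+
+Unlike the registry-backed settings later in this baseline, user rights live in the `[Privilege Rights]` section of a security template, so verification typically goes through `secedit /export`. A sketch of that round trip; the temp-file name is arbitrary, the export requires elevation, and holders are listed as SIDs (for example `*S-1-5-32-544` for Administrators) rather than names:
+
+```python
+import configparser
+import os
+import subprocess
+import tempfile
+
+# Export the local security policy to an INF file, then read which
+# principals hold SeBackupPrivilege ("Back up files and directories").
+path = os.path.join(tempfile.gettempdir(), "secpol.inf")
+subprocess.run(["secedit", "/export", "/cfg", path], check=True)
+
+# secedit writes UTF-16; the INF's key=value layout is close enough to
+# an INI file for configparser (strict=False tolerates duplicate keys,
+# interpolation=None avoids tripping on stray "%" characters).
+parser = configparser.ConfigParser(strict=False, interpolation=None)
+parser.read(path, encoding="utf-16")
+holders = parser["Privilege Rights"].get("SeBackupPrivilege", "")  # keys are case-insensitive
+print("SeBackupPrivilege:", holders)
+```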
+
+## Windows Components
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Allow Basic authentication<br /><sub>(CCE-36254-1)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service accepts Basic authentication from a remote client. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowBasic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Allow Cortana<br /><sub>(AZ-WIN-00131)</sub> |**Description**: This policy setting specifies whether Cortana is allowed on the device. If you enable or don't configure this setting, Cortana will be allowed on the device. If you disable this setting, Cortana will be turned off. When Cortana is off, users will still be able to use search to find things on the device and on the Internet.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortana<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Cortana above lock screen<br /><sub>(AZ-WIN-00130)</sub> |**Description**: This policy setting determines whether or not the user can interact with Cortana using speech while the system is locked. If you enable or don't configure this setting, the user can interact with Cortana using speech while the system is locked. If you disable this setting, the system will need to be unlocked for the user to interact with Cortana using speech.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowCortanaAboveLock<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow indexing of encrypted files<br /><sub>(CCE-38277-0)</sub> |**Description**: This policy setting controls whether encrypted items are allowed to be indexed. When this setting is changed, the index is rebuilt completely. Full volume encryption (such as BitLocker Drive Encryption or a non-Microsoft solution) must be used for the location of the index to maintain security for encrypted files. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowIndexingEncryptedStoresOrItems<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Microsoft accounts to be optional<br /><sub>(CCE-38354-7)</sub> |**Description**: This policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. If you enable this policy setting, Windows Store apps that typically require a Microsoft account to sign in will allow users to sign in with an enterprise account instead. If you disable or do not configure this policy setting, users will need to sign in with a Microsoft account.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\MSAOptional<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Allow search and Cortana to use location<br /><sub>(AZ-WIN-00133)</sub> |**Description**: This policy setting specifies whether search and Cortana can provide location-aware search and Cortana results. If this is enabled, search and Cortana can access location information.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Windows Search\AllowSearchToUseLocation<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow Telemetry<br /><sub>(AZ-WIN-00169)</sub> |**Description**: This policy setting determines the amount of diagnostic and usage data reported to Microsoft. A value of 0 will send minimal data to Microsoft. This data includes Malicious Software Removal Tool (MSRT) & Windows Defender data, if enabled, and telemetry client settings. Setting a value of 0 applies to enterprise, EDU, IoT and server devices only. Setting a value of 0 for other devices is equivalent to choosing a value of 1. A value of 1 sends only a basic amount of diagnostic and usage data. Note that setting values of 0 or 1 will degrade certain experiences on the device. A value of 2 sends enhanced diagnostic and usage data. A value of 3 sends the same data as a value of 2, plus additional diagnostics data, including the files and content that may have caused the problem. Windows 10 telemetry settings apply to the Windows operating system and some first party apps. This setting does not apply to third party apps running on Windows 10. The recommended state for this setting is: `Enabled: 0 - Security [Enterprise Only]`. **Note:** If the "Allow Telemetry" setting is configured to "0 - Security [Enterprise Only]", then the options in Windows Update to defer upgrades and updates will have no effect.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\AllowTelemetry<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 0<br /><sub>(Registry)</sub> |Warning |
+|Allow unencrypted traffic<br /><sub>(CCE-38223-4)</sub> |**Description**: This policy setting allows you to manage whether the Windows Remote Management (WinRM) service sends and receives unencrypted messages over the network. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowUnencryptedTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Allow user control over installs<br /><sub>(CCE-36400-0)</sub> |**Description**: Permits users to change installation options that typically are available only to system administrators. The security features of Windows Installer prevent users from changing installation options typically reserved for system administrators, such as specifying the directory to which files are installed. If Windows Installer detects that an installation package has permitted the user to change a protected option, it stops the installation and displays a message. These security features operate only when the installation program is running in a privileged security context in which it has access to directories denied to the user. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\EnableUserControl<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Always install with elevated privileges<br /><sub>(CCE-37490-0)</sub> |**Description**: This setting controls whether or not Windows Installer should use system permissions when it installs any program on the system. **Note:** This setting appears both in the Computer Configuration and User Configuration folders. To make this setting effective, you must enable the setting in both folders. **Caution:** If enabled, skilled users can take advantage of the permissions this setting grants to change their privileges and gain permanent access to restricted files and folders. Note that the User Configuration version of this setting is not guaranteed to be secure. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Installer\AlwaysInstallElevated<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Always prompt for password upon connection<br /><sub>(CCE-37929-7)</sub> |**Description**: This policy setting specifies whether Terminal Services always prompts the client computer for a password upon connection. You can use this policy setting to enforce a password prompt for users who log on to Terminal Services, even if they already provided the password in the Remote Desktop Connection client. By default, Terminal Services allows users to automatically log on if they enter a password in the Remote Desktop Connection client. Note: If you do not configure this policy setting, the local computer administrator can use the Terminal Services Configuration tool to either allow or prevent passwords from being automatically sent.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fPromptForPassword<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Application: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37775-4)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Application: Specify the maximum log file size (KB)<br /><sub>(CCE-37948-7)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Application\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Configure local setting override for reporting to Microsoft MAPS<br /><sub>(AZ-WIN-00173)</sub> |**Description**: This policy setting configures a local override for the configuration to join Microsoft MAPS. This setting can only be set by Group Policy. If you enable this setting, the local preference setting will take priority over Group Policy. If you disable or do not configure this setting, Group Policy will take priority over the local preference setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\LocalSettingOverrideSpynetReporting<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Configure Windows SmartScreen<br /><sub>(CCE-35859-8)</sub> |**Description**: This policy setting allows you to manage the behavior of Windows SmartScreen. Windows SmartScreen helps keep PCs safer by warning users before running unrecognized programs downloaded from the Internet. Some information is sent to Microsoft about files and programs run on PCs with this feature enabled. If you enable this policy setting, Windows SmartScreen behavior may be controlled by setting one of the following options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen. If you disable or do not configure this policy setting, Windows SmartScreen behavior is managed by administrators on the PC by using Windows SmartScreen Settings in Security and Maintenance. Options: • Give user a warning before running downloaded unknown software • Turn off SmartScreen<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\System\EnableSmartScreen<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |In 1-2<br /><sub>(Registry)</sub> |Warning |
+|Detect change from default RDP port<br /><sub>(AZ-WIN-00156)</sub> |**Description**: This setting determines whether the network port that listens for Remote Desktop Connections has been changed from the default 3389<br />**Key Path**: System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 3389<br /><sub>(Registry)</sub> |Critical |
+|Disable Windows Search Service<br /><sub>(AZ-WIN-00176)</sub> |**Description**: This registry setting disables the Windows Search Service<br />**Key Path**: System\CurrentControlSet\Services\Wsearch\Start<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 4<br /><sub>(Registry)</sub> |Critical |
+|Disallow Autoplay for non-volume devices<br /><sub>(CCE-37636-8)</sub> |**Description**: This policy setting disallows AutoPlay for MTP devices like cameras or phones. If you enable this policy setting, AutoPlay is not allowed for MTP devices like cameras or phones. If you disable or do not configure this policy setting, AutoPlay is enabled for non-volume devices.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoAutoplayfornonVolume<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Disallow Digest authentication<br /><sub>(CCE-38318-2)</sub> |**Description**: This policy setting controls whether the Windows Remote Management (WinRM) client may use Digest authentication. When enabled, the WinRM client does not use Digest authentication. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Client\AllowDigest<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Disallow WinRM from storing RunAs credentials<br /><sub>(CCE-36000-8)</sub> |**Description**: This policy setting controls whether the Windows Remote Management (WinRM) service allows RunAs credentials to be stored for any plug-ins. If you enable this policy setting, the WinRM service will not allow the RunAsUser or RunAsPassword configuration values to be set for any plug-ins. If a plug-in has already set the RunAsUser and RunAsPassword configuration values, the RunAsPassword configuration value will be erased from the credential store on this computer. If you disable or do not configure this policy setting, the WinRM service will allow the RunAsUser and RunAsPassword configuration values to be set for plug-ins and the RunAsPassword value will be stored securely. If you enable and then disable this policy setting, any values that were previously configured for RunAsPassword will need to be reset.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\WinRM\Service\DisableRunAs<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not allow passwords to be saved<br /><sub>(CCE-36223-6)</sub> |**Description**: This policy setting helps prevent Terminal Services clients from saving passwords on a computer. Note: If this policy setting was previously configured as Disabled or Not configured, any previously saved passwords will be deleted the first time a Terminal Services client disconnects from any server.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DisablePasswordSaving<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not delete temp folders upon exit<br /><sub>(CCE-37946-1)</sub> |**Description**: This policy setting specifies whether Remote Desktop Services retains a user's per-session temporary folders at logoff. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\DeleteTempDirsOnExit<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not display the password reveal button<br /><sub>(CCE-37534-5)</sub> |**Description**: This policy setting allows you to configure the display of the password reveal button in password entry user experiences. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CredUI\DisablePasswordReveal<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Do not show feedback notifications<br /><sub>(AZ-WIN-00140)</sub> |**Description**: This policy setting allows an organization to prevent its devices from showing feedback questions from Microsoft. If you enable this policy setting, users will no longer see feedback notifications through the Windows Feedback app. If you disable or do not configure this policy setting, users may see notifications through the Windows Feedback app asking users for feedback. Note: If you disable or do not configure this policy setting, users can control how often they receive feedback questions.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\DataCollection\DoNotShowFeedbackNotifications<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Do not use temporary folders per session<br /><sub>(CCE-38180-6)</sub> |**Description**: By default, Remote Desktop Services creates a separate temporary folder on the RD Session Host server for each active session that a user maintains on the RD Session Host server. The temporary folder is created on the RD Session Host server in a Temp folder under the user's profile folder and is named with the "sessionid." This temporary folder is used to store individual temporary files. To reclaim disk space, the temporary folder is deleted when the user logs off from a session. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\PerSessionTempDir<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Enumerate administrator accounts on elevation<br /><sub>(CCE-36512-2)</sub> |**Description**: This policy setting controls whether administrator accounts are displayed when a user attempts to elevate a running application. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\CredUI\EnumerateAdministrators<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Prevent downloading of enclosures<br /><sub>(CCE-37126-0)</sub> |**Description**: This policy setting prevents the user from having enclosures (file attachments) downloaded from a feed to the user's computer. The recommended state for this setting is: `Enabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Internet Explorer\Feeds\DisableEnclosureDownload<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Require secure RPC communication<br /><sub>(CCE-37567-5)</sub> |**Description**: Specifies whether a Remote Desktop Session Host server requires secure RPC communication with all clients or allows unsecured communication. You can use this setting to strengthen the security of RPC communication with clients by allowing only authenticated and encrypted requests. If the status is set to Enabled, Remote Desktop Services accepts requests from RPC clients that support secure requests, and does not allow unsecured communication with untrusted clients. If the status is set to Disabled, Remote Desktop Services always requests security for all RPC traffic. However, unsecured communication is allowed for RPC clients that do not respond to the request. If the status is set to Not Configured, unsecured communication is allowed. Note: The RPC interface is used for administering and configuring Remote Desktop Services.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\fEncryptRPCTraffic<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Require user authentication for remote connections by using Network Level Authentication<br /><sub>(AZ-WIN-00149)</sub> |**Description**: This policy setting determines whether Network Level Authentication is required to authenticate users before a remote connection to this computer is established.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\UserAuthentication<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Scan removable drives<br /><sub>(AZ-WIN-00177)</sub> |**Description**: This policy setting allows you to manage whether or not to scan for malicious software and unwanted software in the contents of removable drives, such as USB flash drives, when running a full scan. If you enable this setting, removable drives will be scanned during any type of scan. If you disable or do not configure this setting, removable drives will not be scanned during a full scan. Removable drives may still be scanned during quick scan and custom scan.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Scan\DisableRemovableDriveScanning<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Security: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-37145-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Security: Specify the maximum log file size (KB)<br /><sub>(CCE-37695-4)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Security\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 196608<br /><sub>(Registry)</sub> |Critical |
+|Send file samples when further analysis is required<br /><sub>(AZ-WIN-00126)</sub> |**Description**: This policy setting configures the behavior of sample submission when opt-in for MAPS telemetry is set. Possible options are: (0x0) Always prompt; (0x1) Send safe samples automatically; (0x2) Never send; (0x3) Send all samples automatically.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\SpyNet\SubmitSamplesConsent<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Set client connection encryption level<br /><sub>(CCE-36627-8)</sub> |**Description**: This policy setting specifies whether the computer that is about to host the remote connection will enforce an encryption level for all data sent between it and the client computer for the remote session.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\MinEncryptionLevel<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 3<br /><sub>(Registry)</sub> |Critical |
+|Set the default behavior for AutoRun<br /><sub>(CCE-38217-6)</sub> |**Description**: This policy setting sets the default behavior for Autorun commands. Autorun commands are generally stored in autorun.inf files. They often launch the installation program or other routines. Prior to Windows Vista, when media containing an autorun command was inserted, the system would automatically execute the program without user intervention. This creates a major security concern, as code may be executed without the user's knowledge. The default behavior starting with Windows Vista is to prompt the user whether the autorun command should be run. The autorun command is represented as a handler in the Autoplay dialog. If you enable this policy setting, an administrator can change the default Windows Vista or later behavior for autorun to: a) completely disable autorun commands, or b) revert to the pre-Windows Vista behavior of automatically executing the autorun command. If you disable or do not configure this policy setting, Windows Vista or later will prompt the user whether the autorun command should be run.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoAutorun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Setup: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-38276-2)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Setup: Specify the maximum log file size (KB)<br /><sub>(CCE-37526-1)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\Setup\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Sign-in last interactive user automatically after a system-initiated restart<br /><sub>(CCE-36977-7)</sub> |**Description**: This policy setting controls whether a device will automatically sign in the last interactive user after Windows Update restarts the system. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DisableAutomaticRestartSignOn<br />**OS**: WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Specify the interval to check for definition updates<br /><sub>(AZ-WIN-00152)</sub> |**Description**: This policy setting allows you to specify an interval at which to check for definition updates. The time value is represented as the number of hours between update checks. Valid values range from 1 (every hour) to 24 (once per day). If you enable this setting, checking for definition updates will occur at the interval specified. If you disable or do not configure this setting, checking for definition updates will occur at the default interval.<br />**Key Path**: SOFTWARE\Microsoft\Microsoft Antimalware\Signature Updates\SignatureUpdateInterval<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 8<br /><sub>(Registry)</sub> |Critical |
+|System: Control Event Log behavior when the log file reaches its maximum size<br /><sub>(CCE-36160-0)</sub> |**Description**: This policy setting controls Event Log behavior when the log file reaches its maximum size. If you enable this policy setting and a log file reaches its maximum size, new events are not written to the log and are lost. If you disable or do not configure this policy setting and a log file reaches its maximum size, new events overwrite old events. Note: Old events may or may not be retained according to the "Backup log automatically when full"  policy setting.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\Retention<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|System: Specify the maximum log file size (KB)<br /><sub>(CCE-36092-5)</sub> |**Description**: This policy setting specifies the maximum size of the log file in kilobytes. If you enable this policy setting, you can configure the maximum log file size to be between 1 megabyte (1024 kilobytes) and 2 terabytes (2147483647 kilobytes) in kilobyte increments. If you disable or do not configure this policy setting, the maximum size of the log file will be set to the locally configured value. This value can be changed by the local administrator using the Log Properties dialog and it defaults to 20 megabytes.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\EventLog\System\MaxSize<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\>\= 32768<br /><sub>(Registry)</sub> |Critical |
+|Turn off Autoplay<br /><sub>(CCE-36875-3)</sub> |**Description**: Autoplay starts to read from a drive as soon as you insert media in the drive, which causes the setup file for programs or audio media to start immediately. An attacker could use this feature to launch a program to damage the computer or data on the computer. You can enable the Turn off Autoplay setting to disable the Autoplay feature. Autoplay is disabled by default on some removable drive types, such as floppy disk and network drives, but not on CD-ROM drives. Note: You cannot use this policy setting to enable Autoplay on computer drives in which it is disabled by default, such as floppy disk and network drives.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoDriveTypeAutoRun<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 255<br /><sub>(Registry)</sub> |Critical |
+|Turn off Data Execution Prevention for Explorer<br /><sub>(CCE-37809-1)</sub> |**Description**: Disabling data execution prevention can allow certain legacy plug-in applications to function without terminating Explorer. The recommended state for this setting is: `Disabled`. **Note:** Some legacy plug-in applications and other software may not function with Data Execution Prevention and will require an exception to be defined for that specific plug-in/software.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoDataExecutionPrevention<br />**OS**: WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off heap termination on corruption<br /><sub>(CCE-36660-9)</sub> |**Description**: Without heap termination on corruption, legacy plug-in applications may continue to function when a File Explorer session has become corrupt. Ensuring that heap termination on corruption is active will prevent this. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\Explorer\NoHeapTerminationOnCorruption<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Critical |
+|Turn off Microsoft consumer experiences<br /><sub>(AZ-WIN-00144)</sub> |**Description**: This policy setting turns off experiences that help consumers make the most of their devices and Microsoft account. If you enable this policy setting, users will no longer see personalized recommendations from Microsoft and notifications about their Microsoft account. If you disable or do not configure this policy setting, users may see suggestions from Microsoft and notifications about their Microsoft account. Note: This setting only applies to Enterprise and Education SKUs.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows\CloudContent\DisableWindowsConsumerFeatures<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Warning |
+|Turn off shell protocol protected mode<br /><sub>(CCE-36809-2)</sub> |**Description**: This policy setting allows you to configure the amount of functionality that the shell protocol can have. When using the full functionality of this protocol, applications can open folders and launch files. The protected mode reduces the functionality of this protocol, allowing applications to open only a limited set of folders. Applications are not able to open files with this protocol when it is in the protected mode. It is recommended to leave this protocol in the protected mode to increase the security of Windows. The recommended state for this setting is: `Disabled`.<br />**Key Path**: SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\PreXPSP2ShellProtocolBehavior<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+|Turn on behavior monitoring<br /><sub>(AZ-WIN-00178)</sub> |**Description**: This policy setting allows you to configure behavior monitoring. If you enable or do not configure this setting, behavior monitoring will be enabled. If you disable this setting, behavior monitoring will be disabled.<br />**Key Path**: SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring<br />**OS**: WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 0<br /><sub>(Registry)</sub> |Warning |
+
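A minimal sketch of how any row in this baseline can be audited locally, assuming Windows and Python's standard `winreg` module. It mirrors the "Allow unencrypted traffic" row, whose compliant state is "Doesn't exist or = 0"; the key path and value name come from the table, while the helper function is our own illustration, not part of any Azure Policy tooling.

```python
import winreg  # standard library; available on Windows only

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WinRM\Client"
VALUE_NAME = "AllowUnencryptedTraffic"

def read_policy_value(key_path: str, value_name: str):
    """Return the value stored under HKLM at key_path, or None if absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            value, _value_type = winreg.QueryValueEx(key, value_name)
            return value
    except FileNotFoundError:  # the key or the value does not exist
        return None

value = read_policy_value(KEY_PATH, VALUE_NAME)
# Per the table row, "Doesn't exist or = 0" is the compliant state.
print("compliant" if value in (None, 0) else f"non-compliant (value={value})")
```

The same pattern extends to every other row: read the value at the listed key path under `HKEY_LOCAL_MACHINE` and compare it against the expected value in the table.
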
+## Windows Firewall Properties
+
+|Name<br /><sub>(ID)</sub> |Details |Expected value<br /><sub>(Type)</sub> |Severity |
+|||||
+|Windows Firewall: Domain: Allow unicast response<br /><sub>(AZ-WIN-00088)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. We recommend setting this to ‘Yes’ for the Private and Domain profiles; this sets the registry value to 0.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Domain: Firewall state<br /><sub>(CCE-36062-8)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Outbound connections<br /><sub>(CCE-36146-9)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. In Windows Vista, the default behavior is to allow connections unless there are firewall rules that block the connection.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local connection security rules<br /><sub>(CCE-38040-2)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Apply local firewall rules<br /><sub>(CCE-37860-4)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Domain: Settings: Display a notification<br /><sub>(CCE-38041-0)</sub> |**Description**: When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful: the user is not logged on, so they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’; this sets the registry value to 1, and Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\DomainProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Allow unicast response<br /><sub>(AZ-WIN-00089)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. We recommend setting this to ‘Yes’ for the Private and Domain profiles; this sets the registry value to 0.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Private: Firewall state<br /><sub>(CCE-38239-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Outbound connections<br /><sub>(CCE-38332-3)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local connection security rules<br /><sub>(CCE-36063-6)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Apply local firewall rules<br /><sub>(CCE-37438-9)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Private: Settings: Display a notification<br /><sub>(CCE-37621-0)</sub> |**Description**: When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful: the user is not logged on, so they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’; this sets the registry value to 1, and Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PrivateProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Allow unicast response<br /><sub>(AZ-WIN-00090)</sub> |**Description**: This option is useful if you need to control whether this computer receives unicast responses to its outgoing multicast or broadcast messages. This can be done by changing the state of this setting to ‘No’; this sets the registry value to 1.<br />**Key Path**: Software\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableUnicastResponsesToMulticastBroadcast<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+|Windows Firewall: Public: Firewall state<br /><sub>(CCE-37862-0)</sub> |**Description**: Select On (recommended) to have Windows Firewall with Advanced Security use the settings for this profile to filter network traffic. If you select Off, Windows Firewall with Advanced Security will not use any of the firewall rules or connection security rules for this profile.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\EnableFirewall<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Outbound connections<br /><sub>(CCE-37434-8)</sub> |**Description**: This setting determines the behavior for outbound connections that do not match an outbound firewall rule. The default behavior is to allow connections unless there are firewall rules that block the connection. Important: If you set Outbound connections to Block and then deploy the firewall policy by using a GPO, computers that receive the GPO settings cannot receive subsequent Group Policy updates unless you create and deploy an outbound rule that enables Group Policy to work. Predefined rules for Core Networking include outbound rules that enable Group Policy to work. Ensure that these outbound rules are active, and thoroughly test firewall profiles before deploying.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DefaultOutboundAction<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 0<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local connection security rules<br /><sub>(CCE-36268-1)</sub> |**Description**: This setting controls whether local administrators are allowed to create local connection rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalIPsecPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Apply local firewall rules<br /><sub>(CCE-37861-2)</sub> |**Description**: This setting controls whether local administrators are allowed to create local firewall rules that apply together with firewall rules configured by Group Policy. The recommended state for this setting is ‘Yes’; this sets the registry value to 1.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\AllowLocalPolicyMerge<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |Doesn't exist or \= 1<br /><sub>(Registry)</sub> |Critical |
+|Windows Firewall: Public: Settings: Display a notification<br /><sub>(CCE-38043-6)</sub> |**Description**: When this option is selected, no notification is displayed to the user when a program is blocked from receiving inbound connections. In a server environment, these pop-ups are not useful: the user is not logged on, so they are unnecessary and can add confusion for the administrator. Configure this policy setting to ‘No’; this sets the registry value to 1, and Windows Firewall will not display a notification when a program is blocked from receiving inbound connections.<br />**Key Path**: SOFTWARE\Policies\Microsoft\WindowsFirewall\PublicProfile\DisableNotifications<br />**OS**: WS2008, WS2008R2, WS2012, WS2012R2, WS2016<br />**Server Type**: Domain Controller, Domain Member, Workgroup Member |\= 1<br /><sub>(Registry)</sub> |Warning |
+
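The three "Firewall state" rows above all expect `EnableFirewall` to be 1 under the profile-specific policy key. A short sketch, again assuming Windows and Python's standard `winreg` module, that audits all three profiles in one pass; the loop and output format are our own illustration:

```python
import winreg  # Windows-only standard library module

BASE = r"SOFTWARE\Policies\Microsoft\WindowsFirewall"

for profile in ("DomainProfile", "PrivateProfile", "PublicProfile"):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, rf"{BASE}\{profile}") as key:
            state, _ = winreg.QueryValueEx(key, "EnableFirewall")
    except FileNotFoundError:
        state = None  # policy key or value not configured via Group Policy
    status = "on (compliant)" if state == 1 else f"unexpected state: {state!r}"
    print(f"{profile}: {status}")
```
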
+> [!NOTE]
+> Availability of specific Azure Policy Guest Configuration settings may vary in Azure Government
+> and other national clouds.
+
+## Next steps
+
+Additional articles about Azure Policy and Guest Configuration:
+
+- [Azure Policy Guest Configuration](../concepts/guest-configuration.md).
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
hdinsight Apache Spark Intellij Tool Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-intellij-tool-plugin.md
description: Use the Azure Toolkit for IntelliJ to develop Spark applications wr
Previously updated : 04/13/2020 Last updated : 04/14/2020 # Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster
If you're not going to continue to use this application, delete the cluster that
In this article, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
> [!div class="nextstepaction"]
-> [Analyze Apache Spark data using Power BI](apache-spark-use-bi-tools.md)
+> [Analyze Apache Spark data using Power BI](apache-spark-use-bi-tools.md)
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Read the license terms prior to using a package. Your installation and use of a
## Import update
-1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.1-importManifest.json). This apt manifest will install the latest available version of `libcurl4-doc package` to your device.
+1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the "Assets" drop-down.
- Alternatively, you can download this [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-7.58-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-2-2.0.1-importManifest.json). This will install specific version v7.58.0 of the `libcurl4-doc package` to your device.
+2. Download the `apt-update-import-samples.zip` file by selecting it.
+
+3. Extract the contents of the .zip file to find the various update samples and their corresponding import manifests.
2. In Azure portal, select the Device Updates option under Automatic Device Management from the left-hand navigation bar in your IoT Hub.
Read the license terms prior to using a package. Your installation and use of a
4. Select "+ Import New Update".
-5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the Import Manifest you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the apt manifest update file you downloaded previously.
+5. Select the folder icon or text box under "Select an Import Manifest File". You will see a file picker dialog. Select the `sample-package-update-1.0.1-importManifest.json` import manifest from the folder you downloaded previously. Next, select the folder icon or text box under "Select one or more update files". You will see a file picker dialog. Select the `sample-1.0.1-libcurl4-doc-apt-manifest.json` apt manifest update file from the folder you downloaded previously.
+This update will install the latest available version of the `libcurl4-doc` package on your device.
+
+ Alternatively, you can select the `sample-package-update-2-2.0.1-importManifest.json` import manifest file and `sample-2.0.1-libcurl4-doc-7.58-apt-manifest.json` apt manifest update file from the folder you downloaded previously. This will install the specific version 7.58.0 of the `libcurl4-doc` package on your device.
:::image type="content" source="media/import-update/select-update-files.png" alt-text="Screenshot showing update file selection." lightbox="media/import-update/select-update-files.png":::
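
For orientation, an apt manifest like the ones referenced above is a small JSON file naming the packages to install. The sketch below builds one in Python; the field names (`name`, `version`, `packages`) and the update name are illustrative assumptions, so verify them against the files you extracted from `apt-update-import-samples.zip`.

```python
import json

# Illustrative apt manifest; confirm field names against the extracted samples.
apt_manifest = {
    "name": "contoso-libcurl4-doc-update",  # hypothetical update name
    "version": "1.0.1",
    "packages": [
        {"name": "libcurl4-doc"},  # no "version" listed: latest available is installed
    ],
}

with open("my-apt-manifest.json", "w") as manifest_file:
    json.dump(apt_manifest, manifest_file, indent=2)
```
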
You have now completed a successful end-to-end package update using Device Updat
## Bonus steps
-1. Download the following [apt manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/libcurl4-doc-remove-apt-manifest.json) and [import manifest file](https://github.com/Azure/iot-hub-device-update/tree/main/docs/sample-artifacts/sample-package-update-1.0.2-importManifest.json). This apt manifest will remove the installed `libcurl4-doc package` from your device.
- 1. Repeat the "Import update" and "Deploy update" sections
+3. During the "Import update" step, select the `sample-package-update-1.0.2-importManifest.json` import manifest file and `sample-1.0.2-libcurl4-doc-remove-apt-manifest.json` apt manifest update file from the folder you downloaded previously. This update will remove the installed `libcurl4-doc package` from your device.
+## Clean up resources
+
+When no longer needed, clean up your device update account, instance, IoT Hub, and the IoT Edge device (if you created the VM via the Deploy to Azure button). You can do so by going to each individual resource and selecting "Delete". Note that you need to clean up a device update instance before cleaning up the device update account.
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/understand-device-update.md
To realize the full benefits of IoT-enabled digital transformation, customers ne
## Support for a wide range of IoT devices
-Device Update for IoT Hub is designed to offer optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/en-us/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Azure RTOS](https://azure.microsoft.com/en-us/services/rtos/) (real-time operating system), and is extensible via open source. We are codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductors evaluation boards that includes the get started guides to learn how to configure, build, and deploy the over-the-air (OTA) updates to MCU class devices.
+
+Device Update for IoT Hub is designed to offer optimized update deployment and streamlined operations through integration with [Azure IoT Hub](https://azure.microsoft.com/en-us/services/iot-hub/). This integration makes it easy to adopt Device Update on any existing solution. It provides a cloud-hosted solution to connect virtually any device. Device Update supports a broad range of IoT operating systems, including Linux and [Azure RTOS](https://azure.microsoft.com/en-us/services/rtos/) (real-time operating system), and is extensible via open source. We are codeveloping Device Update for IoT Hub offerings with our semiconductor partners, including STMicroelectronics, NXP, Renesas, and Microchip. See the [samples](https://github.com/azure-rtos/samples/tree/PublicPreview/ADU) of key semiconductor evaluation boards, which include get-started guides to learn how to configure, build, and deploy over-the-air (OTA) updates to MCU-class devices.
Both a Device Update Agent Simulator binary and Raspberry Pi reference Yocto images are provided. Device Update for IoT Hub also supports updating Azure IoT Edge devices. A Device Update Agent is provided for Ubuntu Server 18.04 amd64
deployment. If there are no updates in progress, the status is returned as "Id
### Importing
-Importing is the ability to import your update into Device Update. Device Update supports rolling out a single update per device. This makes it ideal for
+Importing is how your updates are ingested into Device Update so they can be deployed to devices. Device Update supports rolling out a single update per device. This makes it ideal for
full-image updates that update an entire OS partition at once, or an apt Manifest that describes all the packages you want to update on your device. To import updates into Device Update, you first create an import manifest describing the update, then upload the update file(s) and the import
-manifest to an Internet-accessible location. After that, you can use the Azure portal or the Device Update Import
-REST API to initiate the asynchronous process of update import. Device Update uploads the files, processes
+manifest to an Internet-accessible location. After that, you can use the Azure portal or the [Device Update Import
+REST API](https://github.com/Azure/iot-hub-device-update/tree/main/docs/publish-api-reference) to initiate the asynchronous process of update import. Device Update uploads the files, processes
them, and makes them available for distribution to IoT devices. For sensitive content, protect the download using a shared access signature (SAS), such as an ad-hoc SAS for Azure Blob Storage. [Learn more about
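
For the SAS-protected download mentioned here, a sketch using the `azure-storage-blob` Python SDK to mint a short-lived, read-only ad-hoc SAS URL; the account, container, and blob names are placeholders, not values from this article:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT = "mystorageaccount"           # placeholder storage account name
CONTAINER = "updates"                  # placeholder container holding update files
BLOB = "sample-update.json"            # placeholder update file
ACCOUNT_KEY = "<storage-account-key>"  # placeholder; never commit real keys

sas_token = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),                # read-only download
    expiry=datetime.now(timezone.utc) + timedelta(hours=3),  # short-lived token
)
print(f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}")
```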
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ip-filtering.md
Previously updated : 10/19/2020 Last updated : 03/12/2021
IP filter rules are *allow* rules and applied without ordering. Only IP addresse
For example, if you want to accept addresses in the range `192.168.100.0/22` and reject everything else, you only need to add one rule in the grid with address range `192.168.100.0/22`.
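
To see why a single `192.168.100.0/22` rule is enough, note that the block spans 192.168.100.0 through 192.168.103.255. A quick check with Python's standard `ipaddress` module; the sample addresses are arbitrary:

```python
import ipaddress

allowed = ipaddress.ip_network("192.168.100.0/22")  # 192.168.100.0 - 192.168.103.255

for caller in ("192.168.100.7", "192.168.103.254", "192.168.104.1"):
    verdict = "accepted" if ipaddress.ip_address(caller) in allowed else "rejected"
    print(f"{caller}: {verdict}")  # only the last address falls outside the rule
```
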
+### Azure portal
+
+IP filter rules are also applied when you use IoT Hub through the Azure portal. This is because API calls to the IoT Hub service are made directly from your browser with your credentials, which is consistent with other Azure services. To access IoT Hub by using the Azure portal when IP filtering is enabled, add your computer's IP address to the allow list.
+## Retrieve and update IP filters using Azure CLI
+
+Your IoT Hub's IP filters can be retrieved and updated through [Azure CLI](/cli/azure/).
To further explore the capabilities of IoT Hub, see:
* [IoT Hub metrics](./monitor-iot-hub.md) * [IoT Hub support for virtual networks with Private Link and Managed Identity](virtual-network-support.md) * [Managing public network access for your IoT hub](iot-hub-public-network-access.md)
-* [Monitor IoT Hub](monitor-iot-hub.md)
+* [Monitor IoT Hub](monitor-iot-hub.md)
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
Previously updated : 02/12/2021 Last updated : 03/12/2021 # Managing public network access for your IoT hub
To restrict access to only [private endpoint for your IoT hub in your VNet](virt
To turn on public network access, select **All networks**, then **Save**.
+## Accessing the IoT Hub after disabling public network access
+
+After public network access is disabled, your IoT hub is accessible only through [its VNet private endpoint using Azure Private Link](virtual-network-support.md). This restriction includes access through the Azure portal, because API calls to the IoT Hub service are made directly from your browser with your credentials.
+## IoT Hub endpoint, IP address, and ports after disabling public network access
+
+IoT Hub is a multi-tenant Platform-as-a-Service (PaaS), so different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices over wide-area networks and on-premises networks can all access it.
lighthouse Cloud Solution Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cloud-solution-provider.md
Title: Cloud Solution Provider program considerations description: For CSP partners, Azure delegated resource management helps improve security and control by enabling granular permissions. Previously updated : 09/22/2020 Last updated : 03/12/2021
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Azure AD tenants. Previously updated : 08/12/2020 Last updated : 03/12/2021 # Azure Lighthouse in enterprise scenarios
-A common scenario for [Azure Lighthouse](../overview.md) is a service provider managing resources in its customers' Azure Active Directory (Azure AD) tenants. However, the capabilities of Azure Lighthouse can also be used to simplify cross-tenant management within an enterprise that uses multiple Azure AD tenants.
+A common scenario for [Azure Lighthouse](../overview.md) is when a service provider manages resources in Azure Active Directory (Azure AD) tenants that belong to customers. The capabilities of Azure Lighthouse can also be used to simplify cross-tenant management within an enterprise that uses multiple Azure AD tenants.
## Single vs. multiple tenants
-For most organizations, management is easier with a single Azure AD tenant. Having all resources within one tenant allows centralization of management tasks by designated users, user groups, or service principals within that tenant. We recommend using one tenant for your organization whenever possible. However, some organizations might have multiple Azure AD tenants. Sometimes this can be a temporary situation, as when acquisitions have taken place and a long-term tenant consolidation strategy hasn't been defined yet. Other times, organizations may need to maintain multiple tenants on an ongoing basis due to wholly independent subsidiaries, geographical or legal requirements, or other considerations.
+For most organizations, management is easier with a single Azure AD tenant. Having all resources within one tenant allows centralization of management tasks by designated users, user groups, or service principals within that tenant. We recommend using one tenant for your organization whenever possible.
+
+Some organizations may need to use multiple Azure AD tenants. This might be a temporary situation, as when acquisitions have taken place and a long-term tenant consolidation strategy hasn't been defined yet. Other times, organizations may need to maintain multiple tenants on an ongoing basis due to wholly independent subsidiaries, geographical or legal requirements, or other considerations.
In cases where a multi-tenant architecture is required, Azure Lighthouse can help centralize and streamline management operations. By using [Azure delegated resource management](azure-delegated-resource-management.md), users in one managing tenant can perform [cross-tenant management functions](cross-tenant-management-experience.md) in a centralized, scalable manner. ## Tenant management architecture
-To use Azure Lighthouse in an enterprise, you'll need to determine which tenant will include the users who perform management operations on the other tenants. In other words, you will need to determine which tenant will be the managing tenant for the other tenants.
+To use Azure Lighthouse in an enterprise, you'll need to determine which tenant will include the users who perform management operations on the other tenants. In other words, you will need to designate one tenant as the managing tenant for the other tenants.
-For example, say your organization has a single tenant that we'll call *Tenant A*. Your organization then acquires *Tenant B* and *Tenant C*, and you have business reasons that require you to maintain them as separate tenants.
+For example, say your organization has a single tenant that we'll call *Tenant A*. Your organization then acquires *Tenant B* and *Tenant C*, and you have business reasons that require you to maintain them as separate tenants. However, you'd like to use the same policy definitions, backup practices, and security processes for all of them, with management tasks performed by the same set of users.
-Your organization wants to use the same policy definitions, backup practices, and security processes across all tenants. Since Tenant A already includes users who are responsible for these tasks, you can onboard subscriptions within Tenant B and Tenant C, allowing the same users in Tenant A to perform those tasks.
+Since Tenant A already includes users in your organization who have been performing those tasks for Tenant A, you can onboard subscriptions within Tenant B and Tenant C, which allows the same users in Tenant A to perform those tasks across all tenants.
![Diagram showing users in Tenant A managing resources in Tenant B and Tenant C.](../media/enterprise-azure-lighthouse.jpg)
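+For illustration, onboarding is performed by deploying an Azure Lighthouse ARM template at subscription scope in each managed tenant. A minimal sketch, where the template and parameter file names are placeholders:
+
+```azurecli
+# Sign in to Tenant B and select the subscription to delegate (placeholder names)
+az login --tenant TenantB.onmicrosoft.com
+az account set --subscription "Tenant B subscription"
+
+# Deploy the onboarding template at subscription scope
+az deployment sub create \
+    --name LighthouseOnboarding \
+    --location eastus \
+    --template-file delegatedResourceManagement.json \
+    --parameters delegatedResourceManagement.parameters.json
+```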
For cross-tenant management within the enterprise, references to service provide
For instance, in the example described above, Tenant A can be thought of as the service provider tenant (the managing tenant) and Tenant B and Tenant C can be thought of as the customer tenants.
-In that example, Tenant A users with the appropriate permissions can [view and manage delegated resources](../how-to/view-manage-customers.md) in the **My customers** page of the Azure portal. Likewise, Tenant B and Tenant C users with the appropriate permissions can [view and manage the resources that have been delegated](../how-to/view-manage-service-providers.md) to Tenant A in the **Service providers** page of the Azure portal.
+Continuing with that example, Tenant A users with the appropriate permissions can [view and manage delegated resources](../how-to/view-manage-customers.md) in the **My customers** page of the Azure portal. Likewise, Tenant B and Tenant C users with the appropriate permissions can [view and manage the resources that have been delegated](../how-to/view-manage-service-providers.md) to Tenant A in the **Service providers** page of the Azure portal.
## Next steps - Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).-- Learn about [Azure delegated resource management](azure-delegated-resource-management.md).
+- Learn about [Azure delegated resource management](azure-delegated-resource-management.md).
lighthouse Recommended Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/recommended-security-practices.md
Title: Recommended security practices description: When using Azure Lighthouse, it's important to consider security and access control. Previously updated : 08/12/2020 Last updated : 03/12/2021
When using [Azure Lighthouse](../overview.md), it's important to consider securi
[Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) (also known as two-step verification) helps prevent attackers from gaining access to an account by requiring multiple authentication steps. You should require Multi-Factor Authentication for all users in your managing tenant, including users who will have access to delegated customer resources.
-We suggest that you ask your customers to implement Azure AD Multi-Factor Authentication in their tenants as well.
+We recommend that you ask your customers to implement Azure AD Multi-Factor Authentication in their tenants as well.
## Assign permissions to groups, using the principle of least privilege
Keep in mind that when you [onboard customers through a public managed service
## Next steps
+- Review the [security baseline information](../security-baseline.md) to understand how guidance from the Azure Security Benchmark applies to Azure Lighthouse.
- [Deploy Azure AD Multi-Factor Authentication](../../active-directory/authentication/howto-mfa-getstarted.md). - Learn about [cross-tenant management experiences](cross-tenant-management-experience.md).
lighthouse Manage Hybrid Infrastructure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/manage-hybrid-infrastructure-arc.md
Title: Manage hybrid infrastructure at scale with Azure Arc description: Learn how to effectively manage your customers' machines and Kubernetes clusters outside of Azure. Previously updated : 09/22/2020 Last updated : 03/12/2021
As a service provider, you may have onboarded multiple customer tenants to [Azur
With [Azure Arc enabled servers](../../azure-arc/servers/overview.md), customers can manage any Windows and Linux machines hosted outside of Azure on their corporate network, in the same way they manage native Azure virtual machines. By linking a hybrid machine to Azure, it becomes connected and is treated as a resource in Azure. Service providers can then manage these non-Azure machines along with their customers' Azure resources.
-[Azure Arc enabled Kubernetes (preview)](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
+[Azure Arc enabled Kubernetes](../../azure-arc/kubernetes/overview.md) lets customers attach and configure Kubernetes clusters inside or outside of Azure. When a Kubernetes cluster is attached to Azure Arc, it will appear in the Azure portal, with an Azure Resource Manager ID and a managed identity. Clusters are attached to standard Azure subscriptions, are located in a resource group, and can receive tags just like any other Azure resource.
-This topic provides an overview of how service providers can use Azure Arc enabled servers and Azure Arc enabled Kubernetes (preview) in a scalable way to manage their customers' hybrid environment, with visibility across all managed customer tenants.
+This topic provides an overview of how service providers can use Azure Arc enabled servers and Azure Arc enabled Kubernetes in a scalable way to manage their customers' hybrid environment, with visibility across all managed customer tenants.
> [!TIP] > Though we refer to service providers and customers in this topic, this guidance also applies to [enterprises using Azure Lighthouse to manage multiple tenants](../concepts/enterprise.md).
When viewing resources for a delegated subscription in the Azure portal, you'll
For example, you can [ensure the same set of policies are applied across customers' hybrid machines](../../azure-arc/servers/learn/tutorial-assign-policy-portal.md). You can also use Azure Security Center to monitor compliance across all of your customers' hybrid environments, or [use Azure Monitor to collect data directly from your hybrid machines](../../azure-arc/servers/learn/tutorial-enable-vm-insights.md) into a Log Analytics workspace. [Virtual machine extensions](../../azure-arc/servers/manage-vm-extensions.md) can be deployed to non-Azure Windows and Linux VMs, simplifying management of customer's hybrid machines.
-## Manage hybrid Kubernetes clusters at scale with Azure Arc enabled Kubernetes (preview)
-
-> [!NOTE]
-> Azure Arc enabled Kubernetes is currently in preview. We don't recommend it for production workloads at this time.
+## Manage hybrid Kubernetes clusters at scale with Azure Arc enabled Kubernetes
You can manage Kubernetes clusters that have been [connected to a customer's subscription with Azure Arc](../../azure-arc/kubernetes/connect-cluster.md), just as if they were running in Azure.
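+For reference, clusters are typically connected using the `connectedk8s` Azure CLI extension. A minimal sketch with placeholder names, run with a kubeconfig for the target cluster:
+
+```azurecli
+# Install the extension, then connect an existing Kubernetes cluster to Azure Arc
+az extension add --name connectedk8s
+az connectedk8s connect --name myArcCluster --resource-group myResourceGroup
+```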
You can also monitor connected clusters with Azure Monitor, and [use Azure Polic
## Next steps -- Explore the jumpstarts and samples in the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc).
+- Explore the jumpstarts and samples in the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc).
- Learn about [supported scenarios for Azure Arc enabled servers](../../azure-arc/servers/overview.md#supported-scenarios). - Learn about [Kubernetes distributions supported by Azure Arc](../../azure-arc/kubernetes/overview.md#supported-kubernetes-distributions). - Learn how to [deploy a policy at scale](policy-at-scale.md). - Learn how to [use Azure Monitor Logs at scale](monitor-at-scale.md).-
lighthouse View Manage Customers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/view-manage-customers.md
Title: View and manage customers and delegated resources
+ Title: View and manage customers and delegated resources in the Azure portal
description: As a service provider or enterprise using Azure Lighthouse, you can view all of your delegated resources and subscriptions by going to My customers in the Azure portal. Previously updated : 08/12/2020 Last updated : 03/12/2021
-# View and manage customers and delegated resources
+# View and manage customers and delegated resources in the Azure portal
-Service providers using [Azure Lighthouse](../overview.md) can use the **My customers** page in the [Azure portal](https://portal.azure.com) to view delegated customer resources and subscriptions.
+Service providers using [Azure Lighthouse](../overview.md) can use the **My customers** page in the [Azure portal](https://portal.azure.com) to view delegated customer resources and subscriptions.
> [!TIP] > While we'll refer to service providers and customers here, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same process to consolidate their management experience. To access the **My customers** page in the Azure portal, select **All services**, then search for **My customers** and select it. You can also find it by entering "My customers" in the search box near the top of the Azure portal.
-Keep in mind that the top **Customers** section of the **My customers** page only shows info about customers who have delegated subscriptions or resource groups. If you work with other customers (such as through the [Cloud Solution Provider program](/partner-center/csp-overview)), you won't see info about those customers in the **Customers** section unless you have [onboarded their resources to Azure Lighthouse](onboard-customer.md).
-
-Lower on the page, a separate section called **Cloud Solution Provider (Preview)** shows billing info and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more info, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md). Note that such CSP customers appear in this section whether or not you have also onboarded them to Azure Lighthouse. Similarly, a CSP customer does not have to appear in the **Cloud Solution Provider (Preview)** section of **My customers** in order for you to onboard them to Azure Lighthouse.
+Keep in mind that the top **Customers** section of the **My customers** page only shows info about customers who have delegated subscriptions or resource groups to your Azure Active Directory (Azure AD) tenant through Azure Lighthouse. If you work with other customers (such as through the [Cloud Solution Provider (CSP) program](/partner-center/csp-overview)), you won't see info about those customers in the **Customers** section unless you have [onboarded their resources to Azure Lighthouse](onboard-customer.md) (though you may see details about certain CSP customers in the [Cloud Solution Provider (Preview) section](#cloud-solution-provider-preview) lower on the page).
> [!NOTE] > Your customers can view info about service providers by navigating to **Service providers** in the Azure portal. For more info, see [View and manage service providers](view-manage-service-providers.md).
Lower on the page, a separate section called **Cloud Solution Provider (Preview)
To view customer details, select **Customers** on the left side of the **My customers** page.
-For each customer, you'll see the customer's name, customer ID (tenant ID), and the offer associated with the engagement. In the **Delegations** column, you'll see the number of delegated subscriptions and/or the number of delegated resource groups.
- > [!IMPORTANT]
-> In order to see a delegation, users must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) in the onboarding process.
+> In order to see this information, users must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role which includes Reader access) in the onboarding process.
+
+For each customer, you'll see the customer's name, customer ID (tenant ID), and the **Offer ID** and **Offer version** associated with the engagement. In the **Delegations** column, you'll see the number of delegated subscriptions and/or the number of delegated resource groups.
-Filters at the top of the page let you sort and group your customer info or filter by specific customers, offers, or keywords.
+Options at the top of the page let you sort, filter, and group your customer information by specific customers, offers, or keywords.
-You can view the following info from this page:
+You can view the following information from this page:
- To see all of the subscriptions, offers, and delegations associated with a customer, select the customer's name. - To see more details about an offer and its delegations, select the offer name.
You can view the following info from this page:
Delegations show the subscription or resource group that has been delegated, along with the users and permissions that have access to it. To view this info, select **Delegations** on the left side of the **My customers** page.
-Filters at the top of the page let you sort and group your access assignment info or filter by specific customers, offers, or keywords.
+Options at the top of the page let you sort, filter, and group this information by specific customers, offers, or keywords.
### View role assignments
The users and permissions associated with each delegation appear in the **Role a
If you included users with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer to Azure Lighthouse, those users can remove a delegation by selecting the trash can icon that appears in the row for that delegation. When they do so, no users in the service provider's tenant will be able to access the resources that had been previously delegated.
+For more information, see [Remove access to a delegation](remove-delegation.md).
+
+## View delegation change activity
+
+The **Activity log** section of the **My customers** page keeps track of every time customer subscriptions or resource groups are delegated to your tenant, and every time previously delegated resources are removed. This information can only be viewed by users who have been [assigned the Monitoring Reader role at root scope](monitor-delegation-changes.md).
+
+For more information, see [View delegation changes in the Azure portal](monitor-delegation-changes.md#view-delegation-changes-in-the-azure-portal).
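+As a sketch, that role assignment might look like the following; the user principal is a placeholder, and assigning a role at root scope requires temporarily elevated access:
+
+```azurecli
+# Grant a managing-tenant user the Monitoring Reader role at root scope
+# so they can view delegation change activity
+az role assignment create \
+    --assignee "admin@contoso.com" \
+    --role "Monitoring Reader" \
+    --scope "/"
+```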
+ ## Work in the context of a delegated subscription
-You can work directly in the context of a delegated subscription within the Azure portal, without switching the directory you're working in. To do so:
+You can work directly in the context of a delegated subscription within the Azure portal, without switching the directory you're signed in to. To do so:
1. Select the **Directory + Subscription** icon near the top of the Azure portal.
-2. In the **Global subscription** filter, ensure that only the box for that delegated subscription is selected. You can use the **Current + delegated directories** drop-down box to show only subscriptions within a specific directory. (Do not use the **Switch directory** option, since that changes the directory to which you're signed in.)
+2. In the **Default subscription filter**, ensure that only the box for that delegated subscription is selected. You can use the **Current + delegated directories** drop-down box to show only subscriptions within a specific directory. (Do not use the **Switch directory** option, since that changes the directory to which you're signed in.)
If you then access a service which supports [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md), the service will default to the context of the delegated subscription that you selected. You can change this by following the steps above and checking the **Select all** box (or choosing one or more subscriptions to work in instead). > [!NOTE]
-> If you have been granted access to one or more resource groups, rather than access to an entire subscription, you can select the subscription to which that resource group belongs. You'll then work in the context of that subscription, but will only be able to access the designated resource groups.
+> If you have been granted access to one or more resource groups, rather than access to an entire subscription, select the subscription to which that resource group belongs. You'll then work in the context of that subscription, but will only be able to access the designated resource groups.
You can also access functionality related to delegated subscriptions or resource groups from within services that support cross-tenant management experiences by selecting the subscription or resource group from within that service.
+## Cloud Solution Provider (Preview)
+
+A separate **Cloud Solution Provider (Preview)** section of the **My customers** page shows billing info and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more information, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
+
+Such CSP customers will appear in this section whether or not you have also onboarded them to Azure Lighthouse. Similarly, a CSP customer does not have to appear in the **Cloud Solution Provider (Preview)** section of **My customers** in order for you to onboard them to Azure Lighthouse.
+ ## Next steps - Learn about [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/cross-region-overview.md
The backend pool of cross-region load balancer contains one or more regional loa
Add your existing load balancer deployments to a cross-region load balancer for a highly available, cross-region deployment.
-**Home region** is where the cross-region load balancer is deployed.
+**Home region** is where the cross-region load balancer or the Global tier public IP address is deployed.
This region doesn't affect how the traffic will be routed. If a home region goes down, traffic flow is unaffected. ### Home regions
This region doesn't affect how the traffic will be routed. If a home region goes
* East Asia > [!NOTE]
-> You can only deploy your cross-region load balancer in one of the 7 regions above.
+> You can only deploy your cross-region load balancer or Global tier public IP in one of the 7 regions above.
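+For example, a Global tier public IP might be created as follows; this is a sketch using East Asia from the list above, with placeholder resource names:
+
+```azurecli
+# Create a Global tier Standard public IP in a supported home region
+az network public-ip create \
+    --resource-group myResourceGroupLB-CR \
+    --name myGlobalPublicIP \
+    --sku Standard \
+    --tier Global \
+    --location eastasia
+```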
-A **participating region** is where the global public IP of the load balancer is available.
+A **participating region** is where the Global public IP of the load balancer is available.
Traffic started by the user will travel to the closest participating region through the Microsoft core network.
load-balancer Load Balancer Multiple Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip-cli.md
# Load balancing on multiple IP configurations using Azure CLI
+> [!div class="op_single_selector"]
+> * [Portal](load-balancer-multiple-ip.md)
+> * [CLI](load-balancer-multiple-ip-cli.md)
+> * [PowerShell](load-balancer-multiple-ip-powershell.md)
+ This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses. ![LB scenario image](./media/load-balancer-multiple-ip/lb-multi-ip.PNG)
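+For example, exposing the second website requires its own frontend IP configuration on the load balancer. A hedged sketch with placeholder resource names:
+
+```azurecli
+# Add a frontend IP configuration for the fabrikam.com website,
+# backed by its own public IP address
+az network lb frontend-ip create \
+    --resource-group myResourceGroup \
+    --lb-name myLoadBalancer \
+    --name fabrikamFrontEnd \
+    --public-ip-address myPublicIpFabrikam
+```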
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip-powershell.md
> * [CLI](load-balancer-multiple-ip-cli.md) > * [PowerShell](load-balancer-multiple-ip-powershell.md) - This article describes how to use Azure Load Balancer with multiple IP addresses on a secondary network interface (NIC). For this scenario, we have two VMs running Windows, each with a primary and a secondary NIC. Each of the secondary NICs has two IP configurations. Each VM hosts both websites contoso.com and fabrikam.com. Each website is bound to one of the IP configurations on the secondary NIC. We use Azure Load Balancer to expose two frontend IP addresses, one for each website, to distribute traffic to the respective IP configuration for the website. This scenario uses the same port number across both frontends, as well as both backend pool IP addresses. ![LB scenario image](./media/load-balancer-multiple-ip/lb-multi-ip.PNG) - ## Steps to load balance on multiple IP configurations + Follow the steps below to achieve the scenario outlined in this article: 1. Install Azure PowerShell. See [How to install and configure Azure PowerShell](/powershell/azure/) for information about installing the latest version of Azure PowerShell, selecting your subscription, and signing in to your account.
load-balancer Load Balancer Multiple Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-multiple-ip.md
> * [PowerShell](load-balancer-multiple-ip-powershell.md) > * [CLI](load-balancer-multiple-ip-cli.md) - In this article, we're going to show you how to use Azure Load Balancer with multiple IP addresses on a secondary network interface controller (NIC). The following diagram illustrates our scenario: ![Load balancer scenario](./media/load-balancer-multiple-ip/lb-multi-ip.PNG)
load-balancer Tutorial Cross Region Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/tutorial-cross-region-cli.md
Create a cross-region load balancer with [az network cross-region-lb create](/cl
--backend-pool-name myBackEndPool-CR ```
-### Create the health probe
-
-Create a cross-region load balancer health probe with [az network cross-region lb probe create](/cli/azure/network/cross-region-lb/probe#az_network_cross_region_lb_probe_create):
-
-* Named **myHealthProbe-CR**.
-* Protocol **Tcp**.
-* Port **80**.
-
-```azurecli-interactive
- az network cross-region lb probe create \
- --lb-name myLoadBalancer-CR \
- --name myHealthProbe-CR \
- --port 80 \
- --protocol Tcp \
- --resource-group myResourceGroupLB-CR
-```
- ### Create the load balancer rule A load balancer rule defines:
Create a load balancer rule with [az network cross-region-lb rule create](/cli/a
--protocol tcp \ --resource-group myResourceGroupLB-CR \ --backend-pool-name myBackEndPool-CR \
- --frontend-ip-name myFrontEnd-CR \
- --probe-name myHealthProbe-CR
+ --frontend-ip-name myFrontEnd-CR
``` ## Create backend pool
When no longer needed, use the [az group delete](/cli/azure/group#az-group-delet
In this tutorial, you: * Created a cross-region load balancer.
-* Created a health probe.
* Created a load-balancing rule. * Added regional load balancers to the backend pool of the cross-region load balancer. * Tested the load balancer.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
ms.suite: integration Previously updated : 02/18/2021 Last updated : 03/12/2021 # Reference guide to using functions in expressions for Azure Logic Apps and Power Automate
workflow().<property>
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*property*> | No | String | The name for the workflow property whose value you want <p>A workflow object has these properties: **name**, **type**, **id**, **location**, and **run**. The **run** property value is also an object that has these properties: **name**, **type**, and **id**. |
+| <*property*> | No | String | The name for the workflow property whose value you want <p><p>By default, a workflow object has these properties: `name`, `type`, `id`, `location`, `run`, and `tags`. <p><p>- The `run` property value is a JSON object that includes these properties: `name`, `type`, and `id`. <p><p>- The `tags` property is a JSON object that includes [tags that are associated with your logic app in Azure Logic Apps or flow in Power Automate](../azure-resource-manager/management/tag-resources.md) and the values for those tags. For more information about tags in Azure resources, review [Tag resources, resource groups, and subscriptions for logical organization in Azure](../azure-resource-manager/management/tag-resources.md). <p><p>**Note**: By default, a logic app has no tags, but a Power Automate flow has the `flowDisplayName` and `environmentName` tags. |
|||||
-*Example*
+*Example 1*
This example returns the name for a workflow's current run:
-```
-workflow().run.name
-```
+`workflow().run.name`
+
+*Example 2*
+
+If you use Power Automate, you can create a `@workflow()` expression that uses the `tags` output property to get the values from your flow's `flowDisplayName` or `environmentName` property.
+
+For example, you can send custom email notifications from the flow itself that link back to your flow. These notifications can include an HTML link that contains the flow's display name in the email title and follows this syntax:
+
+`<a href=https://flow.microsoft.com/manage/environments/@{workflow()['tags']['environmentName']}/flows/@{workflow()['name']}/details>Open flow @{workflow()['tags']['flowDisplayName']}</a>`
<a name="xml"></a>
machine-learning Train Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/train-model.md
Previously updated : 11/25/2020 Last updated : 03/10/2021 # Train Model module
In Azure Machine Learning, creating and using a machine learning model is typica
> [!TIP] > If you have trouble using the Column Selector, see the article [Select Columns in Dataset](./select-columns-in-dataset.md) for tips. It describes some common scenarios and tips for using the **WITH RULES** and **BY NAME** options.
-1. Submit the pipeline. If you have a lot of data, this can take a while.
+1. Submit the pipeline. If you have a lot of data, it can take a while.
> [!IMPORTANT] > If you have an ID column that identifies each row, or a text column that contains too many unique values, **Train Model** may hit an error like "Number of unique values in column: '{column_name}' is greater than allowed." > > This is because the column hit the threshold of unique values and may cause an out-of-memory error. You can use [Edit Metadata](edit-metadata.md) to mark that column as **Clear feature** so it will not be used in training, or use the [Extract N-Gram Features from Text module](extract-n-gram-features-from-text.md) to preprocess the text column. See [Designer error codes](././designer-error-codes.md) for more details.
+## Model Interpretability
+
+Model interpretability makes it possible to understand an ML model and to present the underlying basis for its decisions in a way that is understandable to humans.
+
+Currently, the **Train Model** module supports [using the interpretability package to explain ML models](https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability-aml#generate-feature-importance-values-via-remote-runs). The following built-in algorithms are supported:
+
+- Linear Regression
+- Neural Network Regression
+- Two-Class Logistic Regression
+- Two-Class Support Vector Machine
+- Multi-class Decision Forest
+
+To generate model explanations, select **True** in the **Model Explanation** drop-down list of the **Train Model** module. By default, it is set to **False**. Note that generating explanations incurs extra compute cost.
+
+![Screenshot showing model explanation checkbox](./media/module/train-model-explanation-checkbox.png)
+
+After the pipeline run completes, you can visit the **Explanations** tab in the right pane of the **Train Model** module and explore the model performance, dataset, and feature importance.
+
+![Screenshot showing model explanation charts](./media/module/train-model-explanations-tab.gif)
+
+To learn more about using model explanations in Azure Machine Learning, refer to the how-to article about [Interpret ML models](https://docs.microsoft.com/azure/machine-learning/how-to-machine-learning-interpretability-aml#generate-feature-importance-values-via-remote-runs).
+ ## Results After the model is trained:
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-register-datasets.md
titanic_ds.take(3).to_pandas_dataframe()
To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
+## Wrangle data
+After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data wrangling and [exploration](#explore-data) prior to model training.
+
+If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
+
+### Filter datasets (preview)
+Filtering capabilities depend on the type of dataset you have.
+> [!IMPORTANT]
+> Filtering datasets with the public preview method, [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+>
+**For TabularDatasets**, you can keep or remove columns with the [keep_columns()](/python/api/azureml-core/azureml.data.tabulardataset#keep-columns-columns--validate-false-) and [drop_columns()](/python/api/azureml-core/azureml.data.tabulardataset#drop-columns-columns-) methods.
+
+To filter out rows by a specific column value in a TabularDataset, use the [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) method (preview).
+
+The following examples return an unregistered dataset based on the specified expressions.
+
+```python
+# TabularDataset that only contains records where the age column value is greater than 15
+tabular_dataset = tabular_dataset.filter(tabular_dataset['age'] > 15)
+
+# TabularDataset that contains records where the name column value contains 'Bri' and the age column value is greater than 15
+tabular_dataset = tabular_dataset.filter((tabular_dataset['name'].contains('Bri')) & (tabular_dataset['age'] > 15))
+```
+
+**In FileDatasets**, each row corresponds to the path of a file, so filtering by column value is not helpful. But you can [filter()](/python/api/azureml-core/azureml.data.filedataset#filter-expression-) out rows by metadata like CreationTime, Size, etc.
+
+The following examples return an unregistered dataset based on the specified expressions.
+
+```python
+from datetime import datetime
+
+# FileDataset that only contains files where Size is less than 100000
+file_dataset = file_dataset.filter(file_dataset.file_metadata['Size'] < 100000)
+
+# FileDataset that only contains files that were either created prior to Jan 1, 2020, or whose CanSeek metadata is False
+file_dataset = file_dataset.filter((file_dataset.file_metadata['CreatedTime'] < datetime(2020,1,1)) | (file_dataset.file_metadata['CanSeek'] == False))
+```
+
+**Labeled datasets** created from [data labeling projects](how-to-create-labeling-projects.md) are a special case. These datasets are a type of TabularDataset made up of image files. For these types of datasets, you can [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) images by metadata, and by column values like `label` and `image_details`.
+
+```python
+# Dataset that only contains records where the label column value is dog
+labeled_dataset = labeled_dataset.filter(labeled_dataset['label'] == 'dog')
+
+# Dataset that only contains records where the label column's isCrowd value is True and where the file size is larger than 100000
+labeled_dataset = labeled_dataset.filter((labeled_dataset['label']['isCrowd'] == True) & (labeled_dataset.file_metadata['Size'] > 100000))
+```
+ ## Explore data
-After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data exploration prior to model training. If you don't need to do any data exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
+After you're done wrangling your data, you can [register](#register-datasets) your dataset and then load it into your notebook for data exploration prior to model training.
For FileDatasets, you can either **mount** or **download** your dataset, and apply the python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
titanic_ds = titanic_ds.register(workspace=workspace,
## Create datasets using Azure Resource Manager
-There are a number of templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-dataset-create-*](https://github.com/Azure/azure-quickstart-templates/tree/master/) that can be used to create datasets.
+There are many templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-dataset-create-*](https://github.com/Azure/azure-quickstart-templates/tree/master/) that can be used to create datasets.
For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
Previously updated : 11/20/2020 Last updated : 03/12/2021
When using an Azure Machine Learning workspace with a private endpoint, there ar
- Optionally, [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps).
-## FQDNs in use
-### These FQDNs are in use in the following regions: eastus, southcentralus, and westus2.
-The following list contains the fully qualified domain names (FQDN) used by your workspace:
+## Public regions
-* `<workspace-GUID>.workspace.<region>.cert.api.azureml.ms`
-* `<workspace-GUID>.workspace.<region>.api.azureml.ms`
-* `<workspace-GUID>.workspace.<region>.experiments.azureml.net`
-* `<workspace-GUID>.workspace.<region>.modelmanagement.azureml.net`
-* `<workspace-GUID>.workspace.<region>.aether.ms`
-* `ml-<workspace-name>-<region>-<workspace-guid>.notebooks.azure.net`
-* If you create a compute instance, you must also add an entry for `<instance-name>.<region>.instances.azureml.ms` with the private IP of the workspace private endpoint.
-
- > [!NOTE]
- > Compute instances can be accessed only from within the virtual network.
-
-### These FQDNs are in use in all other public regions
-The following list contains the fully qualified domain names (FQDN) used by your workspace:
+The following list contains the fully qualified domain names (FQDN) used by your workspace if it is in a public region:
* `<workspace-GUID>.workspace.<region>.cert.api.azureml.ms` * `<workspace-GUID>.workspace.<region>.api.azureml.ms`
The following list contains the fully qualified domain names (FQDN) used by your
> [!NOTE] > Compute instances can be accessed only from within the virtual network.
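+If you maintain the DNS records yourself, each FQDN above needs an entry that resolves to the workspace private endpoint IP. A minimal sketch, assuming a `privatelink.api.azureml.ms` private DNS zone and placeholder GUID, region, and address:
+
+```azurecli
+# Add an A record for the workspace API FQDN to the private DNS zone
+az network private-dns record-set a add-record \
+    --resource-group myResourceGroup \
+    --zone-name privatelink.api.azureml.ms \
+    --record-set-name "<workspace-GUID>.workspace.eastus" \
+    --ipv4-address 10.1.0.5
+```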
-### Azure China 21Vianet regions
+## Azure China 21Vianet regions
The following FQDNs are for Azure China 21Vianet regions:
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/create-cluster-cli.md
This quickstart demonstrates how to use the Azure CLI commands to create a clust
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-* This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
* [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](https://docs.microsoft.com/azure/architecture/reference-architectures/hybrid-networking/) article. * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+> [!IMPORTANT]
+> This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
## <a id="create-cluster"></a>Create a managed instance cluster
This quickstart demonstrates how to use the Azure CLI commands to create a clust
``` > [!NOTE]
- > The `assignee` and `role` values in the previous command are fixed service principle and role identifiers respectively.
+ > The `assignee` and `role` values in the previous command are fixed values. Enter them exactly as shown in the command; not doing so will lead to errors when creating the cluster. If you encounter errors when running this command, you may not have permissions to run it. Reach out to your admin for permissions.
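+For reference, the shape of that role assignment is sketched below; the GUID placeholders stand in for the fixed values given in this quickstart, and the scope is your virtual network's resource ID:
+
+```azurecli
+# Assign the fixed role to the fixed service principal on the virtual network
+az role assignment create \
+    --assignee <fixed-assignee-guid-from-quickstart> \
+    --role <fixed-role-guid-from-quickstart> \
+    --scope /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>
+```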
1. Next create the cluster in your newly created Virtual Network. Run the following command and make sure that you use the `Resource ID` value retrieved in the previous command as the value of `delegatedManagementSubnetId` variable:
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/create-cluster-portal.md
If you don't have an Azure subscription, create a [free account](https://azure.m
``` > [!NOTE]
- > The `assignee` and `role` values in the previous command are fixed service principle and role identifiers respectively.
+ > The `assignee` and `role` values in the previous command are fixed values. Enter them exactly as shown in the command; not doing so will lead to errors when creating the cluster. If you encounter errors when running this command, you may not have permissions to run it. Reach out to your admin for permissions.
1. Now that you are finished with networking, click **Review + create** > **Create**
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/managed-instance-apache-cassandra/manage-resources-cli.md
This article describes common commands to automate the management of your Azure
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
-* This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
- > [!IMPORTANT]
+> This article requires the Azure CLI version 2.12.1 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
+>
> Azure Managed Instance for Apache Cassandra resources cannot be renamed, as this violates how Azure Resource Manager works with resource URIs. ## Azure Managed Instance for Apache Cassandra clusters
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/azure-vm-get-sas-uri.md
Previously updated : 01/10/2021 Last updated : 03/10/2021+ # How to generate a SAS URI for a VM image
marketplace Co Sell Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-requirements.md
Previously updated : 2/24/2021 Last updated : 3/12/2021 # Co-sell requirements
This table shows all possible co-sell statuses:
| | - | | Not co-sell ready | The minimum [requirements for Co-sell ready status](#requirements-for-co-sell-ready-status) have not been met. | | Co-sell ready | All [requirements for Co-sell ready status](#requirements-for-co-sell-ready-status) have been met. |
-| Azure IP Co-sell incentivized | Co-sell ready requirements have been met in addition to [these additional requirements](#requirements-for-ip-co-sell-incentivized-status). |
+| Azure IP Co-sell incentivized | Co-sell ready requirements have been met in addition to [these additional requirements](#requirements-for-azure-ip-co-sell-incentivized-status). |
| Biz Apps ISV Connect Premium incentive | This status applies to Dynamics 365 and Power Apps offers and indicates that all [requirements for this status](#requirements-for-biz-apps-isv-connect-premium-incentive-status) have been met. | |||
For an offer to achieve co-sell ready status, you must meet the following requir
**All partners**: - Have an MPN ID and an active [commercial marketplace account in Partner Center](./partner-center-portal/create-account.md).-- Make sure you have a complete [business profile](/partner-center/create-a-marketing-profile.md) in Partner Center. As a qualified Microsoft partner, your business profile helps to showcase your business to customers who are looking for your unique solutions and expertise to address their business needs, resulting in [referrals](/partner-center/referrals.md).
+- Make sure you have a complete [business profile](/partner-center/create-a-marketing-profile) in Partner Center. As a qualified Microsoft partner, your business profile helps to showcase your business to customers who are looking for your unique solutions and expertise to address their business needs, resulting in [referrals](/partner-center/referrals).
- Complete the **Co-sell with Microsoft** tab and publish the offer to the commercial marketplace. - Provide a sales contact for each co-sell eligible geography and the required bill of materials.
We provide templates to help you create these documents. For more information ab
To qualify for co-sell ready status, your offer or solution must be published live to at least one of the commercial marketplace online stores: Azure Marketplace or Microsoft AppSource. For information about publishing offers to the commercial marketplace, see [Publishing guide by offer type](publisher-guide-by-offer-type.md). If you haven't published an offer in the commercial marketplace before, make sure you have a [commercial marketplace account](./partner-center-portal/create-account.md).
-## Requirements for IP Co-sell incentivized status
+## Requirements for Azure IP Co-sell incentivized status
Azure IP Co-sell incentivized status applies to the following offer types:
marketplace Co Sell Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-status.md
The following table shows all possible co-sell statuses. To learn about the requ
| | - | | Not co-sell ready | The minimum [requirements for Co-sell ready status](co-sell-requirements.md#requirements-for-co-sell-ready-status) have not been met. | | Co-sell ready | All [requirements for Co-sell ready status](co-sell-requirements.md#requirements-for-co-sell-ready-status) have been met. |
-| Azure IP Co-sell incentivized | Co-sell ready requirements have been met in addition to [these additional requirements](co-sell-requirements.md#requirements-for-ip-co-sell-incentivized-status). |
+| Azure IP Co-sell incentivized | Co-sell ready requirements have been met in addition to [these additional requirements](co-sell-requirements.md#requirements-for-azure-ip-co-sell-incentivized-status). |
| Biz Apps ISV Connect Premium incentive | This status applies to Dynamics 365 and Power Apps offers and indicates that all [requirements for this status](co-sell-requirements.md#requirements-for-biz-apps-isv-connect-premium-incentive-status) have been met. | |||
marketplace Marketplace Co Sell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-co-sell.md
Previously updated : 3/08/2021 Last updated : 3/12/2021 # Co-sell with Microsoft sales teams and partners overview
A co-sell opportunity is any type of collaboration with Microsoft sales teams, M
- **Co-sell with Microsoft sales teams** – Work with one or more Microsoft sales teams to actively fulfill customer needs. This can include selling your offers, Microsoft's offers, or both. You and Microsoft sales teams can identify and share customer opportunities in which your solutions may be a good fit. - **Partner to Partner (P2P)** – Work with another Microsoft partner to actively solve a customer problem. - **Private deal** – Share what you are independently working on with Microsoft so it will be reflected in the Microsoft reporting system for analysis and forecasting.-- **Solution Assessment (SA)** – Work with partners who are vetted by the solution assessments business team to access the technology needs of customers who are using or planning to use Microsoft technologies.
+- **Solution Assessment (SA)** – Work with partners who are vetted by the solution assessments business team to assess the technology needs of customers who are using or planning to use Microsoft technologies.
## Co-sell statuses
marketplace Marketplace Managed Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-managed-apps.md
Use the managed application offer type under the following conditions:
|Requirements |Details | ||| |An Azure subscription | Managed applications must be deployed to a customer's subscription, but they can be managed by a third party. |
-|Billing and metering | The resources are provided in a customer's Azure subscription. VMs that use the pay-as-you-go payment model are transacted with the customer via Microsoft and billed via the customer's Azure subscription. <br><br> For bring-your-own-license VMs, Microsoft bills any infrastructure costs that are incurred in the customer subscription, but you transact software licensing fees with the customer directly. |
-|An Azure-compatible virtual hard disk (VHD) | VMs must be built on Windows or Linux.<br><br>For more information about creating a Linux VHD, see [Linux distributions endorsed on Azure](../virtual-machines/linux/endorsed-distros.md).<br><br>For more information about creating a Windows VHD, see [create an Azure application offer](./create-new-azure-apps-offer.md). |
+|Billing and metering | The resources are provided in a customer's Azure subscription. Azure resources that use the pay-as-you-go payment model are transacted with the customer via Microsoft and billed via the customer's Azure subscription. <br><br> For bring-your-own-license Azure resources, Microsoft bills any infrastructure costs that are incurred in the customer subscription, but you transact software licensing fees with the customer directly. |
+|An Azure Managed Application package | The configured Azure Resource Manager template and create UI definition that will be used to deploy your application to the customer's subscription.<br><br>For more information about creating a managed application, see [Managed application overview](../azure-resource-manager/managed-applications/publish-service-catalog-app.md).|
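+For context, the equivalent service catalog definition (which uses the same package format) can be created with the Azure CLI. A hedged sketch in which every name and URI is a placeholder:
+
+```azurecli
+# Create a managed application definition from a .zip package containing
+# mainTemplate.json and createUiDefinition.json
+az managedapp definition create \
+    --name myManagedAppDefinition \
+    --resource-group myResourceGroup \
+    --location eastus \
+    --lock-level ReadOnly \
+    --display-name "My managed app" \
+    --description "Sample managed application definition" \
+    --authorizations "<principal-id>:<role-definition-id>" \
+    --package-file-uri "https://example.blob.core.windows.net/appcontainer/app.zip"
+```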
migrate Tutorial Containerize Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-containerize-aspnet-kubernetes.md
Title: Containerize ASP.NET applications and migrate to Azure Kubernetes Service
-description: Learn how to containerize ASP.NET applications and migrate to Azure Kubernetes Service.
-
+ Title: Containerize & migrate ASP.NET applications to Azure Kubernetes
+description: Tutorial: Containerize & migrate ASP.NET applications to Azure Kubernetes Service.
+
# Containerize ASP.NET applications and migrate to Azure Kubernetes Service
-In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
+In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
-The Azure Migrate: App Containerization tool currently supports -
+The Azure Migrate: App Containerization tool currently supports -
- Containerizing ASP.NET apps and deploying them on Windows containers on AKS.-- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
-The Azure Migrate: App Containerization tool helps you to -
+The Azure Migrate: App Containerization tool helps you to -
- **Discover your application**: The tool remotely connects to the application servers running your ASP.NET application and discovers the application components. The tool creates a Dockerfile that can be used to create a container image for the application. - **Build the container image**: You can inspect and further customize the Dockerfile as per your application requirements and use that to build your application container image. The application container image is pushed to an Azure Container Registry you specify.-- **Deploy to Azure Kubernetes Service**: The tool then generates the Kubernetes resource definition YAML files needed to deploy the containerized application to your Azure Kubernetes Service cluster. You can customize the YAML files and use them to deploy the application on AKS.
+- **Deploy to Azure Kubernetes Service**: The tool then generates the Kubernetes resource definition YAML files needed to deploy the containerized application to your Azure Kubernetes Service cluster. You can customize the YAML files and use them to deploy the application on AKS.
> [!NOTE]
-> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
+> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
While all applications won't benefit from a straight shift to containers without significant rearchitecting, some of the benefits of moving existing apps to containers without rewriting include: -- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
+- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
- **Simplified management:** By hosting your applications on a modern managed infrastructure platform like AKS, you can simplify your management practices while still retaining control over your infrastructure. You can achieve this by retiring or reducing the infrastructure maintenance and management processes that you'd traditionally perform with owned infrastructure.-- **Application portability:** With increased adoption and standardization of container specification formats and orchestration platforms, application portability is no longer a concern.
+- **Application portability:** With increased adoption and standardization of container specification formats and orchestration platforms, application portability is no longer a concern.
- **Adopt modern management with DevOps:** Helps you adopt and standardize on modern practices for management and security with Infrastructure as Code and transition to DevOps.
Before you begin this tutorial, you should:
**Requirement** | **Details**
--- | ---
-**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6)
-**Application servers** | Enable PowerShell remoting on the application servers: Login to the application server and Follow [these](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Window Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](https://docs.microsoft.com/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6)
-**ASP.NET application** | The tool currently supports <br/><br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later.<br/> - Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/> - Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support <br/><br/> - Applications requiring Windows authentication (AKS doesn't support gMSA currently). <br/> - Applications that depend on other Windows services hosted outside IIS.
+**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6 GB of space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and the application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**Application servers** | Enable PowerShell remoting on the application servers: Log in to the application server and follow [these](https://docs.microsoft.com/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Windows Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instructions [here](https://docs.microsoft.com/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and the application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**ASP.NET application** | The tool currently supports <br/><br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later.<br/> - Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/> - Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support <br/><br/> - Applications requiring Windows authentication (AKS doesn't support gMSA currently). <br/> - Applications that depend on other Windows services hosted outside IIS.
## Prepare an Azure user account
If you just created a free Azure account, you're the owner of your subscription.
![Search box to search for the Azure subscription.](./media/tutorial-discover-vmware/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
+2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
3. In the subscription, select **Access control (IAM)** > **Check access**.
4. In **Check access**, search for the relevant user account.
5. In **Add a role assignment**, click **Add**.
If you just created a free Azure account, you're the owner of your subscription.
## Download and install Azure Migrate: App Containerization tool

1. [Download](https://go.microsoft.com/fwlink/?linkid=2134571) the Azure Migrate: App Containerization installer on a Windows machine.
-2. Launch PowerShell in administrator mode and change the PowerShell directory to the folder containing the installer.
+2. Launch PowerShell in administrator mode and change the PowerShell directory to the folder containing the installer.
3. Run the installation script using the command:

   ```powershell
   .\AppContainerizationInstaller.ps1
   ```
-## Launch the App Containerization tool
+## Launch the App Containerization tool
1. Open a browser on any machine that can connect to the Windows machine running the App Containerization tool, and open the tool URL: **https://*machine name or IP address*:44368**.
If you just created a free Azure account, you're the owner of your subscription.
2. If you see a warning that says your connection isn't private, click **Advanced** and choose to proceed to the website. This warning appears because the web interface uses a self-signed TLS/SSL certificate.
3. At the sign-in screen, use the local administrator account on the machine to sign in.
-4. For specify application type, select **ASP.NET web apps** as the type of application you want to containerize.
+4. For the application type, select **ASP.NET web apps** as the type of application you want to containerize.
![Default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)
If you just created a free Azure account, you're the owner of your subscription.
- If you've added proxy details or disabled the proxy and/or authentication, click on **Save** to trigger the connectivity check again.
- **Install updates**: The tool will automatically check for the latest updates and install them. You can also manually install the latest version of the tool from [here](https://go.microsoft.com/fwlink/?linkid=2134571).
- **Install Microsoft Web Deploy tool**: The tool will check that the Microsoft Web Deploy tool is installed on the Windows machine running the Azure Migrate: App Containerization tool.
- - **Enable PowerShell remoting**: The tool will inform you to ensure that PowerShell remoting is enabled on the application servers running the ASP.NET applications to be containerized.
-
+ - **Enable PowerShell remoting**: The tool will inform you to ensure that PowerShell remoting is enabled on the application servers running the ASP.NET applications to be containerized.
+
## Log in to Azure
-Click **Login** to log in to your Azure account.
+Click **Login** to log in to your Azure account.
-1. You'll need a device code to authenticate with Azure. Clicking on Login will open a modal with the device code.
+1. You'll need a device code to authenticate with Azure. Clicking on Login will open a modal with the device code.
2. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.

   ![Modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)

3. On the new tab, paste the device code and complete sign-in using your Azure account credentials. You can close the browser tab after sign-in is complete and return to the App Containerization tool's web interface.
4. Select the **Azure tenant** that you want to use.
-5. Specify the **Azure subscription** that you want to use.
+5. Specify the **Azure subscription** that you want to use.
## Discover ASP.NET applications

The App Containerization helper tool connects remotely to the application servers using the provided credentials and attempts to discover ASP.NET applications hosted on the application servers.
-1. Specify the **IP address/FQDN and the credentials** of the server running the ASP.NET application that should be used to remotely connect to the server for application discovery.
- - The credentials provided must be for a local administrator (Windows) on the application server.
- - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
- - You can run application discovery for upto five servers at a time.
+1. Specify the **IP address/FQDN and the credentials** that should be used to remotely connect to the server running the ASP.NET application for application discovery.
+ - The credentials provided must be for a local administrator (Windows) on the application server.
+ - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
+ - You can run application discovery for up to five servers at a time.
2. Click **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.

   ![Screenshot for server IP and credentials.](./media/tutorial-containerize-apps-aks/discovery-credentials.png)
-3. Click **Continue** to start application discovery on the selected application servers.
+3. Click **Continue** to start application discovery on the selected application servers.
4. Upon successful completion of application discovery, you can select the list of applications to containerize.
The App Containerization helper tool connects remotely to the application server
4. Use the checkbox to select the applications to containerize.
5. **Specify container name**: Specify a name for the target container for each selected application. The container name should be specified as <*name:tag*>, where the tag is used for the container image. For example, you can specify the target container name as *appname:v1*.
-### Parameterize application configurations
+### Parameterize application configurations
Parameterizing the configuration makes it available as a deployment time parameter. This allows you to configure this setting while deploying the application as opposed to having it hard-coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
-1. Click **app configurations** to review detected configurations.
-2. Select the checkbox to parameterize the detected application configurations.
+1. Click **app configurations** to review detected configurations.
+2. Select the checkbox to parameterize the detected application configurations.
3. Click **Apply** after selecting the configurations to parameterize.

   ![Screenshot for app configuration parameterization ASP.NET application.](./media/tutorial-containerize-apps-aks/discovered-app-configs.png)
Parameterizing the configuration makes it available as a deployment time paramet
### Externalize file system dependencies

You can add other folders that your application uses. Specify if they should be part of the container image or are to be externalized through persistent volumes on an Azure file share. Using persistent volumes works well for stateful applications that store state outside the container or have other static content stored on the file system. [Learn more](https://docs.microsoft.com/azure/aks/concepts-storage)
-
-1. Click **Edit** under App Folders to review the detected application folders. The detected application folders have been identified as mandatory artifacts needed by the application and will be copied into the container image.
-
-2. Click **Add folders** and specify the folder paths to be added.
-3. To add multiple folders to the same volume, provide comma (`,`) separated values.
-4. Select **Persistent Volume** as the storage option if you want the folders to be stored outside the container on a Persistent Volume.
-5. Click **Save** after reviewing the application folders.
+
+1. Click **Edit** under App Folders to review the detected application folders. The detected application folders have been identified as mandatory artifacts needed by the application and will be copied into the container image.
+
+2. Click **Add folders** and specify the folder paths to be added.
+3. To add multiple folders to the same volume, provide comma (`,`) separated values.
+4. Select **Persistent Volume** as the storage option if you want the folders to be stored outside the container on a Persistent Volume.
+5. Click **Save** after reviewing the application folders.
![Screenshot for app volumes storage selection.](./media/tutorial-containerize-apps-aks/discovered-app-volumes.png)

6. Click **Continue** to proceed to the container image build phase.
Parameterizing the configuration makes it available as a deployment time paramet
4. **Track build status**: You can also monitor progress of the build step by clicking the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
-5. Once the build is completed, click **Continue** to specify deployment settings.
+5. Once the build is completed, click **Continue** to specify deployment settings.
![Screenshot for app container image build completion.](./media/tutorial-containerize-apps-aks/build-aspnet-app-completed.png)

## Deploy the containerized app on AKS
-Once the container image is built, the next step is to deploy the application as a container on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/).
+Once the container image is built, the next step is to deploy the application as a container on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/).
-1. **Select the Azure Kubernetes Service Cluster**: Specify the AKS cluster that the application should be deployed to.
+1. **Select the Azure Kubernetes Service Cluster**: Specify the AKS cluster that the application should be deployed to.
- - The selected AKS cluster must have a Windows node pool.
- - The cluster must be configured to allow pulling of images from the Azure Container Registry that was selected to store the images.
+ - The selected AKS cluster must have a Windows node pool.
+ - The cluster must be configured to allow pulling of images from the Azure Container Registry that was selected to store the images.
   - Run the following command in Azure CLI to attach the AKS cluster to the ACR.

     ```azurecli
     az aks update -n <cluster-name> -g <cluster-resource-group> --attach-acr <acr-name>
     ```
Once the container image is built, the next step is to deploy the application as
- The AKS cluster created using the tool will be created with a Windows node pool. The cluster will be configured to allow it to pull images from the Azure Container Registry that was created earlier (if the create new registry option was chosen).
- Click **Continue** after selecting the AKS cluster.
-2. **Specify Azure file share**: If you had added more folders and selected the Persistent Volume option, then specify the Azure file share that should be used by Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it had created.
+2. **Specify Azure file share**: If you had added more folders and selected the Persistent Volume option, then specify the Azure file share that should be used by Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it had created.
- If you don't have an Azure file share or would like to create a new Azure file share, you can choose to create one from the tool by clicking **Create new Storage Account and file share**.
Once the container image is built, the next step is to deploy the application as
- **Prefix string**: Specify a prefix string to use in the name for all resources that are created for the containerized application in the AKS cluster.
- **SSL certificate**: If your application requires an https site binding, specify the PFX file that contains the certificate to be used for the binding. The PFX file shouldn't be password protected and the original site shouldn't have multiple bindings.
- **Replica Sets**: Specify the number of application instances (pods) that should run inside the containers.
- - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
+ - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
- **Application Configuration**: For any application configurations that were parameterized, provide the values to use for the current deployment.
- **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
- Click **Apply** to save the deployment configuration.
Once the container image is built, the next step is to deploy the application as
![Screenshot for deployment app configuration.](./media/tutorial-containerize-apps-aks/deploy-aspnet-app-config.png)
-4. **Deploy the application**: Once the deployment configuration for the application is saved, the tool will generate the Kubernetes deployment YAML for the application.
- - Click **Edit** to review and customize the Kubernetes deployment YAML for the applications.
+4. **Deploy the application**: Once the deployment configuration for the application is saved, the tool will generate the Kubernetes deployment YAML for the application; an illustrative sketch of such a manifest appears after this list.
+ - Click **Edit** to review and customize the Kubernetes deployment YAML for the applications.
   - Select the application to deploy.
   - Click **Deploy** to start deployments for the selected applications.

     ![Screenshot for app deployment configuration.](./media/tutorial-containerize-apps-aks/deploy-aspnet-app-deploy.png)
- - Once the application is deployed, you can click the *Deployment status* column to track the resources that were deployed for the application.
+ - Once the application is deployed, you can click the *Deployment status* column to track the resources that were deployed for the application.
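As a rough illustration of what a deployment manifest for a containerized app contains, the following is a minimal Kubernetes Deployment. It's a hand-written sketch, not output of the tool: Kubernetes accepts JSON as well as YAML, and the resource names, image reference, and replica count below are placeholders.

```json
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "appname-deployment",
    "labels": { "app": "appname" }
  },
  "spec": {
    "replicas": 2,
    "selector": { "matchLabels": { "app": "appname" } },
    "template": {
      "metadata": { "labels": { "app": "appname" } },
      "spec": {
        "containers": [
          {
            "name": "appname",
            "image": "myregistry.azurecr.io/appname:v1",
            "ports": [ { "containerPort": 80 } ]
          }
        ]
      }
    }
  }
}
```

A manifest like this can be applied with `kubectl apply -f deployment.json`. The files the tool generates additionally reflect the load balancer type, Persistent Volume, and application configuration choices described above.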
## Download generated artifacts
A single folder is created for each application server. You can view and downloa
## Troubleshoot issues
-To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
+To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are located in the *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
## Next steps

-- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-containerize-java-kubernetes.md)
migrate Tutorial Containerize Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-containerize-java-kubernetes.md
Title: Containerize Java web apps applications and migrate to Azure Kubernetes Service
-description: Learn how to containerize Java web applications and migrate to Azure Kubernetes Service.
-
+ Title: Containerize & migrate Java web applications to Azure Kubernetes Service.
+description: Tutorial:Containerize & migrate Java web applications to Azure Kubernetes Service.
+
# Containerize Java web applications and migrate to Azure Kubernetes Service
-In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
+In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
-The Azure Migrate: App Containerization tool currently supports -
+The Azure Migrate: App Containerization tool currently supports -
-- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-containerize-aspnet-kubernetes.md) -- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS.
+- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-containerize-aspnet-kubernetes.md)
+- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS.
-The Azure Migrate: App Containerization tool helps you to -
+The Azure Migrate: App Containerization tool helps you to -
- **Discover your application**: The tool remotely connects to the application servers running your Java web application (running on Apache Tomcat) and discovers the application components. The tool creates a Dockerfile that can be used to create a container image for the application.
- **Build the container image**: You can inspect and further customize the Dockerfile as per your application requirements and use that to build your application container image. The application container image is pushed to an Azure Container Registry you specify.
-- **Deploy to Azure Kubernetes Service**: The tool then generates the Kubernetes resource definition YAML files needed to deploy the containerized application to your Azure Kubernetes Service cluster. You can customize the YAML files and use them to deploy the application on AKS.
+- **Deploy to Azure Kubernetes Service**: The tool then generates the Kubernetes resource definition YAML files needed to deploy the containerized application to your Azure Kubernetes Service cluster. You can customize the YAML files and use them to deploy the application on AKS.
> [!NOTE]
-> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
+> The Azure Migrate: App Containerization tool helps you discover specific application types (ASP.NET and Java web apps on Apache Tomcat) and their components on an application server. To discover servers and the inventory of apps, roles, and features running on on-premises machines, use Azure Migrate: Discovery and assessment capability. [Learn more](./tutorial-discover-vmware.md)
While all applications won't benefit from a straight shift to containers without significant rearchitecting, some of the benefits of moving existing apps to containers without rewriting include:
-- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
+- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
- **Simplified management:** By hosting your applications on a modern managed infrastructure platform like AKS, you can simplify your management practices while still retaining control over your infrastructure. You can achieve this by retiring or reducing the infrastructure maintenance and management processes that you'd traditionally perform with owned infrastructure.
-- **Application portability:** With increased adoption and standardization of container specification formats and orchestration platforms, application portability is no longer a concern.
+- **Application portability:** With increased adoption and standardization of container specification formats and orchestration platforms, application portability is no longer a concern.
- **Adopt modern management with DevOps:** Helps you adopt and standardize on modern practices for management and security with Infrastructure as Code and transition to DevOps.
Before you begin this tutorial, you should:
**Requirement** | **Details**
--- | ---
-**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6)
+**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the applications to be containerized.<br/><br/> Ensure that 6 GB of space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and the application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
**Application servers** | - Enable Secure Shell (SSH) connection on port 22 on the server(s) running the Java application(s) to be containerized. <br/>
**Java web application** | The tool currently supports <br/><br/> - Applications running on Tomcat 8 or later.<br/> - Application servers on Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7. <br/> - Applications using Java version 7 or later. <br/><br/> The tool currently doesn't support <br/><br/> - Application servers running multiple Tomcat instances <br/>
If you just created a free Azure account, you're the owner of your subscription.
![Search box to search for the Azure subscription.](./media/tutorial-discover-vmware/search-subscription.png)
-2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
+2. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
3. In the subscription, select **Access control (IAM)** > **Check access**.
4. In **Check access**, search for the relevant user account.
5. In **Add a role assignment**, click **Add**.
If you just created a free Azure account, you're the owner of your subscription.
## Download and install Azure Migrate: App Containerization tool

1. [Download](https://go.microsoft.com/fwlink/?linkid=2134571) the Azure Migrate: App Containerization installer on a Windows machine.
-2. Launch PowerShell in administrator mode and change the PowerShell directory to the folder containing the installer.
+2. Launch PowerShell in administrator mode and change the PowerShell directory to the folder containing the installer.
3. Run the installation script using the command:

   ```powershell
   .\AppContainerizationInstaller.ps1
   ```
-## Launch the App Containerization tool
+## Launch the App Containerization tool
1. Open a browser on any machine that can connect to the Windows machine running the App Containerization tool, and open the tool URL: **https://*machine name or IP address*:44368**.
If you just created a free Azure account, you're the owner of your subscription.
2. If you see a warning that says your connection isn't private, click **Advanced** and choose to proceed to the website. This warning appears because the web interface uses a self-signed TLS/SSL certificate.
3. At the sign-in screen, use the local administrator account on the machine to sign in.
-4. For specify application type, select **Java web apps on Tomcat** as the type of application you want to containerize.
+4. For the application type, select **Java web apps on Tomcat** as the type of application you want to containerize.
![Default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)
If you just created a free Azure account, you're the owner of your subscription.
- Only HTTP proxy is supported.
- If you've added proxy details or disabled the proxy and/or authentication, click on **Save** to trigger the connectivity check again.
- **Install updates**: The tool will automatically check for the latest updates and install them. You can also manually install the latest version of the tool from [here](https://go.microsoft.com/fwlink/?linkid=2134571).
- - **Enable Secure Shell (SSH)**: The tool will inform you to ensure that Secure Shell (SSH) is enabled on the application servers running the Java web applications to be containerized.
-
+ - **Enable Secure Shell (SSH)**: The tool will inform you to ensure that Secure Shell (SSH) is enabled on the application servers running the Java web applications to be containerized.
+
## Log in to Azure
-Click **Login** to log in to your Azure account.
+Click **Login** to log in to your Azure account.
-1. You'll need a device code to authenticate with Azure. Clicking on Login will open a modal with the device code.
+1. You'll need a device code to authenticate with Azure. Clicking on Login will open a modal with the device code.
2. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.

   ![Modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)

3. On the new tab, paste the device code and complete sign-in using your Azure account credentials. You can close the browser tab after sign-in is complete and return to the App Containerization tool's web interface.
4. Select the **Azure tenant** that you want to use.
-5. Specify the **Azure subscription** that you want to use.
+5. Specify the **Azure subscription** that you want to use.
## Discover Java web applications

The App Containerization helper tool connects remotely to the application servers using the provided credentials and attempts to discover Java web applications (running on Apache Tomcat) hosted on the application servers.
-1. Specify the **IP address/FQDN and the credentials** of the server running the Java web application that should be used to remotely connect to the server for application discovery.
- - The credentials provided must be for a root account (Linux) on the application server.
- - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
- - You can run application discovery for upto five servers at a time.
+1. Specify the **IP address/FQDN and the credentials** that should be used to remotely connect to the server running the Java web application for application discovery.
+ - The credentials provided must be for a root account (Linux) on the application server.
+ - For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
+ - You can run application discovery for up to five servers at a time.
2. Click **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.

   ![Screenshot for server IP and credentials.](./media/tutorial-containerize-apps-aks/discovery-credentials.png)
-3. Click **Continue** to start application discovery on the selected application servers.
+3. Click **Continue** to start application discovery on the selected application servers.
4. Upon successful completion of application discovery, you can select the list of applications to containerize.
The App Containerization helper tool connects remotely to the application server
4. Use the checkbox to select the applications to containerize.
5. **Specify container name**: Specify a name for the target container for each selected application. The container name should be specified as <*name:tag*>, where the tag is used for the container image. For example, you can specify the target container name as *appname:v1*.
-### Parameterize application configurations
+### Parameterize application configurations
Parameterizing the configuration makes it available as a deployment time parameter. This allows you to configure this setting while deploying the application as opposed to having it hard-coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
-1. Click **app configurations** to review detected configurations.
-2. Select the checkbox to parameterize the detected application configurations.
+1. Click **app configurations** to review detected configurations.
+2. Select the checkbox to parameterize the detected application configurations.
3. Click **Apply** after selecting the configurations to parameterize.

   ![Screenshot for app configuration parameterization ASP.NET application.](./media/tutorial-containerize-apps-aks/discovered-app-configs.png)
Parameterizing the configuration makes it available as a deployment time paramet
### Externalize file system dependencies

You can add other folders that your application uses. Specify if they should be part of the container image or are to be externalized through persistent volumes on an Azure file share. Using persistent volumes works well for stateful applications that store state outside the container or have other static content stored on the file system. [Learn more](https://docs.microsoft.com/azure/aks/concepts-storage)
-
-1. Click **Edit** under App Folders to review the detected application folders. The detected application folders have been identified as mandatory artifacts needed by the application and will be copied into the container image.
-
-2. Click **Add folders** and specify the folder paths to be added.
-3. To add multiple folders to the same volume, provide comma (`,`) separated values.
-4. Select **Persistent Volume** as the storage option if you want the folders to be stored outside the container on a Persistent Volume.
-5. Click **Save** after reviewing the application folders.
+
+1. Click **Edit** under App Folders to review the detected application folders. The detected application folders have been identified as mandatory artifacts needed by the application and will be copied into the container image.
+
+2. Click **Add folders** and specify the folder paths to be added.
+3. To add multiple folders to the same volume, provide comma (`,`) separated values.
+4. Select **Persistent Volume** as the storage option if you want the folders to be stored outside the container on a Persistent Volume.
+5. Click **Save** after reviewing the application folders.
![Screenshot for app volumes storage selection.](./media/tutorial-containerize-apps-aks/discovered-app-volumes.png)

6. Click **Continue** to proceed to the container image build phase.
Parameterizing the configuration makes it available as a deployment time paramet
4. **Track build status**: You can also monitor progress of the build step by clicking the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
-5. Once the build is completed, click **Continue** to specify deployment settings.
+5. Once the build is completed, click **Continue** to specify deployment settings.
![Screenshot for app container image build completion.](./media/tutorial-containerize-apps-aks/build-java-app-completed.png)

## Deploy the containerized app on AKS
-Once the container image is built, the next step is to deploy the application as a container on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/).
+Once the container image is built, the next step is to deploy the application as a container on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/).
-1. **Select the Azure Kubernetes Service Cluster**: Specify the AKS cluster that the application should be deployed to.
+1. **Select the Azure Kubernetes Service Cluster**: Specify the AKS cluster that the application should be deployed to.
- - The selected AKS cluster must have a Linux node pool.
- - The cluster must be configured to allow pulling of images from the Azure Container Registry that was selected to store the images.
+ - The selected AKS cluster must have a Linux node pool.
+ - The cluster must be configured to allow pulling of images from the Azure Container Registry that was selected to store the images.
   - Run the following command in Azure CLI to attach the AKS cluster to the ACR.

     ```azurecli
     az aks update -n <cluster-name> -g <cluster-resource-group> --attach-acr <acr-name>
     ```

   - If you don't have an AKS cluster or would like to create a new AKS cluster to deploy the application to, you can choose to create one from the tool by clicking **Create new AKS cluster**.
- - The AKS cluster created using the tool will be created with a Linux node pool. The cluster will be configured to allow it to pull images from the Azure Container Registry that was created earlier (if create new registry option was chosen).
+ - The AKS cluster created using the tool will be created with a Linux node pool. The cluster will be configured to allow it to pull images from the Azure Container Registry that was created earlier (if the create new registry option was chosen).
- Click **Continue** after selecting the AKS cluster.
-2. **Specify Azure file share**: If you had added more folders and selected the Persistent Volume option, then specify the Azure file share that should be used by Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it had created.
+2. **Specify Azure file share**: If you had added more folders and selected the Persistent Volume option, then specify the Azure file share that should be used by Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it had created.
   - If you don't have an Azure file share or would like to create a new Azure file share, you can choose to create one from the tool by clicking **Create new Storage Account and file share**.
3. **Application deployment configuration**: Once you've completed the steps above, you'll need to specify the deployment configuration for the application. Click **Configure** to customize the deployment for the application. In the configure step, you can provide the following customizations:
   - **Prefix string**: Specify a prefix string to use in the name for all resources that are created for the containerized application in the AKS cluster.
   - **Replica Sets**: Specify the number of application instances (pods) that should run inside the containers.
- - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
+ - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
- **Application Configuration**: For any application configurations that were parameterized, provide the values to use for the current deployment.
- - **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
+ - **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
   - Click **Apply** to save the deployment configuration.
   - Click **Continue** to deploy the application.

     ![Screenshot for deployment app configuration.](./media/tutorial-containerize-apps-aks/deploy-java-app-config.png)
-4. **Deploy the application**: Once the deployment configuration for the application is saved, the tool will generate the Kubernetes deployment YAML for the application.
- - Click **Edit** to review and customize the Kubernetes deployment YAML for the applications.
+4. **Deploy the application**: Once the deployment configuration for the application is saved, the tool will generate the Kubernetes deployment YAML for the application.
+ - Click **Edit** to review and customize the Kubernetes deployment YAML for the applications.
   - Select the application to deploy.
   - Click **Deploy** to start deployments for the selected applications.

     ![Screenshot for app deployment configuration.](./media/tutorial-containerize-apps-aks/deploy-java-app-deploy.png)
- - Once the application is deployed, you can click the *Deployment status* column to track the resources that were deployed for the application.
+ - Once the application is deployed, you can click the *Deployment status* column to track the resources that were deployed for the application.
## Download generated artifacts
-All artifacts that are used to build and deploy the application into AKS, including the Dockerfile and Kubernetes YAML specification files, are stored on the machine running the tool. The artifacts are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization*.
+All artifacts that are used to build and deploy the application into AKS, including the Dockerfile and Kubernetes YAML specification files, are stored on the machine running the tool. The artifacts are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization*.
A single folder is created for each application server. You can view and download all intermediate artifacts used in the containerization process by navigating to this folder. The folder, corresponding to the application server, will be cleaned up at the start of each run of the tool for a particular server.

## Troubleshoot issues
-To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are located at *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
+To troubleshoot any issues with the tool, you can look at the log files on the Windows machine running the App Containerization tool. Tool log files are located in the *C:\ProgramData\Microsoft Azure Migrate App Containerization\Logs* folder.
## Next steps

-- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-containerize-aspnet-kubernetes.md)
-
+- Containerizing ASP.NET apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-containerize-aspnet-kubernetes.md)
purview Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/overview.md
Discovering and understanding data sources and their use is the primary purpose
At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users.
+## In-region data residency
+Azure Purview does not move or store customer data out of the region in which it is deployed.
+
## Next steps

To get started with Azure Purview, see [Create an Azure Purview account](create-catalog-portal.md).
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/quickstart-configure-route-server-cli.md
az network vnet create -g "RouteServerRG" -n "myVirtualNetwork" --address-prefix
1. Obtain the RouteServerSubnet ID. To view the resource ID of all subnets in the virtual network, use this command:

```azurecli-interactive
- subnet_id = $(az network vnet subnet show -n "RouteServerSubnet" --vnet-name "myVirtualNetwork" -g "RouteServerRG" --query id -o tsv)
+ $subnet_id = $(az network vnet subnet show -n "RouteServerSubnet" --vnet-name "myVirtualNetwork" -g "RouteServerRG" --query id -o tsv)
```

The RouteServerSubnet ID looks like the following one:
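Assuming the resource names used earlier in this quickstart and a placeholder subscription ID, the ID has this general shape:

```
/subscriptions/<subscription-id>/resourceGroups/RouteServerRG/providers/Microsoft.Network/virtualNetworks/myVirtualNetwork/subnets/RouteServerSubnet
```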
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-image-scenarios.md
When the *imageAction* is set to a value other than "none", the new *normalized_
] ```
-## Image-related skills
+## Image related skills
There are two built-in cognitive skills that take images as an input: [OCR](cognitive-search-skill-ocr.md) and [Image Analysis](cognitive-search-skill-image-analysis.md).
As a helper, if you need to transform normalized coordinates to the original coo
    return original;
}
```
+## Passing images to custom skills
+
+For scenarios where you require a custom skill to work on images, you can pass images to the custom skill and have it return text or images. The [Python sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Image-Processing) for image processing demonstrates the workflow.
+
+The following skillset, taken from the sample, accepts the normalized image (obtained during document cracking) and outputs slices of the image.
+
+#### Sample skillset
+```json
+{
+ "description": "Extract text from images and merge with content text to produce merged_text",
+ "skills":
+ [
+ {
+ "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
+ "name": "ImageSkill",
+ "description": "Segment Images",
+ "context": "/document/normalized_images/*",
+ "uri": "https://your.custom.skill.url",
+ "httpMethod": "POST",
+ "timeout": "PT30S",
+ "batchSize": 100,
+ "degreeOfParallelism": 1,
+ "inputs": [
+ {
+ "name": "image",
+ "source": "/document/normalized_images/*"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "slices",
+ "targetName": "slices"
+ }
+ ],
+ "httpHeaders": {}
+ }
+ ]
+}
+```
+
+#### Custom skill
+
+The custom skill itself is external to the skillset. In this case, it is Python code that first loops through the batch of request records in the custom skill format, then converts the base64-encoded string to an image.
+
+```python
+import base64
+import numpy as np
+
+# deserialize the request, for each item in the batch
+for value in values:
+ data = value['data']
+ base64String = data["image"]["data"]
+ base64Bytes = base64String.encode('utf-8')
+ inputBytes = base64.b64decode(base64Bytes)
+ # Use numpy to convert the string to an image
+ jpg_as_np = np.frombuffer(inputBytes, dtype=np.uint8)
+ # you now have an image to work with
+```
+```
+
+Similarly, to return an image, return a base64-encoded string within a JSON object with a `$type` property of `file`.
+
+```python
+import base64
+import cv2
+
+def base64EncodeImage(image):
+ is_success, im_buf_arr = cv2.imencode(".jpg", image)
+ byte_im = im_buf_arr.tobytes()
+ base64Bytes = base64.b64encode(byte_im)
+ base64String = base64Bytes.decode('utf-8')
+ return base64String
+
+base64String = base64EncodeImage(jpg_as_np)
+result = {
+    "$type": "file",
+    "data": base64String
+}
+```
## See also

+ [Create indexer (REST)](/rest/api/searchservice/create-indexer)
As a helper, if you need to transform normalized coordinates to the original coo
+ [OCR skill](cognitive-search-skill-ocr.md)
+ [Text merge skill](cognitive-search-skill-textmerger.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [How to map enriched fields](cognitive-search-output-field-mapping.md)
++ [How to map enriched fields](cognitive-search-output-field-mapping.md)
++ [How to pass images to custom skills](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/Image-Processing)
search Cognitive Search Custom Skill Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-custom-skill-scale.md
+
+ Title: 'Scale and manage custom skill'
+
+description: Learn the tools and techniques for efficiently scaling out a custom skill for maximum throughput. Custom skills invoke custom AI models or logic that you can add to an AI-enriched indexing pipeline in Azure Cognitive Search.
+ Last updated : 01/28/2021
+# Efficiently scale out a custom skill
+
+Custom skills are web APIs that implement a specific interface. A custom skill can be implemented on any publicly addressable resource. The most common implementations for custom skills are:
+* Azure Functions for custom logic skills
+* Azure Web Apps for simple containerized AI skills
+* Azure Kubernetes Service for more complex or larger skills
+
+## Prerequisites
++ Review the [custom skill interface](cognitive-search-custom-skill-interface.md) for an introduction to the input/output interface that a custom skill should implement.
++ Set up your environment. You could start with [this end-to-end tutorial](/python/tutorial-vs-code-serverless-python-01) to set up a serverless Azure Function using Visual Studio Code and Python extensions.
+
+## Skillset configuration
+
+Configuring a custom skill to maximize the throughput of the indexing process requires an understanding of the skill, the indexer configuration, and how the skill relates to each document. For example, the number of times a skill is invoked per document and the expected duration per invocation.
+
+### Skill settings
+
+On the [custom skill](cognitive-search-custom-skill-web-api.md), set the following parameters.
+
+1. Set `batchSize` of the custom skill to configure the number of records sent to the skill in a single invocation of the skill.
+
+2. Set the `degreeOfParallelism` to calibrate the number of concurrent requests the indexer will make to your skill.
+
+3. Set `timeout` to a value sufficient for the skill to respond with a valid response.
+
+4. In the `indexer` definition, set [`batchSize`](https://docs.microsoft.com/rest/api/searchservice/create-indexer#indexer-parameters) to the number of documents that should be read from the data source and enriched concurrently. A combined sketch of these settings appears after this list.
+
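+Taken together, a minimal sketch of these settings might look like the following. The skill URI, input/output names, and the specific values are illustrative assumptions, not recommendations:
+
+```json
+{
+  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
+  "description": "Custom skill with throughput-related settings",
+  "uri": "https://your.custom.skill.url",
+  "batchSize": 50,
+  "degreeOfParallelism": 5,
+  "timeout": "PT90S",
+  "context": "/document",
+  "inputs": [ { "name": "text", "source": "/document/content" } ],
+  "outputs": [ { "name": "classification", "targetName": "classification" } ]
+}
+```
+
+And in the indexer definition:
+
+```json
+{
+  "name": "my-indexer",
+  "dataSourceName": "my-datasource",
+  "targetIndexName": "my-index",
+  "skillsetName": "my-skillset",
+  "parameters": { "batchSize": 5 }
+}
+```
+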
+### Considerations
+
+Setting these variables to optimize the indexer's performance requires determining whether your skill performs better with many concurrent small requests or fewer large requests. A few questions to consider are:
+
+* What is the skill invocation cardinality? Does the skill execute once for each document, for instance a document classification skill, or could the skill execute multiple times per document, as a paragraph classification skill would?
+
+* On average, how many documents are read from the data source to fill out a skill request based on the skill batch size? Ideally, this should be less than the indexer batch size. With batch sizes greater than 1, your skill can receive records from multiple source documents. For example, if the indexer batch count is 5, the skill batch count is 50, and each document generates only five records, each indexer batch yields only 5 × 5 = 25 records, so the indexer will need to fill a custom skill request across multiple indexer batches.
+
+* The average number of requests an indexer batch can generate should give you an optimal setting for the degrees of parallelism. If your infrastructure hosting the skill cannot support that level of concurrency, consider dialing down the degrees of parallelism. As a best practice, test your configuration with a few documents to validate your choices on the parameters.
+
+* When testing with a smaller sample of documents, compare the execution time of your skill to the overall time taken to process the subset of documents. Does your indexer spend more time building a batch or waiting for a response from your skill?
+
+* Consider the upstream implications of parallelism. If the input to a custom skill is an output from a prior skill, are all the skills in the skillset scaled out effectively to minimize latency?
+
+## Error handling in the custom skill
+
+Custom skills should return a success status code HTTP 200 when the skill completes successfully. If one or more records in a batch result in errors, consider returning multi-status code 207. The errors or warnings list for the record should contain the appropriate message.
+
+Any item in a batch that returns an error will cause the corresponding document to fail. If you need the document to succeed anyway, return a warning instead.
+
+Any status code over 299 is evaluated as an error: all the enrichments fail, resulting in a failed document.
+
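+As a sketch, a 207 response for a batch in which the second record failed might look like the following. The field names inside `data` and the error message are hypothetical.
+
+```json
+{
+  "values": [
+    {
+      "recordId": "0",
+      "data": { "classification": "news" },
+      "errors": [],
+      "warnings": []
+    },
+    {
+      "recordId": "1",
+      "data": {},
+      "errors": [
+        { "message": "Could not classify the record because the text was empty." }
+      ],
+      "warnings": []
+    }
+  ]
+}
+```
+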
+### Common error messages
+
+* `Could not execute skill because it did not execute within the time limit '00:00:30'. This is likely transient. Please try again later. For custom skills, consider increasing the 'timeout' parameter on your skill in the skillset.` Set the timeout parameter on the skill to allow for a longer execution duration.
+
+* `Could not execute skill because Web Api skill response is invalid.` Indicates that the skill did not return a message in the custom skill response format. This could be the result of an uncaught exception in the skill.
+
+* `Could not execute skill because the Web Api request failed.` Most likely caused by authorization errors or exceptions.
+
+* `Could not execute skill.` Commonly the result of the skill response being mapped to an existing property in the document hierarchy.
+
+## Testing custom skills
+
+Start by testing your custom skill with a REST API client to validate the following points (a sample request sketch follows this list):
+
+* The skill implements the custom skill interface for requests and responses
+
+* The skill returns valid JSON with the `application/json` MIME type
+
+* The skill returns a valid HTTP status code
+
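+For example, you might POST a small batch like the following sketch to the skill endpoint and check the response against the points above. The `data` field name is hypothetical and depends on your skill's interface.
+
+```json
+{
+  "values": [
+    {
+      "recordId": "0",
+      "data": { "text": "A short passage the skill can process." }
+    }
+  ]
+}
+```
+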
+Create a [debug session](cognitive-search-debug-session.md) to add your skill to the skillset and make sure it produces a valid enrichment. While a debug session does not allow you to tune the performance of the skill, it enables you to ensure that the skill is configured with valid values and returns the expected enriched objects.
+
+## Best practices
+
+* While skills can accept and return larger payloads, consider limiting the response to 150 MB or less when returning JSON.
+
+* Consider setting the batch size on the indexer and skill to ensure that each data source batch generates a full payload for your skill.
+
+* For long-running tasks, set the timeout to a high enough value to ensure the indexer does not error out when processing documents concurrently.
+
+* Optimize the indexer batch size, skill batch size, and skill degrees of parallelism to generate the load pattern your skill expects: either fewer large requests or many small requests.
+
+* Monitor custom skills with detailed logs of failures, because you can have scenarios where specific requests consistently fail as a result of data variability.
++
+## Next steps
+Congratulations! Your custom skill is now scaled to maximize throughput on the indexer.
+
++ [Power Skills: a repository of custom skills](https://github.com/Azure-Samples/azure-search-power-skills)
++ [Add a custom skill to an AI enrichment pipeline](cognitive-search-custom-skill-interface.md)
++ [Add an Azure Machine Learning skill](https://docs.microsoft.com/azure/search/cognitive-search-aml-skill)
++ [Use debug sessions to test changes](https://docs.microsoft.com/azure/search/cognitive-search-debug-session)
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-ranking-similarity.md
Title: Configure ranking Similarity Algorithm
+ Title: Configure the similarity algorithm
-description: How to set the similarity algorithm to try new similarity algorithm for ranking
-
+description: Learn how to enable BM25 on older search services, and how BM25 parameters can be modified to better accommodate the content of your indexes.
Previously updated : 03/02/2021 Last updated : 03/12/2021
-# Configure ranking algorithms in Azure Cognitive Search
+# Configure the similarity ranking algorithm in Azure Cognitive Search
Azure Cognitive Search supports two similarity ranking algorithms:

+ A *classic similarity* algorithm, used by all search services up until July 15, 2020.

+ An implementation of the *Okapi BM25* algorithm, used in all search services created after July 15.
-BM25 ranking is the new default because it tends to produce search rankings that align better with user expectations. It also enables configuration options for tuning results based on factors such as document size. For new services created after July 15, 2020, BM25 is used automatically and is the sole similarity algorithm. If you try to set similarity to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
+BM25 ranking is the new default because it tends to produce search rankings that align better with user expectations. It comes with [parameters](#set-bm25-parameters) for tuning results based on factors such as document size.
+
+For new services created after July 15, 2020, BM25 is used automatically and is the sole similarity algorithm. If you try to set similarity to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
-For older services created before July 15, 2020, classic similarity remains the default algorithm. Older services can set properties on a search index to invoke BM25, as explained below. If you are switching from classic to BM25, you can expect to see some differences how search results are ordered.
+For older services created before July 15, 2020, classic similarity remains the default algorithm. Older services can upgrade to BM25 on a per-index basis, as explained below. If you are switching from classic to BM25, you can expect to see some differences in how search results are ordered.
> [!NOTE]
-> Semantic search is an additional semantic re-ranking algorithm that narrows the gap between expectations and results even more. Unlike the other algorithms, it is an add-on feature that iterates over an existing result set. To use the preview semantic search algorithm, you must create a new service, and you must specify a [semantic query type](semantic-how-to-query-request.md). For more information, see [Semantic search overview](semantic-search-overview.md).
+> Semantic ranking, currently in preview for standard services in selected regions, is an additional step forward in producing more relevant results. Unlike the other algorithms, it is an add-on feature that iterates over an existing result set. For more information, see [Semantic search overview](semantic-search-overview.md) and [Semantic ranking](semantic-ranking.md).
+
+## Enable BM25 scoring on older services
-## Create a search index for BM25 scoring
+If you are running a search service that was created prior to July 15, 2020, you can enable BM25 by setting a Similarity property on new indexes. The property is only exposed on new indexes, so if you want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a new Similarity property set to "Microsoft.Azure.Search.BM25Similarity".
-If you are running a search service that was created prior to July 15, 2020, you can set the similarity property to either BM25Similarity or ClassicSimilarity in the index definition. If the similarity property is omitted or set to null, the index uses the Classic algorithm.
+Once an index exists with a Similarity property, you can switch between BM25Similarity and ClassicSimilarity.
-The similarity algorithm can only be set at index creation time. However, once an index is created with BM25, you can update the existing index to set or modify the BM25 parameters.
+The following links describe the Similarity property in the Azure SDKs.
| Client library | Similarity property | |-||
PUT https://[search service name].search.windows.net/indexes/[index name]?api-ve
} ```
-## BM25 similarity parameters
+## Set BM25 parameters
BM25 similarity adds two user customizable parameters to control the calculated relevance score. You can set BM25 parameters during index creation, or as an index update if the BM25 algorithm was specified during index creation.
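
The following sketch shows the shape of the similarity section in an index definition with both parameters set. The values are illustrative; the BM25 defaults are typically k1=1.2 and b=0.75.

```json
"similarity": {
    "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
    "k1": 1.3,
    "b": 0.5
}
```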
search Search Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-get-started-python.md
Previously updated : 01/29/2021 Last updated : 03/12/2021
To load documents, create a documents collection, using an [index action](/pytho
## 3 - Search an index
-This step shows you how to query an index using the [Search Documents (REST)](/rest/api/searchservice/search-documents).
+This step shows you how to query an index using the **search** method of the [SearchClient class](/python/api/azure-search-documents/azure.search.documents.searchclient).
-1. For this operation, use search_client. This query executes an empty search (`search=*`), returning an unranked list (search score = 1.0) of arbitrary documents. Because there are no criteria, all documents are included in results. This query prints just two of the fields in each document. It also adds `include_total_count=True` to get a count of all documents (4) in the results.
+1. The following step executes an empty search (`search=*`), returning an unranked list (search score = 1.0) of arbitrary documents. Because there are no criteria, all documents are included in results. This query prints just two of the fields in each document. It also adds `include_total_count=True` to get a count of all documents (4) in the results.
```python results = search_client.search(search_text="*", include_total_count=True)
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-answers.md
+
+ Title: Return a semantic answer
+
+description: Describes the composition of a semantic answer and how to obtain answers from a result set.
++++++ Last updated : 03/12/2021++
+# Return a semantic answer in Azure Cognitive Search
+
+> [!IMPORTANT]
+> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+
+When formulating a [semantic query](semantic-how-to-query-request.md), you can optionally extract content from the top-matching documents that "answers" the query directly. One or more answers can be included in the response, which you can then render on a search page to improve the user experience of your app.
+
+In this article, learn how to request a semantic answer, unpack the response, and find out what content characteristics are most conducive to producing high-quality answers.
+
+## Prerequisites
+
+All prerequisites that apply to [semantic queries](semantic-how-to-query-request.md) also apply to answers, including service tier and region.
+++ Queries formulated using the semantic query parameters, and include the "answers" parameter. Required parameters are discussed in this article.+++ Query strings must be formulated in language having the characteristics of a question (what, where, when, how).+++ Search documents must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in "searchFields".+
+## What is a semantic answer?
+
+A semantic answer is an artifact of a [semantic query](semantic-how-to-query-request.md). It consists of one or more verbatim passages from a search document, formulated as an answer to a query that looks like a question. For an answer to be returned, phrases or sentences must exist in a search document that have the language characteristics of an answer, and the query itself must be posed as a question.
+
+Cognitive Search uses a machine reading comprehension model to formulate answers. The model produces a set of potential answers from the available documents, and when it reaches a high enough confidence level, it will propose an answer.
+
+Answers are returned as an independent, top-level object in the query response payload that you can choose to render on search pages, alongside search results. Structurally, it's an array element of a response that includes text, a document key, and a confidence score.
+
+<a name="query-params"></a>
+
+## How to request semantic answers in a query
+
+To return a semantic answer, the query must have the semantic query type, language, search fields, and the "answers" parameter. Specifying the "answers" parameter does not guarantee that you will get an answer, but the request must include this parameter if answer processing is to be invoked at all.
+
+The "searchFields" parameter is critical to returning a high quality answer, both in terms of content and order.
+
+```json
+{
+ "search": "how do clouds form",
+ "queryType": "semantic",
+ "queryLanguage": "en-us",
+ "searchFields": "title,locations,content",
+ "answers": "extractive|count-3",
+ "count": "true"
+}
+```
+
++ A query string must not be null and should be formulated as a question. In this preview, the "queryType" and "queryLanguage" must be set exactly as shown in the example.
+
++ The "searchFields" parameter determines which fields provide tokens to the extraction model. Be sure to set this parameter. You must have at least one string field, but include any string field that you think is useful in providing an answer. Only about 8,000 tokens per document are passed into the model. Start the field list with concise fields, and then progress to text-rich fields. For precise guidance on how to set this field, see [Set searchFields](semantic-how-to-query-request.md#searchfields).
+
++ For "answers", the basic parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a count, up to a maximum of five. Whether you need more than one answer depends on the user experience of your app and on how you want to render results.
+
+## Deconstruct an answer from the response
+
+Answers are provided in the @search.answers array, which appears first in the response. If an answer is indeterminate, the response will show up as `"@search.answers": []`. When designing a search results page that includes answers, be sure to handle cases where answers are not found.
+
+Within @search.answers, the "key" is the document key or ID of the match. Given a document key, you can use the [Lookup Document](/rest/api/searchservice/lookup-document) API to retrieve any or all parts of the search document to include on the search page or a detail page.
+
+Both "text" and "highlights" provide identical content, once in plain text and once with highlights. By default, highlights are styled as `<em>`, which you can override using the existing highlightPreTag and highlightPostTag parameters. As noted elsewhere, the substance of an answer is verbatim content from a search document. The extraction model looks for characteristics of an answer to find the appropriate content, but does not compose new language in the response.
+
+The "score" is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general you will see the same documents in the top positions within each array.
+
+Answers are followed by the "value" array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. For more information about items in the response, see [Create a semantic query](semantic-how-to-query-request.md).
+
+Given the query "how do clouds form", the following answer is returned in the response:
+
+```json
+{
+ "@search.answers": [
+ {
+ "key": "4123",
+ "text": "Sunlight heats the land all day, warming that moist air and causing it to rise high into the atmosphere until it cools and condenses into water droplets. Clouds generally form where air is ascending (over land in this case), but not where it is descending (over the river).",
+ "highlights": "Sunlight heats the land all day, warming that moist air and causing it to rise high into the atmosphere until it cools and condenses into water droplets. Clouds generally form<em> where air is ascending</em> (over land in this case), but not where it is<em> descending</em> (over the river).",
+ "score": 0.94639826
+ }
+ ],
+ "value": [
+ {
+ "@search.score": 0.5479723,
+ "@search.rerankerScore": 1.0321671911515296,
+ "@search.captions": [
+ {
+          "text": "Like all clouds, it forms when the air reaches its dew point, the temperature at which an air mass is cool enough for its water vapor to condense into liquid droplets. This false-color image shows valley fog, which is common in the Pacific Northwest of North America.",
+          "highlights": "Like all<em> clouds</em>, it<em> forms</em> when the air reaches its dew point, the temperature at which an air mass is cool enough for its water vapor to condense into liquid droplets. This false-color image shows valley<em> fog</em>, which is common in the Pacific Northwest of North America."
+ }
+ ],
+ "title": "Earth Atmosphere",
+      "content": "Fog is essentially a cloud lying on the ground. Like all clouds, it forms when the air reaches its dew point, the temperature at \n\nwhich an air mass is cool enough for its water vapor to condense into liquid droplets.\n\nThis false-color image shows valley fog, which is common in the Pacific Northwest of North America. On clear winter nights, the \n\nground and overlying air cool off rapidly, especially at high elevations. Cold air is denser than warm air, and it sinks down into the \n\nvalleys. The moist air in the valleys gets chilled to its dew point, and fog forms. If undisturbed by winds, such fog may persist for \n\ndays. The Terra satellite captured this image of foggy valleys northeast of Vancouver in February 2010.\n\n\n",
+ "locations": [
+ "Pacific Northwest",
+ "North America",
+ "Vancouver"
+ ]
+    }
+  ]
+}
+```
+
+## Tips for producing high-quality answers
+
+For best results, return semantic answers on a document corpus having the following characteristics:
+
++ "searchFields" should include one or more fields that provide sufficient text in which an answer is likely to be found.
+
++ Semantic extraction and summarization have limits on how much content can be analyzed in a timely fashion. Collectively, only the first 20,000 tokens are analyzed; anything beyond that is ignored. In practical terms, if you have large documents that run into hundreds of pages, you should try to break the content up into manageable parts first.
+
++ Query strings must not be null (`search=*`), and the string should have the characteristics of a question, as opposed to a keyword search (a sequential list of arbitrary terms or phrases). If the query string does not appear to be a question, answer processing is skipped, even if the request specifies "answers" as a query parameter.
+
+## Next steps
+
++ [Semantic search overview](semantic-search-overview.md)
++ [Semantic ranking algorithm](semantic-ranking.md)
++ [Similarity algorithm](index-ranking-similarity.md)
++ [Create a semantic query](semantic-how-to-query-request.md)
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-request.md
Previously updated : 03/05/2021 Last updated : 03/12/2021 # Create a semantic query in Cognitive Search > [!IMPORTANT]
-> Semantic query type is in public preview, available through the preview REST API and Azure portal. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). During the initial preview launch, there is no charge for semantic search. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic query type is in public preview, available through the preview REST API and Azure portal. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-In this article, learn how to formulate a search request that uses semantic ranking, and produces semantic captions and answers.
+In this article, learn how to formulate a search request that uses semantic ranking. The request will return semantic captions, and optionally [semantic answers](semantic-answers.md), with highlights over the most relevant terms and phrases.
-Semantic queries tend to work best on search indexes that are built off of text-heavy content, such as PDFs or documents with large chunks of text.
+Both captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what content has the characteristics of a caption or answer, but it does not compose new sentences or phrases. For this reason, content that includes explanations or definitions works best for semantic search.
## Prerequisites
Semantic queries tend to work best on search indexes that are built off of text-
The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), [Visual Studio Code](search-get-started-vs-code.md), or code that you've modified to make REST calls to the preview APIs. You can also use [Search explorer](search-explorer.md) in Azure portal to submit a semantic query.
-+ A [Search Documents](/rest/api/searchservice/preview-api/search-documents) request with the semantic option and other parameters described in this article.
++ A [query request](/rest/api/searchservice/preview-api/search-documents) must include the semantic option and other parameters described in this article.

## What's a semantic query?

In Cognitive Search, a query is a parameterized request that determines query processing and the shape of the response. A *semantic query* adds parameters that invoke the semantic reranking model, which can assess the context and meaning of matching results, promote more relevant matches to the top, and return semantic answers and captions.
-The following request is representative of a basic semantic query (without answers).
+The following request is representative of a minimal semantic query (without answers).
```http POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview     
POST https://[service name].search.windows.net/indexes/[index name]/docs/search?
} ```
-As with all queries in Cognitive Search, the request targets the documents collection of a single index. Furthermore, a semantic query undergoes the same sequence of parsing, analysis, and scanning as a non-semantic query. The difference lies in how relevance is computed. As defined in this preview release, a semantic query is one whose *results* are re-processed using advanced algorithms, providing a way to surface the matches deemed most relevant by the semantic ranker, rather than the scores assigned by the default similarity ranking algorithm.
+As with all queries in Cognitive Search, the request targets the documents collection of a single index. Furthermore, a semantic query undergoes the same sequence of parsing, analysis, scanning, and scoring as a non-semantic query.
-Only the top 50 matches from the initial results can be semantically ranked, and all include captions in the response. Optionally, you can specify an **`answer`** parameter on the request to extract a potential answer. This model formulates up to five potential answers to the query, which you can choose to render at the top of search page.
+The difference lies in relevance and scoring. As defined in this preview release, a semantic query is one whose *results* are reranked using a semantic language model, providing a way to surface the matches deemed most relevant by the semantic ranker, rather than the scores assigned by the default similarity ranking algorithm.
-## Query using REST APIs
+Only the top 50 matches from the initial results can be semantically ranked, and all include captions in the response. Optionally, you can specify an **`answers`** parameter on the request to extract a potential answer. For more information, see [Semantic answers](semantic-answers.md).
-The full specification of the REST API can be found at [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents).
+## Query with Search explorer
+
+[Search explorer](search-explorer.md) has been updated to include options for semantic queries. These options become visible in the portal after you get access to the preview. Query options let you enable semantic queries, searchFields, and spell correction.
+
+You can also paste the required query parameters into the query string.
+
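+A sketch of such a query string, using field names from the hotels sample index and the same parameter values as the JSON examples in this article, might look like this:
+
+```
+search=hotel near the water with a great restaurant&$select=HotelId,HotelName,Description&queryType=semantic&queryLanguage=en-us&searchFields=HotelName,Description
+```
+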
-Semantic queries provide captions and highlighting automatically. If you want the response to include answers, you can add an optional **`answer`** parameter on the request. This parameter, plus the construction of the query string itself, will produce an answer in the response.
+## Query using REST
+
+Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) API to formulate the request programmatically.
+
+A response includes captions and highlighting automatically. If you want the response to include spelling correction or answers, add an optional **`speller`** or **`answers`** parameter on the request.
The following example uses the hotels-sample-index to create a semantic query request with semantic answers and captions:
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
} ```
+The following table summarizes the query parameters used in a semantic query so that you can see them holistically. For a list of all parameters, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents).
+
+| Parameter | Type | Description |
+|--|-|-|
+| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
+| queryLanguage | String | Required for semantic queries. Currently, only "en-us" is implemented. |
+| searchFields | String | A comma-delimited list of searchable fields. Optional but recommended. Specifies the fields over which semantic ranking occurs. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Step 2: Set searchFields](#searchfields). |
+| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
+| answers | String | Optional parameter that specifies whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of five. The default is one. This example shows a count of three answers: `"extractive|count-3"`. For more information, see [Return semantic answers](semantic-answers.md).|
+ ### Formulate the request This section steps through the query parameters necessary for semantic search.
This parameter is optional in that there is no error if you leave it out, but pr
The searchFields parameter is used to identify passages to be evaluated for "semantic similarity" to the query. For the preview, we do not recommend leaving searchFields blank as the model requires a hint as to what fields are the most important to process.
-The order of the searchFields is critical. If you already use searchFields in existing simple or full Lucene queries, be sure that you revisit this parameter when switching to a semantic query type.
+The order of the searchFields is critical. If you already use searchFields in existing simple or full Lucene queries, be sure that you revisit this parameter to check for field order when switching to a semantic query type.
Follow these guidelines to ensure optimum results when two or more searchFields are specified:
Follow these guidelines to ensure optimum results when two or more searchFields
+ First field should always be concise (such as a title or name), ideally under 25 words.
-+ If the index has a URL field that is textual (human readable such as `www.domain.com/name-of-the-document-and-other-details` and not machine focused such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there is no concise title field).
++ If the index has a URL field that is textual (human readable such as `www.domain.com/name-of-the-document-and-other-details`, and not machine focused such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there is no concise title field).

+ Follow those fields by descriptive fields where the answer to semantic queries may be found, such as the main content of a document.
-If only one field specified, use a descriptive fields where the answer to semantic queries may be found, such as the main content of a document. Choose a field that provides sufficient content.
+If only one field is specified, use a descriptive field where the answer to semantic queries may be found, such as the main content of a document. Choose a field that provides sufficient content. To ensure timely processing, only about 8,000 tokens of the collective contents of searchFields undergo semantic evaluation and ranking.
#### Step 3: Remove orderBy clauses Remove any orderBy clauses, if they exist in an existing request. The semantic score is used to order results, and if you include explicit sort logic, an HTTP 400 error is returned.
-#### Step 4: add answers
+#### Step 4: Add answers
-Optionally, add "answers" if you want to include additional processing that provides an answer. Answers (and captions) are formulated from passages found in fields listed in searchFields. Be sure to include content-rich fields in searchFields to get the best answers and captions in a response.
-
-There are explicit and implicit conditions that produce answers.
-
-+ Explicit conditions include adding "answers=extractive". Additionally, to specify the number of answers returned in the overall response, add "count" followed by a number: `"answers=extractive|count=3"`. The default is one. Maximum is five.
-
-+ Implicit conditions include a query string construction that lends itself to an answer. A query composed of 'what hotel has the green room' is more likely to be "answered" than a query composed of a statement like 'hotel with fancy interior'. As you might expect, the query cannot be unspecified or null.
-
-The important point to take away is that if the query doesn't look like a question, answer processing is skipped, even if the "answers" parameter is set.
+Optionally, add "answers" if you want to include additional processing that provides an answer. Answers (and captions) are extracted from passages found in fields listed in searchFields. Be sure to include content-rich fields in searchFields to get the best answers in a response. For more information, see [How to return semantic answers](semantic-answers.md).
#### Step 5: Add other parameters
Set any other parameters that you want in the request. Parameters such as [spell
Optionally, you can customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
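
For example, the following request fragment sketches how you might swap the default `<em>` style for a custom tag; the tag value shown is an arbitrary illustration:

```json
{
  "search": "how do clouds form",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "highlightPreTag": "<span style=\"background-color: yellow\">",
  "highlightPostTag": "</span>"
}
```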
-### Review the response
+## Evaluate the response
+
+As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. It includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
+
+In a semantic query, the response has additional elements: a new semantically ranked relevance score, captions in plain text and with highlights, and optionally an answer.
-Response for the above query returns the following match as the top pick. Captions are returned automatically, with plain text and highlighted versions. For more information about semantic responses, see [Semantic ranking and responses](semantic-how-to-query-response.md).
+In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This is useful when individual fields are too dense for the search results page.
+
+The response for the above example query returns the following match as the top pick. Captions are returned automatically, with plain text and highlighted versions. Answers are omitted from the example because one could not be determined for this particular query and corpus.
```json
-"@odata.count": 29,
+"@odata.count": 35,
+"@search.answers": [],
"value": [ {
- "@search.score": 1.8920634,
- "@search.rerankerScore": 1.1091284966096282,
+ "@search.score": 1.8810667,
+ "@search.rerankerScore": 1.1446577133610845,
"@search.captions": [ {
- "text": "Oceanside Resort. Budget. New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.",
- "highlights": "<strong>Oceanside Resort.</strong> Budget. New Luxury Hotel. Be the first to stay.<strong> Bay views</strong> from every room, location near the pier, rooftop pool, waterfront dining & more."
+ "text": "Oceanside Resort. Luxury. New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.",
+ "highlights": "<strong>Oceanside Resort.</strong> Luxury. New Luxury Hotel. Be the first to stay.<strong> Bay</strong> views from every room, location near the pier, rooftop pool, waterfront dining & more."
} ],
- "HotelId": "18",
"HotelName": "Oceanside Resort",
- "Description": "New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.",
- "Category": "Budget"
+ "Description": "New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.",
+ "Category": "Luxury"
}, ```
-### Parameters used in a semantic query
-
-The following table summarizes the query parameters used in a semantic query so that you can see them holistically. For a list of all parameters, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents)
-
-| Parameter | Type | Description |
-|--|-|-|
-| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
-| queryLanguage | String | Required for semantic queries. Currently, only "en-us" is implemented. |
-| searchFields | String | A comma-delimited list of searchable fields. Optional but recommended. Specifies the fields over which semantic ranking occurs. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence.|
-| answers |String | Optional field to specify whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of five. The default is one. This example shows a count of three answers: "extractive\|count3"`. |
-
-## Query with Search explorer
-
-The following query targets the built-in Hotels sample index, using API version 2020-06-30-Preview, and runs in Search explorer. The `$select` clause limits the results to just a few fields, making it easier to scan in the verbose JSON in Search explorer.
-
-### With queryType=semantic
-
-```json
-search=nice hotel on water with a great restaurant&$select=HotelId,HotelName,Description,Tags&queryType=semantic&queryLanguage=english&searchFields=Description,Tags
-```
-
-The first few results are as follows.
-
-```json
-{
- "@search.score": 0.38330218,
- "@search.rerankerScore": 0.9754053303040564,
- "HotelId": "18",
- "HotelName": "Oceanside Resort",
- "Description": "New Luxury Hotel. Be the first to stay. Bay views from every room, location near the pier, rooftop pool, waterfront dining & more.",
- "Tags": [
- "view",
- "laundry service",
- "air conditioning"
- ]
-},
-{
- "@search.score": 1.8920634,
- "@search.rerankerScore": 0.8829904259182513,
- "HotelId": "36",
- "HotelName": "Pelham Hotel",
- "Description": "Stunning Downtown Hotel with indoor Pool. Ideally located close to theatres, museums and the convention center. Indoor Pool and Sauna and fitness centre. Popular Bar & Restaurant",
- "Tags": [
- "view",
- "pool",
- "24-hour front desk service"
- ]
-},
-{
- "@search.score": 0.95706713,
- "@search.rerankerScore": 0.8538530203513801,
- "HotelId": "22",
- "HotelName": "Stone Lion Inn",
- "Description": "Full breakfast buffet for 2 for only $1. Excited to show off our room upgrades, faster high speed WiFi, updated corridors & meeting space. Come relax and enjoy your stay.",
- "Tags": [
- "laundry service",
- "air conditioning",
- "restaurant"
- ]
-},
-```
-
-### With queryType (default)
-
-For comparison, run the same query as above, removing `&queryType=semantic&queryLanguage=english&searchFields=Description,Tags`. Notice that there is no `"@search.rerankerScore"` in these results, and that different hotels appear in the top three positions.
-
-```json
-{
- "@search.score": 8.633856,
- "HotelId": "3",
- "HotelName": "Triple Landscape Hotel",
- "Description": "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the Hotel's restaurant services.",
- "Tags": [
- "air conditioning",
- "bar",
- "continental breakfast"
- ]
-},
-{
- "@search.score": 6.407289,
- "HotelId": "40",
- "HotelName": "Trails End Motel",
- "Description": "Only 8 miles from Downtown. On-site bar/restaurant, Free hot breakfast buffet, Free wireless internet, All non-smoking hotel. Only 15 miles from airport.",
- "Tags": [
- "continental breakfast",
- "view",
- "view"
- ]
-},
-{
- "@search.score": 5.843788,
- "HotelId": "14",
- "HotelName": "Twin Vertex Hotel",
- "Description": "New experience in the Making. Be the first to experience the luxury of the Twin Vertex. Reserve one of our newly-renovated guest rooms today.",
- "Tags": [
- "bar",
- "restaurant",
- "air conditioning"
- ]
-},
-```
- ## Next steps Recall that semantic ranking and responses are built over an initial result set. Any logic that improves the quality of the initial results will carry forward to semantic search. As a next step, review the features that contribute to initial results, including analyzers that affect how strings are tokenized, scoring profiles that can tune results, and the default relevance algorithm.
search Semantic How To Query Response https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-response.md
- Title: Structure a semantic response-
-description: Describes the semantic ranking algorithm in Cognitive Search and how to structure 'semantic answers' and 'semantic captions' from a result set.
------ Previously updated : 03/02/2021---
-# Semantic ranking and responses in Azure Cognitive Search
-
-> [!IMPORTANT]
-> The semantic ranking algorithm and semantic answers/captions are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Semantic ranking improves the precision of search results by reranking the top matches using a semantic ranking model trained for queries expressed in natural language as opposed to keywords.
-
-This article describes the semantic ranking algorithm and how a semantic response is shaped. A response includes captions, both in plain text and with highlights, and answers (optional).
-
-+ Semantic captions are text passages relevant to the query extracted from the search results. They can help to summarize a result when individual content fields are too large for the results page. Captions feature semantic highlights, allowing users to quickly skim query results to find the most relevant documents thus improving overall user experience.
-
-+ Semantic answers use machine learning models from Bing to formulate answers to queries that look like questions. The answers are selected from a list of passages most relevant to the query, as extracted from the top documents in the query result set. Answers are returned as an independent, top-level object in the query response payload that you can choose to render on the search pages, along side search results.
-
-## Prerequisites
-
-+ Queries formulated using the semantic query type. For more information, see [Create a semantic query](semantic-how-to-query-request.md).
-
-## Understanding a semantic response
-
-A semantic response includes new properties for scores, captions, and answers. A semantic response is built from the standard response, using the top 50 results returned by the [full text search engine](search-lucene-query-architecture.md), which are then re-ranked using the semantic ranker. If more than 50 are specified, the additional results are returned, but they won't be semantically re-ranked.
-
-As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select statement. It also includes an "answer" field and "captions".
-
-+ For each semantic result, by default, there is one "answer", returned as a distinct field that you can choose to render in a search page. You can specify up to five. Formulation of answer is automated: reading through all the documents in the initial results, running extractive summarization, followed by machine reading comprehension, and finally promoting a direct answer to the user's question in the answer field.
-
-+ A "caption" is an extraction-based summarization of document content, returned in plain text or with highlights. Captions are included automatically and cannot be suppressed. Highlights are applied using machine reading comprehension to identify which strings should be emphasized. Highlights draw your attention to the most relevant passages, so that you can quickly scan a page of results to find the right document.
-
-A semantically re-ranked result set orders results by the @search.rerankerScore value. If you add an OrderBy clause, an HTTP 400 error will be returned because that clause is not supported in a semantic query.
-
-## Example
-
-The @search.rerankerScore exists alongside the standard @search.score and is used to re-rank the results.
-
-Given the following query:
-
-```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview
-{
- "search": "newer hotel near the water with a great restaurant",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "answers": "none",
- "searchFields": "HotelName,Category,Description",
- "select": "HotelId,HotelName,Description,Category",
- "count": true
-}
-```
-
-You can expect to see the following result, representative of a semantic response. These results show just the top three responses, as ranked by "@search.rerankerScore". Notice how Oceanside Resort is now ranked first. Under the default ranking of "@search.score", this result would be returned in second place, after Trails End.
-
-```http
-{
- "@odata.count": 31,
- "@search.answers": [],
- "value": [
- {
- "@search.score": 1.8920634,
- "@search.rerankerScore": 1.1091284966096282,
- "@search.captions": [
- {
- "text": "Oceanside Resort. Budget. New Luxury Hotel. Be the first to stay. Bay views from every room, location near the piper, rooftop pool, waterfront dining & more.",
- "highlights": "<em>Oceanside Resort.</em> Budget. New Luxury Hotel. Be the first to stay.<em> Bay views</em> from every room, location near the piper, rooftop pool, waterfront dining & more."
- }
- ],
- "HotelId": "18",
- "HotelName": "Oceanside Resort",
- "Description": "New Luxury Hotel. Be the first to stay. Bay views from every room, location near the piper, rooftop pool, waterfront dining & more.",
- "Category": "Budget"
- },
- {
- "@search.score": 2.5204072,
- "@search.rerankerScore": 1.0731962407007813,
- "@search.captions": [
- {
- "text": "Trails End Motel. Luxury. Only 8 miles from Downtown. On-site bar/restaurant, Free hot breakfast buffet, Free wireless internet, All non-smoking hotel. Only 15 miles from airport.",
- "highlights": "<em>Trails End Motel.</em> Luxury. Only 8 miles from Downtown. On-site bar/restaurant, Free hot breakfast buffet, Free wireless internet, All non-smoking hotel. Only 15 miles from airport."
- }
- ],
- "HotelId": "40",
- "HotelName": "Trails End Motel",
- "Description": "Only 8 miles from Downtown. On-site bar/restaurant, Free hot breakfast buffet, Free wireless internet, All non-smoking hotel. Only 15 miles from airport.",
- "Category": "Luxury"
- },
- {
- "@search.score": 1.4104348,
- "@search.rerankerScore": 1.06992666143924,
- "@search.captions": [
- {
- "text": "Winter Panorama Resort. Resort and Spa. Newly-renovated with large rooms, free 24-hr airport shuttle & a new restaurant. Rooms/suites offer mini-fridges & 49-inch HDTVs.",
- "highlights": "<em>Winter Panorama Resort</em>. Resort and Spa. Newly-renovated with large rooms, free 24-hr airport shuttle & a new restaurant. Rooms/suites offer mini-fridges & 49-inch HDTVs."
- }
- ],
- "HotelId": "12",
- "HotelName": "Winter Panorama Resort",
- "Description": "Newly-renovated with large rooms, free 24-hr airport shuttle & a new restaurant. Rooms/suites offer mini-fridges & 49-inch HDTVs.",
- "Category": "Resort and Spa"
- }
-```
-
-## Next steps
-
-+ [Semantic search overview](semantic-search-overview.md)
-+ [Similarity algorithm](index-ranking-similarity.md)
-+ [Create a semantic query](semantic-how-to-query-request.md)
-+ [Create a basic query](search-query-create.md)
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-ranking.md
+
+ Title: Semantic ranking
+
+description: Describes the semantic ranking algorithm in Cognitive Search.
++++++ Last updated : 03/12/2021++
+# Semantic ranking in Azure Cognitive Search
+
+> [!IMPORTANT]
+> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+
+Semantic ranking is an extension of the query execution pipeline that improves precision and recall by reranking the top matches of an initial result set. Semantic ranking is backed by state-of-the-art deep machine reading comprehension models, trained on queries expressed in natural language, as opposed to linguistic matching on keywords. In contrast with the [default similarity ranking algorithm](index-ranking-similarity.md), the semantic ranker uses the context and meaning of words to determine relevance.
+
+## How semantic ranking works
+
+Semantic ranking is both resource- and time-intensive. In order to complete processing within the expected latency of a query operation, the model takes as input just the top 50 documents returned from the default [similarity ranking algorithm](index-ranking-similarity.md). Results from the initial ranking can include more than 50 matches, but only the first 50 will be reranked semantically.
+
+For semantic ranking, the model uses both machine reading comprehension and transfer learning to re-score the documents based on how well each one matches the intent of the query.
+
+1. For each document, the semantic ranker evaluates the fields in the searchFields parameter in order, consolidating the contents into one large string.
+
+1. The string is then trimmed to ensure the overall length is not more than 8,000 tokens. If you have very large documents, with a content field or merged_content field that has many pages of content, anything after the token limit is ignored.
+
+1. Each of the 50 documents is now represented by a single long string. This string is sent to the summarization model. The summarization model produces captions (and answers), using machine reading comprehension to identify passages that appear to summarize the content or answer the question. The output of the summarization model is a further reduced string, which is at most 128 tokens.
+
+1. The smaller string becomes the caption of the document, and it represents the most relevant passages found in the larger string. The set of 50 (or fewer) captions is then ranked in order of relevance.
+
+Conceptual and semantic relevance is established through vector representation and term clusters. Whereas a keyword similarity algorithm might give equal weight to any term in the query, the semantic model has been trained to recognize the interdependency and relationships among words that are otherwise unrelated on the surface. As a result, if a query string includes terms from the same cluster, a document containing both terms will rank higher than one that doesn't.
++
+## Next steps
+
+Semantic ranking is offered on Standard tiers, in specific regions. For more information and to sign up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+
+A new query type enables the relevance ranking and response structures of semantic search. [Create a semantic query](semantic-how-to-query-request.md) to get started.
+
+Alternatively, review either of the following articles for related information.
+
++ [Add spell check to query terms](speller-how-to-add.md)
++ [Return a semantic answer](semantic-answers.md)
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Previously updated : 03/05/2021 Last updated : 03/12/2021 # Semantic search in Azure Cognitive Search > [!IMPORTANT]
-> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Semantic search features are in public preview, available through the preview REST API only. Preview features are offered as-is, under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), and are not guaranteed to have the same implementation at general availability. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-Semantic search is a collection of query-related features that support a higher-quality, more natural query experience. Features include semantic reranking of search results, as well as captions and answers generation with semantic highlighting. The top 50 results returned from the [full text search engine](search-lucene-query-architecture.md) are reranked to find the most relevant matches.
+Semantic search is a collection of query-related features that support a higher-quality, more natural query experience.
-The underlying technology is from Bing and Microsoft Research, and integrated into the Cognitive Search infrastructure. For more information about the research and AI investments backing semantic search, see [How AI from Bing is powering Azure Cognitive Search (Microsoft Research Blog)](https://www.microsoft.com/research/blog/the-science-behind-semantic-search-how-ai-from-bing-is-powering-azure-cognitive-search/).
+These capabilities include a semantic reranking of search results, as well as caption and answer extraction, with semantic highlighting over relevant terms and phrases. State-of-the-art pretrained models are used for extraction and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Using those results as the document corpus, semantic ranking re-scores them based on the semantic strength of the match.
-To use semantic search in queries, you'll need to make small modifications to the search request, but no extra configuration or reindexing is required.
+The underlying technology is from Bing and Microsoft Research, and integrated into the Cognitive Search infrastructure as an add-on feature. For more information about the research and AI investments backing semantic search, see [How AI from Bing is powering Azure Cognitive Search (Microsoft Research Blog)](https://www.microsoft.com/research/blog/the-science-behind-semantic-search-how-ai-from-bing-is-powering-azure-cognitive-search/).
-Public preview features include:
+The following video provides an overview of the capabilities.
-+ Semantic ranking model that uses the context or semantic meaning to compute a relevance score
-+ Semantic captions that summarize key passages from a result for easy scanning
-+ Semantic answers to the query, if the query is a question
-+ Semantic highlights that bring focus to key phrases and terms
-+ Spell check that corrects typos before the query terms reach the search engine
+> [!VIDEO https://www.youtube.com/embed/yOf0WfVd_V0]
-## Availability and pricing
+## Components and workflow
-Semantic ranking is available through [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup), on search services created at a Standard tier (S1, S2, S3), located in one of these regions: North Central US, West US, West US 2, East US 2, North Europe, West Europe. Spell correction is available in the same regions, but has no tier restrictions. If you have an existing service that meets tier and region criteria, only sign up is required.
+Semantic search improves precision and recall with the addition of the following capabilities:
-Between preview launch on March 2 through April 1, spell correction and semantic ranking are offered free of charge. After April 1, the computational costs of running this functionality will become a billable event. The expected cost is about USD $500/month for 250,000 queries. You can find detailed cost information documented in the [Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/) and in [Estimate and manage costs](search-sku-manage-costs.md).
+| Feature | Description |
+||-|
+| [Spell check](speller-how-to-add.md) | Corrects typos before the query terms reach the search engine. |
+| [Semantic ranking](semantic-ranking.md) | Uses the context or semantic meaning to compute a new relevance score. |
+| [Semantic captions and highlights](semantic-how-to-query-request.md) | Sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. Captions that summarize a result are useful when individual content fields are too dense for the results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. |
+| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. |
+
+### Order of operations
+
+Components of semantic search extend the existing query execution pipeline in both directions. If you enable spelling correction, the [speller](speller-how-to-add.md) corrects typos at the outset, before the query terms reach the search engine.
+
-## Semantic search architecture
+Query execution proceeds as usual, with term parsing, analysis, and scans over the inverted indexes. The engine retrieves documents using token matching, and scores the results using the [default similarity scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms). Scores are calculated based on the degree of linguistic similarity between query terms and matching terms in the index. If you defined them, scoring profiles are also applied at this stage. Results are then passed to the semantic search subsystem.
-Components of semantic search are layered on top of the existing query execution pipeline. Spell correction (not shown in the diagram) improves recall by correcting typos in individual query terms. After parsing and analysis are completed, the search engine retrieves the documents that matched the query and scores them using the [default scoring algorithm](index-similarity-and-scoring.md#similarity-ranking-algorithms), either BM25 or classic, depending on when the service was created. Scoring profiles are also applied at this stage.
+In the preparation step, the document corpus returned from the initial result set is analyzed at the sentence and paragraph level to find passages that summarize each document. In contrast with keyword search, this step uses machine reading and comprehension to evaluate the content. As part of result composition, a semantic query returns captions and answers. To formulate them, semantic search uses language representation to extract and highlight key passages that best summarize a result. If the search query is a question, and answers are requested, the response will also include a text passage that best answers the question, as expressed by the search query. For both captions and answers, existing text is used in the formulation. The semantic models don't compose new sentences or phrases from the available content, nor do they apply logic to arrive at new conclusions. In short, the system will never return content that doesn't already exist.
-Having received the top 50 matches, the [semantic ranking model](semantic-how-to-query-response.md) re-evaluates the document corpus. Results can include more than 50 matches, but only the first 50 will be reranked. For ranking, the model uses both machine learning and transfer learning to re-score the documents based on how well each one matches the intent of the query.
+Results are then re-scored based on the [conceptual similarity](semantic-ranking.md) of query terms.
-To create captions and answers, semantic search uses language representation to extract and highlight key passages that best summarize a result. If the search query is a question, and answers are requested, the response will include a text passage that best answers the question, as expressed by the search query.
+To use semantic capabilities in queries, you'll need to make small modifications to the [search request](semantic-how-to-query-request.md), but no extra configuration or reindexing is required.
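+
+For example, here's a minimal sketch of a semantic request against the 2020-06-30-Preview REST API. The parameter names follow that preview API; the index name and query text are illustrative only:
+
+```http
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview
+{
+    "search": "newer hotel near the water",
+    "queryType": "semantic",
+    "queryLanguage": "en-us",
+    "speller": "lexicon",
+    "answers": "extractive|count-1"
+}
+```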
+## Availability and pricing
+
+Semantic capabilities are available through [sign-up registration](https://aka.ms/SemanticSearchPreviewSignup), on search services created at a Standard tier (S1, S2, S3), located in one of these regions: North Central US, West US, West US 2, East US 2, North Europe, West Europe.
+
+Spell correction is available in the same regions, but has no tier restrictions. If you have an existing service that meets tier and region criteria, only sign up is required.
+
+From preview launch on March 2 through April 1, spell correction and semantic ranking are offered free of charge. After April 1, the computational costs of running this functionality will become a billable event. The expected cost is about USD $500/month for 250,000 queries. You can find detailed cost information documented in the [Cognitive Search pricing page](https://azure.microsoft.com/pricing/details/search/) and in [Estimate and manage costs](search-sku-manage-costs.md).
## Next steps A new query type enables the relevance ranking and response structures of semantic search.
-[Create a semantic query](semantic-how-to-query-request.md) to get started. Or, review either of the following articles for related information.
+[Create a semantic query](semantic-how-to-query-request.md) to get started. Or, review the following articles for related information.
+ [Add spell check to query terms](speller-how-to-add.md)
-+ [Semantic ranking and responses (answers and captions)](semantic-how-to-query-response.md)
++ [Return a semantic answer](semantic-answers.md)
++ [Semantic ranking](semantic-ranking.md)
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/speller-how-to-add.md
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
The queryLanguage parameter required for speller must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema.
-+ queryLanguage determines which lexicons are used for spell check, and is also used as an input to the [semantic ranking algorithm](semantic-how-to-query-response.md) if you are using "queryType=semantic".
++ queryLanguage determines which lexicons are used for spell check, and is also used as an input to the [semantic ranking algorithm](semantic-answers.md) if you are using "queryType=semantic".
++ Language analyzers are used during indexing and query execution to find matching documents in the search index. An example of a field definition that uses a language analyzer is `"name": "Description", "type": "Edm.String", "analyzer": "en.microsoft"`.
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
Previously updated : 03/02/2021 Last updated : 03/12/2021 # What's new in Azure Cognitive Search
Learn what's new in the service. Bookmark this page to keep up to date with the
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability |
|---|---|---|
-| [Semantic search](semantic-search-overview.md) | A collection of query-related features that improve the relevance of search results with very little effort. With small changes to a search request, you can try out these features on existing indexes.</br></br>[Semantic query](semantic-how-to-query-request.md) is a new query type that leverages advancements in natural language processing to improve ranking, as well as understand query intent to provide answers, captions, and semantic highlights.</br></br>[Semantic ranking and responses (answers, captions, and highlights)](semantic-how-to-query-response.md) refer to the model that evaluates results and the ability of the model to add structure to the response. | Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). </br></br>Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview and [Search explorer](search-explorer.md) in Azure portal. </br></br>Region and tier restrictions apply. |
+| [Semantic search](semantic-search-overview.md) | A collection of query-related features that significantly improve the relevance of search results with very little effort. </br></br>[Semantic ranking](semantic-ranking.md) computes relevance scores using the semantic meaning behind words and content. </br></br>[Semantic captions](semantic-how-to-query-request.md) are relevant passages from the document that best summarize the document, with highlights over the most important terms or phrases. </br></br>[Semantic answers](semantic-answers.md) are key passages, extracted from a search document, that are formulated as a direct answer to a query that looks like a question. | Public preview ([by request](https://aka.ms/SemanticSearchPreviewSignup)). </br></br>Use [Search Documents (REST)](/rest/api/searchservice/preview-api/search-documents) api-version=2020-06-30-Preview and [Search explorer](search-explorer.md) in Azure portal. </br></br>Region and tier restrictions apply. |
| [Spell check query terms](speller-how-to-add.md) | Before query terms reach the search engine, you can have them checked for spelling errors. The `speller` option works with any query type (simple, full, or semantic). | Public preview, REST only, api-version=2020-06-30-Preview| | [SharePoint Online indexer](search-howto-index-sharepoint-online.md) | This indexer connects you to a SharePoint Online site so that you can index content from a document library. | Public preview, REST only, api-version=2020-06-30-Preview |
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
|DisableContainers|bool, default is FALSE|Static|Config for disabling containers - used instead of DisableContainerServiceStartOnContainerActivatorOpen, which is a deprecated config |
|DisableDockerRequestRetry|bool, default is FALSE |Dynamic| By default, SF communicates with DD (the docker daemon) with a timeout of 'DockerRequestTimeout' for each HTTP request sent to it. If DD does not respond within this time period, SF resends the request if the top-level operation still has remaining time. With hyperv containers, DD sometimes takes much longer to bring up or deactivate a container. In such cases the DD request times out from SF's perspective, and SF retries the operation, which can add more pressure on DD. This config allows you to disable the retry and wait for DD to respond. |
|DnsServerListTwoIps | Bool, default is FALSE | Static | This flag adds the local DNS server twice to help alleviate intermittent resolve issues. |
+| DockerTerminateOnLastHandleClosed | bool, default is FALSE | Static | By default, if FabricHost is managing 'dockerd' (based on SkipDockerProcessManagement == false), this setting configures what happens when either FabricHost or dockerd crashes. When set to `true`, if either process crashes, all running containers will be forcibly terminated by the HCS. If set to `false`, the containers will continue to keep running. Note: Prior to 8.0, this behavior was unintentionally the equivalent of `false`. The default setting of `true` is what we expect to happen by default moving forward, so that our cleanup logic is effective on restart of these processes. |
| DoNotInjectLocalDnsServer | bool, default is FALSE | Static | Prevents the runtime from injecting the local IP as the DNS server for containers. |
|EnableActivateNoWindow| bool, default is FALSE|Dynamic| The activated process is created in the background without any console. |
|EnableContainerServiceDebugMode|bool, default is TRUE|Static|Enable/disable logging for docker containers. Windows only.|
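+
+Settings like these are applied through the `fabricSettings` element of the cluster resource. A minimal sketch, assuming `DockerTerminateOnLastHandleClosed` lives in the `Hosting` section alongside the related container settings above (verify the section name for your runtime version):
+
+```json
+"fabricSettings": [
+  {
+    "name": "Hosting",
+    "parameters": [
+      {
+        "name": "DockerTerminateOnLastHandleClosed",
+        "value": "true"
+      }
+    ]
+  }
+]
+```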
static-web-apps Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/authentication-authorization.md
For example, to login with GitHub you could include a login link like the follow
If you chose to support more than one provider, then you need to expose a provider-specific link for each on your website.
-You can use a [route rule](routes.md) to map a default provider to a friendly route like _/login_.
+You can use a [route rule](./configuration.md#routes) to map a default provider to a friendly route like _/login_.
```json { "route": "/login",
- "serve": "/.auth/login/github"
+ "redirect": "/.auth/login/github"
} ```
You can use a [route rule](routes.md) to map a default provider to a friendly ro
If you want a user to return to a specific page after login, provide a URL in the `post_login_redirect_uri` query string parameter. - ## Logout The `/.auth/logout` route logs users out from the website. You can add a link to your site navigation to allow the user to log out, as shown in the following example.
The `/.auth/logout` route logs users out from the website. You can add a link to
<a href="/.auth/logout">Log out</a> ```
-You can use a [route rule](routes.md) to map a friendly route like _/logout_.
+You can use a [route rule](./configuration.md#routes) to map a friendly route like _/logout_.
```json { "route": "/logout",
- "serve": "/.auth/logout"
+ "redirect": "/.auth/logout"
} ```
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/get-started-cli.md
Now that the repository is created, you can create a static web app from the Azu
> [!IMPORTANT] > The URL passed to the `s` parameter must not include the `.git` suffix.
- - `<RESOURCE_GROUP_NAME>`: Replace this value with an existing Azure resource group name.
+ - `<RESOURCE_GROUP_NAME>`: Replace this value with an existing [Azure resource group name](../azure-resource-manager/management/manage-resources-cli.md).
+
+ - See the [az group](https://docs.microsoft.com/cli/azure/group?view=azure-cli-latest#az_group_list) documentation for details on listing resource groups.
- `<YOUR_GITHUB_ACCOUNT_NAME>`: Replace this value with your GitHub username.
static-web-apps Github Actions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/github-actions-workflow.md
The deployment always calls `npm install` before any custom command.
| Command | Description | ||-|
-| `app_build_command` | Defines a custom command to run during deployment of the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:Azure` commands. |
+| `app_build_command` | Defines a custom command to run during deployment of the static content application.<br><br>For example, to configure a production build for an Angular application create an npm script named `build-prod` to run `ng build --prod` and enter `npm run build-prod` as the custom command. If left blank, the workflow tries to run the `npm run build` or `npm run build:azure` commands. |
| `api_build_command` | Defines a custom command to run during deployment of the Azure Functions API application. | ## Route file location
storage Network File System Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/network-file-system-protocol-support.md
The following Azure Storage features aren't supported when you enable the NFS 3.
## NFS 3.0 features not yet supported
-The following NFS 3.0 features aren't yet supported with Azure Data Lake Storage Gen2.
+The following NFS 3.0 features aren't yet supported.
- NFS 3.0 over UDP. Only NFS 3.0 over TCP is supported.
The following NFS 3.0 features aren't yet supported with Azure Data Lake Storage
- Exporting a container as read-only
+## NFS 3.0 clients not yet supported
+
+The following NFS 3.0 clients aren't yet supported.
+
+- Windows client for NFS
+ ## Pricing During the preview, the data stored in your storage account is billed at the same capacity rate that blob storage charges per GB per month.
storage Storage Blob Rehydration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-rehydration.md
Previously updated : 01/08/2021 Last updated : 03/11/2021
While a blob is in the archive access tier, it's considered offline and can't be
[!INCLUDE [storage-blob-rehydration](../../../includes/storage-blob-rehydrate-include.md)]
+### Lifecycle management
+
+Rehydrating a blob doesn't change its `Last-Modified` time. Using the [lifecycle management](storage-lifecycle-management-concepts.md) feature can create a scenario where a blob is rehydrated, and then a lifecycle management policy moves the blob back to archive because the `Last-Modified` time is beyond the threshold set for the policy. To avoid this scenario, use the *[Copy an archived blob to an online tier](#copy-an-archived-blob-to-an-online-tier)* method. The copy method creates a new instance of the blob with an updated `Last-Modified` time and won't trigger the lifecycle management policy.
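+
+A minimal PowerShell sketch of the copy approach, assuming the Az.Storage module; the account, container, and blob names are hypothetical:
+
+```azurepowershell
+# Copy an archived blob to a new Hot-tier blob in the same storage account (sketch)
+$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
+Start-AzStorageBlobCopy -SrcContainer "archive" -SrcBlob "report.csv" `
+    -DestContainer "active" -DestBlob "report.csv" `
+    -StandardBlobTier Hot -RehydratePriority Standard -Context $ctx
+```
+
+Because the copy creates a new blob, its `Last-Modified` time is current, so a lifecycle policy evaluates it fresh instead of immediately re-archiving it.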
+ ## Monitor rehydration progress During rehydration, use the get blob properties operation to check the **Archive Status** attribute and confirm when the tier change is complete. The status reads "rehydrate-pending-to-hot" or "rehydrate-pending-to-cool" depending on the destination tier. Upon completion, the archive status property is removed, and the **Access Tier** blob property reflects the new hot or cool tier.
Copying a blob from archive can take hours to complete depending on the rehydrat
> [!IMPORTANT] > Do not delete the source blob until the copy is completed successfully at the destination. If the source blob is deleted, then the destination blob may not complete copying and will be empty. You may check the *x-ms-copy-status* to determine the state of the copy operation.
-Archive blobs can only be copied to online destination tiers within the same storage account. Copying an archive blob to another archive blob is not supported. The following table indicates CopyBlob's capabilities.
+Archive blobs can only be copied to online destination tiers within the same storage account. Copying an archive blob to another archive blob is not supported. The following table shows the capabilities of a **Copy Blob** operation.
| | **Hot tier source** | **Cool tier source** | **Archive tier source** |
| --- | --- | --- | --- |
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/policy-reference.md
the link in the **Version** column to view the source on the
[!INCLUDE [azure-policy-reference-service-storage](../../../includes/policy/reference/byrp/microsoft.storage.md)]
+## Microsoft.StorageCache
++
+## Microsoft.StorageSync
++ ## Microsoft.ClassicStorage [!INCLUDE [azure-policy-reference-service-storageclassic](../../../includes/policy/reference/byrp/microsoft.classicstorage.md)]
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-monitoring.md
The following table lists some example scenarios to monitor and the proper metri
For standard file shares, select the following response types:
+ - SuccessWithShareIopsThrottling
- SuccessWithThrottling
- - ClientThrottlingError
+ - ClientShareIopsThrottlingError
For premium file shares, select the following response types:
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-troubleshooting-files-performance.md
To confirm whether your share is being throttled, you can access and use Azure m
For standard file shares, the following response types are logged if a request is throttled: - SuccessWithThrottling
- - ClientThrottlingError
+ - SuccessWithShareIopsThrottling
+ - ClientShareIopsThrottlingError
For premium file shares, the following response types are logged if a request is throttled:
To confirm, you can use Azure Metrics in the portal -
For standard file shares, select the following response types: - SuccessWithThrottling
- - ClientThrottlingError
+ - SuccessWithShareIopsThrottling
+ - ClientShareIopsThrottlingError
For premium file shares, select the following response types:
stream-analytics Stream Analytics Dotnet Management Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-dotnet-management-sdk.md
Previously updated : 12/06/2018 Last updated : 3/12/2021 # Management .NET SDK: Set up and run analytics jobs using the Azure Stream Analytics API for .NET
The **TestConnection** method tests whether the Stream Analytics job is able to
// Test the connection to the input ResourceTestStatus testInputResult = streamAnalyticsManagementClient.Inputs.Test(resourceGroupName, streamingJobName, inputName); ```
+The result of the TestConnection call is a *ResourceTestStatus* object (the type shown in the snippet above) that contains two properties:
+
+- *status*: One of the following strings: "TestNotAttempted", "TestSucceeded", or "TestFailed".
+- *error*: An *ErrorResponse* object containing the following properties:
+  - *code*: A required string property. The value is the standard System.Net.HttpStatusCode received while testing.
+  - *message*: A required string property describing the error.
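+
+A short C# sketch of inspecting the result; the property casing follows typical .NET SDK conventions and should be treated as illustrative:
+
+```csharp
+// Test the input connection and report the outcome (sketch)
+ResourceTestStatus testInputResult =
+    streamAnalyticsManagementClient.Inputs.Test(resourceGroupName, streamingJobName, inputName);
+
+if (testInputResult.Status == "TestSucceeded")
+{
+    Console.WriteLine("Input connection verified.");
+}
+else
+{
+    // Error carries the HTTP status code and a message describing the failure
+    Console.WriteLine($"Status: {testInputResult.Status}, code: {testInputResult.Error?.Code}, " +
+        $"message: {testInputResult.Error?.Message}");
+}
+```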
## Create a Stream Analytics output target Creating an output target is similar to creating a Stream Analytics input source. Like input sources, output targets are tied to a specific job. To use the same output target for different jobs, you must call the method again and specify a different job name.
You've learned the basics of using a .NET SDK to create and run analytics jobs.
[stream.analytics.developer.guide]: stream-analytics-developer-guide.md [stream.analytics.scale.jobs]: stream-analytics-scale-jobs.md [stream.analytics.query.language.reference]: /stream-analytics-query/stream-analytics-query-language-reference
-[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
+[stream.analytics.rest.api.reference]: /rest/api/streamanalytics/
synapse-analytics Oracle To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/oracle-to-synapse-analytics-guide.md
+
+ Title: "Oracle to Azure Synapse Analytics: Migration guide"
+description: The following sections provide an overview of what's involved with migrating an existing Oracle database solution to Azure Synapse Analytics.
+++++ Last updated : 08/25/2020++
+# Migration guide: Migrate Oracle data warehouse to a dedicated SQL pool in Azure Synapse Analytics
+The following sections provide an overview of what's involved with migrating an existing Oracle data warehouse solution to Azure Synapse Analytics.
+
+## Overview
+Before migrating, you should verify that Azure Synapse Analytics is the best solution for your workload. Azure Synapse Analytics is a distributed system designed to perform analytics on large data. Migrating to Azure Synapse Analytics requires some design changes that are not difficult to understand but that might take some time to implement. If your business requires an enterprise-class data warehouse, the benefits are worth the effort. However, if you don't need the power of Azure Synapse Analytics, it is more cost-effective to use [SQL Server](https://docs.microsoft.com/sql/sql-server/) or [Azure SQL Database](https://docs.microsoft.com/azure/azure-sql/).
+
+Consider using Azure Synapse Analytics when you:
+- Have one or more terabytes of data.
+- Plan to run analytics on substantial amounts of data.
+- Need the ability to scale compute and storage.
+- Want to save on costs by pausing compute resources when you don't need them.
+
+Rather than Azure Synapse Analytics, consider other options for operational (OLTP) workloads that have:
+- High frequency reads and writes.
+- Large numbers of singleton selects.
+- High volumes of single row inserts.
+- Row-by-row processing needs.
+- Incompatible formats (JSON, XML).
+
+## Prerequisites
+To migrate your Oracle data warehouse to Azure Synapse Analytics, make sure you have the following prerequisites:
+
+- A data warehouse or Analytics workload
+- SSMA for Oracle to convert Oracle objects to SQL Server. See [Migrating Oracle Databases to SQL Server (OracleToSQL)](https://docs.microsoft.com/sql/ssma/oracle/migrating-oracle-databases-to-sql-server-oracletosql) for more information.
+- Latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=102787) tool to migrate SQL Server objects to Azure Synapse objects.
+- A [dedicated SQL pool](../get-started-create-workspace.md) in Azure Synapse workspace.
++
+## Pre-migration
+After you make the decision to migrate an existing solution to Azure Synapse Analytics, it is important to plan the migration before you get started. A primary goal of planning is to ensure that your data, table schemas, and code are compatible with Azure Synapse Analytics. There are some compatibility differences between your current system and SQL Data Warehouse that you will need to work around. In addition, migrating large amounts of data to Azure takes time. Careful planning will speed up the process of getting your data to Azure. Another key goal of planning is to adjust your design to ensure that your solution takes full advantage of the high query performance that Azure Synapse Analytics is designed to provide. Designing data warehouses for scale introduces unique design patterns, so traditional approaches aren't always the best. While some design adjustments can be made after migration, making changes earlier in the process will save you time later.
+
+## Azure Synapse Pathway
+One of the critical blockers customers face is translating their SQL code when migrating from one system to another. [Azure Synapse Pathway](https://docs.microsoft.com/sql/tools/synapse-pathway/azure-synapse-pathway-overview) helps you upgrade to a modern data warehouse platform by automating the code translation of your existing data warehouse. It's a free, intuitive, and easy to use tool that automates the code translation enabling a quicker migration to Azure Synapse Analytics.
+
+## Migrate
+Performing a successful migration requires you to migrate your table schemas, code, and data. For more detailed guidance on these topics, see:
+- The article [Migrate your schemas](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+- The article [Migrate your code](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+- The article [Migrate your data](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+
+## Additional resources
+- The CAT (Customer Advisory Team) has some great Azure Synapse Analytics (formerly SQL Data Warehouse) guidance published as blog postings. Be sure to take a look at their article, [Migrating data to Azure SQL Data Warehouse in practice](https://docs.microsoft.com/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice), for additional guidance on migration.
+- Check out the white paper [Choosing your database migration path to Azure](https://azure.microsoft.com/mediahandler/files/resourcefiles/choosing-your-database-migration-path-to-azure/Choosing_your_database_migration_path_to_Azure.pdf) for additional information and recommendations.
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+
+## Migration assets from real-world engagements
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| Title/link | Description |
+| | |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Handling Data Encoding Issues While Loading Data to Azure Synapse Analytics](https://azure.microsoft.com/en-us/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog is intended to provide insight on some of the data encoding issues that you may encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. |
+| [Getting table sizes in Azure Synapse Analytics SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration: collecting load times from on-premises to the cloud, collecting PolyBase load times, etc. Of these tasks, one of the most important is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. |
+| [Utility to move On-Premises SQL Server Logins to Azure Synapse Analytics](https://github.com/Microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) | A PowerShell script that creates a T-SQL command script to re-create logins and select database users from an "on premises" SQL Server to an Azure SQL PaaS service. The tool allows the automatic mapping of Windows AD accounts to Azure AD accounts, or it can do UPN lookups for each login against the on-premises Windows Active Directory. The tool optionally moves SQL Server native logins as well. Custom server and database roles are scripted, as well as role membership and database role and user permissions. Contained databases are not yet supported, and only a subset of possible SQL Server permissions are scripted; for example, permissions granted WITH GRANT OPTION are not supported (complex permission trees). More details are available in the support document, and the script has comments for ease of understanding. |
+
+> [!NOTE]
+> The above resources were developed as part of the Data Migration Jumpstart Program (DM Jumpstart), which is sponsored by the Azure Data Group engineering team. The core charter of DM Jumpstart is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the DM Jumpstart program, please contact your account team and ask that they submit a nomination.
+
+## Videos
+- For an overview of the Azure Database Migration Guide and the information it contains, see the video [How to Use the Database Migration Guide](https://azure.microsoft.com/resources/videos/how-to-use-the-azure-database-migration-guide/).
+- For a walkthrough of the phases of the migration process and detail about the specific tools and services recommended to perform assessment and migration, see the video [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
synapse-analytics Sql Server To Synapse Analytics Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/migration-guides/sql-server-to-synapse-analytics-guide.md
+
+ Title: "SQL Server to Azure Synapse Analytics: Migration guide"
+description: Follow this guide to migrate your SQL databases to Azure Synapse Analytics SQL pool.
++++++ Last updated : 03/10/2021+
+# Migration guide: SQL Server to a dedicated SQL pool in Azure Synapse Analytics
+The following sections provide an overview of what's involved with migrating an existing SQL Server data warehouse solution to Azure Synapse Analytics SQL pool.
+
+## Overview
+Before migrating, you should verify that Azure Synapse Analytics is the best solution for your workload. Azure Synapse Analytics is a distributed system designed to perform analytics on large data. Migrating to Azure Synapse Analytics requires some design changes that aren't difficult to understand but that might take some time to implement. If your business requires an enterprise-class data warehouse, the benefits are worth the effort. However, if you don't need the power of Azure Synapse Analytics, it's more cost-effective to use [SQL Server](/sql/sql-server/) or [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
+
+Consider using Azure Synapse Analytics when you:
+- Have one or more terabytes of data.
+- Plan to run analytics on substantial amounts of data.
+- Need the ability to scale compute and storage.
+- Want to save on costs by pausing compute resources when you don't need them.
+
+Rather than Azure Synapse Analytics, consider other options for operational (OLTP) workloads that have:
+- High frequency reads and writes.
+- Large numbers of singleton selects.
+- High volumes of single row inserts.
+- Row-by-row processing needs.
+- Incompatible formats (JSON, XML).
+
+## Prerequisites
+To migrate your SQL Server to Azure Synapse Analytics, make sure you have the following prerequisites:
+
+- A data warehouse or Analytics workload
+- Latest version of [Azure Synapse Pathway](https://www.microsoft.com/en-us/download/details.aspx?id=102787) tool to migrate SQL Server objects to Azure Synapse objects.
+- A [dedicated SQL pool](../get-started-create-workspace.md) in Azure Synapse workspace.
+
+## Pre-migration
+After you make the decision to migrate an existing solution to Azure Synapse Analytics, it's important to plan the migration before you get started. A primary goal of planning is to ensure that your data, table schemas, and code are compatible with Azure Synapse Analytics. There are some compatibility differences between your current system and SQL Data Warehouse that you'll need to work around. Also, migrating large amounts of data to Azure takes time. Careful planning will speed up the process of getting your data to Azure. Another key goal of planning is to adjust your design to ensure that your solution takes full advantage of the high query performance that Azure Synapse Analytics is designed to provide. Designing data warehouses for scale introduces unique design patterns, so traditional approaches aren't always the best. While some design adjustments can be made after migration, making changes earlier in the process will save you time later.
+
+## Azure Synapse Pathway
+One of the critical blockers customers face is translating their SQL code when migrating from one system to another. [Azure Synapse Pathway](/sql/tools/synapse-pathway/azure-synapse-pathway-overview) helps you upgrade to a modern data warehouse platform by automating the code translation of your existing data warehouse. It's a free, intuitive, and easy to use tool that automates the code translation enabling a quicker migration to Azure Synapse Analytics.
+
+## Migrate
+Performing a successful migration requires you to migrate your table schemas, code, and data. For more detailed guidance on these topics, see:
+- The article [Migrate your schemas](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+- The article [Migrate your code](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+- The article [Migrate your data](https://docs.microsoft.com/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-develop).
+
+## Additional resources
+- The CAT (Customer Advisory Team) has some great Azure Synapse Analytics (formerly SQL Data Warehouse) guidance published as blog postings. Be sure to take a look at their article, [Migrating data to Azure SQL Data Warehouse in practice](https://docs.microsoft.com/archive/blogs/sqlcat/migrating-data-to-azure-sql-data-warehouse-in-practice), for additional guidance on migration.
+- Check out the white paper [Choosing your database migration path to Azure](https://azure.microsoft.com/mediahandler/files/resourcefiles/choosing-your-database-migration-path-to-azure/Choosing_your_database_migration_path_to_Azure.pdf) for additional information and recommendations.
+- For a matrix of the Microsoft and third-party services and tools that are available to assist you with various database and data migration scenarios as well as specialty tasks, see the article [Service and tools for data migration](https://docs.microsoft.com/azure/dms/dms-tools-matrix).
+
+## Migration assets from real-world engagements
+For additional assistance with completing this migration scenario, please see the following resources, which were developed in support of a real-world migration project engagement.
+
+| Title/link | Description |
+| | |
+| [Data Workload Assessment Model and Tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool) | This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that greatly helps to accelerate large estate assessments by providing an automated and uniform target platform decision process. |
+| [Handling Data Encoding Issues While Loading Data to Azure Synapse Analytics](https://azure.microsoft.com/en-us/blog/handling-data-encoding-issues-while-loading-data-to-sql-data-warehouse/) | This blog is intended to provide insight on some of the data encoding issues that you may encounter while using PolyBase to load data to SQL Data Warehouse. This article also provides some options that you can use to overcome such issues and load the data successfully. |
+| [Getting table sizes in Azure Synapse Analytics SQL pool](https://github.com/Microsoft/DataMigrationTeam/blob/master/Whitepapers/Getting%20table%20sizes%20in%20SQL%20DW.pdf) | One of the key tasks that an architect must perform is to get metrics about a new environment post-migration: collecting load times from on-premises to the cloud, collecting PolyBase load times, etc. Of these tasks, one of the most important is to determine the storage size in SQL Data Warehouse compared to the customer's current platform. |
+| [Utility to move On-Premises SQL Server Logins to Azure Synapse Analytics](https://github.com/Microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) | A PowerShell script that creates a T-SQL command script to re-create logins and select database users from an "on premises" SQL Server to an Azure SQL PaaS service. The tool allows the automatic mapping of Windows AD accounts to Azure AD accounts, or it can do UPN lookups for each login against the on-premises Windows Active Directory. The tool optionally moves SQL Server native logins as well. Custom server and database roles are scripted, as well as role membership and database role and user permissions. Contained databases are not yet supported, and only a subset of possible SQL Server permissions are scripted; for example, permissions granted WITH GRANT OPTION are not supported (complex permission trees). More details are available in the support document, and the script has comments for ease of understanding. |
+
+> [!NOTE]
+> The above resources were developed as part of the Data Migration Jumpstart Program (DM Jumpstart), which is sponsored by the Azure Data Group engineering team. The core charter of DM Jumpstart is to unblock and accelerate complex modernization and competitive data platform migration opportunities to Microsoft's Azure Data platform. If you think your organization would be interested in participating in the DM Jumpstart program, please contact your account team and ask that they submit a nomination.
+
+## Videos
+- For an overview of the Azure Database Migration Guide and the information it contains, see the video [How to Use the Database Migration Guide](https://azure.microsoft.com/resources/videos/how-to-use-the-azure-database-migration-guide/).
+- For a walkthrough of the phases of the migration process and detail about the specific tools and services recommended to perform assessment and migration, see the video [Overview of the migration journey and the tools/services recommended for performing assessment and migration](https://azure.microsoft.com/resources/videos/overview-of-migration-and-recommended-tools-services/).
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
order by run_id desc
This feature enables you to manually trigger snapshots to create restore points of your data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for quick recovery time. User-defined restore points are available for seven days and are automatically deleted on your behalf. You cannot change the retention period of user-defined restore points. **42 user-defined restore points** are guaranteed at any point in time so they must be [deleted](/powershell/module/azurerm.sql/remove-azurermsqldatabaserestorepoint) before creating another restore point. You can trigger snapshots to create user-defined restore points through [PowerShell](/powershell/module/az.sql/new-azsqldatabaserestorepoint?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.jsont#examples) or the Azure portal. > [!NOTE]
-> If you require restore points longer than 7 days, please vote for this capability [here](https://feedback.azure.com/forums/307516-sql-data-warehouse/suggestions/35114410-user-defined-retention-periods-for-restore-points). You can also create a user-defined restore point and restore from the newly created restore point to a new data warehouse. Once you have restored, you have the dedicated SQL pool online and can pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Premium Storage rate. If you need an active copy of the restored data warehouse, you can resume which should take only a few minutes.
+> If you require restore points longer than 7 days, please vote for this capability [here](https://feedback.azure.com/forums/307516-sql-data-warehouse/suggestions/35114410-user-defined-retention-periods-for-restore-points). You can also create a user-defined restore point and restore from the newly created restore point to a new data warehouse. Once you have restored, you have the dedicated SQL pool online and can pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Synapse storage rate. If you need an active copy of the restored data warehouse, you can resume it, which should take only a few minutes.
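+
+For example, a minimal sketch of creating a user-defined restore point with PowerShell; the resource names are hypothetical:
+
+```azurepowershell
+# Create a restore point before a large data load (sketch)
+New-AzSqlDatabaseRestorePoint -ResourceGroupName "myResourceGroup" `
+    -ServerName "myserver" -DatabaseName "mySqlPool" `
+    -RestorePointLabel "before-nightly-load"
+```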
### Restore point retention
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-json-files.md
The query examples read *json* files containing documents with following structu
### Query JSON files using JSON_VALUE
-The query below shows you how to use [JSON_VALUE](/sql/t-sql/functions/json-value-transact-sql?view=azure-sqldw-latest&preserve-view=true) to retrieve scalar values (title, publisher) from a JSON documents:
+The query below shows you how to use [JSON_VALUE](/sql/t-sql/functions/json-value-transact-sql?view=azure-sqldw-latest&preserve-view=true) to retrieve scalar values (`date_rep`, `countries_and_territories`, `cases`) from JSON documents:
```sql select JSON_VALUE(doc, '$.date_rep') AS date_reported, JSON_VALUE(doc, '$.countries_and_territories') AS country,
+ CAST(JSON_VALUE(doc, '$.deaths') AS INT) as fatal,
JSON_VALUE(doc, '$.cases') as cases, doc from openrowset(
from openrowset(
order by JSON_VALUE(doc, '$.geo_id') desc ```
+Once you extract JSON properties from a JSON document, you can define column aliases and optionally cast the textual value to some type.
+ ### Query JSON files using OPENJSON The following query uses [OPENJSON](/sql/t-sql/functions/openjson-transact-sql?view=azure-sqldw-latest&preserve-view=true). It will retrieve COVID statistics reported in Serbia:
from openrowset(
where country = 'Serbia' order by country, date_rep desc; ```
+The results are functionally the same as the results returned using the `JSON_VALUE` function. In some cases, `OPENJSON` has advantages over `JSON_VALUE`:
+- In the `WITH` clause you can explicitly set the column aliases and the types for every property. You don't need to put the `CAST` function on every column in the `SELECT` list.
+- `OPENJSON` might be faster if you are returning a large number of properties. If you are returning just one or two properties, the `OPENJSON` function might add unnecessary overhead.
+- You must use the `OPENJSON` function if you need to parse an array from each document and join it with the parent row (see the sketch after this list).
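+
+A sketch of the `OPENJSON` pattern with an explicit `WITH` schema, assuming an external data source named `covid` (hypothetical here) pointing at the ECDC COVID-19 dataset used in this article's examples:
+
+```sql
+select rows.date_rep, rows.country, rows.cases, rows.fatal
+from openrowset(
+        bulk 'latest/ecdc_cases.jsonl',
+        data_source = 'covid',
+        format = 'csv',
+        fieldterminator = '0x0b',
+        fieldquote = '0x0b'
+    ) with (doc nvarchar(max)) as docs
+    cross apply openjson(doc)
+        with (  date_rep date,
+                country varchar(100) '$.countries_and_territories',
+                cases int,
+                fatal int '$.deaths' ) as rows
+```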
## Next steps
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Azure Alerts | No | Yes | Yes | | VM Insights | No | Yes | Yes | | Azure Backup | Yes | Yes | Yes |
-| Azure Site Recovery | No | No | Yes |
+| Azure Site Recovery | No | No | Yes |
| Add/remove existing VM to the group | No | No | No | ## Register for Flexible orchestration mode Before you can deploy virtual machine scale sets in Flexible orchestration mode, you must first register your subscription for the preview feature. The registration may take several minutes to complete. You can use the following Azure PowerShell or Azure CLI commands to register.
+### Azure Portal
+Navigate to the details page for the subscription in which you would like to create a scale set with Flexible orchestration mode, and select Preview Features from the menu. Select the two orchestrator features to enable, _VMOrchestratorSingleFD_ and _VMOrchestratorMultiFD_, and press the Register button. Feature registration can take up to 15 minutes.
+
+![Feature registration.](https://user-images.githubusercontent.com/157768/110361543-04d95880-7ff5-11eb-91a7-2e98f4112ae0.png)
+
+Once the features have been registered for your subscription, complete the opt-in process by propagating the change into the Compute resource provider. Navigate to the Resource providers tab for your subscription, select Microsoft.Compute, and click Re-register.
+
+![Re-register](https://user-images.githubusercontent.com/157768/110362176-cd1ee080-7ff5-11eb-8cc8-36aa967e267a.png)
+
+### Azure PowerShell
+
+Use the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) cmdlet to enable the preview for your subscription.
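+
+A sketch of the equivalent PowerShell commands, using the feature names from the portal steps above:
+
+```azurepowershell-interactive
+Register-AzProviderFeature -FeatureName VMOrchestratorSingleFD -ProviderNamespace Microsoft.Compute
+Register-AzProviderFeature -FeatureName VMOrchestratorMultiFD -ProviderNamespace Microsoft.Compute
+
+# Propagate the change into the Compute resource provider
+Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+```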
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nct4-v3-series.md
To take advantage of the GPU capabilities of Azure NCasT4_v3-series VMs running
To install Nvidia GPU drivers manually, see [N-series GPU driver setup for Windows](./windows/n-series-driver-setup.md) for supported operating systems, drivers, installation, and verification steps.
+The Azure Nvidia GPU driver extension deploys CUDA drivers on the NCasT4_v3-series VMs. For graphics and visualization workloads, manually install the GRID drivers supported by Azure.
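+
+A sketch of deploying the Windows GPU driver extension with PowerShell; the publisher and type names are those used by the N-series driver extension, while the resource names and handler version shown are assumptions to verify:
+
+```azurepowershell
+# Deploy the Nvidia GPU driver extension to a VM (sketch; names are hypothetical)
+Set-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myNCasT4VM" -Location "westus2" `
+    -Publisher "Microsoft.HpcCompute" -ExtensionType "NvidiaGpuDriverWindows" `
+    -Name "NvidiaGpuDriverWindows" -TypeHandlerVersion "1.3"
+```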
+ ## Other sizes - [General purpose](sizes-general.md)
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/image-builder-virtual-desktop.md
+
+ Title: Image Builder - Create a Windows Virtual Desktop image
+description: Create an Azure VM image of Windows Virtual Desktop using Azure Image Builder in PowerShell.
+++ Last updated : 01/27/2021++++++
+# Create a Windows Virtual Desktop image using Azure VM Image Builder and PowerShell
+
+This article shows you how to create a Windows Virtual Desktop image with these customizations:
+
+* Installing [FsLogix](https://github.com/DeanCefola/Azure-WVD/blob/master/PowerShell/FSLogixSetup.ps1).
+* Running a [Windows Virtual Desktop Optimization script](https://github.com/The-Virtual-Desktop-Team/Virtual-Desktop-Optimization-Tool) from the community repo.
+* Installing [Microsoft Teams](https://docs.microsoft.com/azure/virtual-desktop/teams-on-wvd).
+* [Restarting](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json&bc=%2Fazure%2Fvirtual-machines%2Fwindows%2Fbreadcrumb%2Ftoc.json#windows-restart-customizer) the VM.
+* Running [Windows Update](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-json?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json&bc=%2Fazure%2Fvirtual-machines%2Fwindows%2Fbreadcrumb%2Ftoc.json#windows-update-customizer).
+
+We will show you how to automate this using the Azure VM Image Builder, and distribute the image to a [Shared Image Gallery](https://docs.microsoft.com/azure/virtual-machines/windows/shared-image-galleries), where you can replicate to other regions, control the scale, and share the image inside and outside your organization.
++
+To simplify deploying an Image Builder configuration, this example uses an Azure Resource Manager template with the Image Builder template nested inside. This gives you some other benefits, like variables and parameter inputs. You can also pass parameters from the command line.
+
+This article is intended to be a copy and paste exercise.
+
+> [!NOTE]
+> The scripts to install the apps are located on [GitHub](https://github.com/danielsollondon/azvmimagebuilder/tree/master/solutions/14_Building_Images_WVD). They are for illustration and testing only, and not for production workloads.
+
+## Tips for building Windows images
+
+- VM Size - the default VM size is a `Standard_D1_v2`, which is not suitable for Windows. Use a `Standard_D2_v2` or greater.
+- This example uses the [PowerShell customizer scripts](../linux/image-builder-json.md). You need to use these settings or the build will hang.
+
+ ```json
+ "runElevated": true,
+ "runAsSystem": true,
+ ```
+
+ For example:
+
+ ```json
+ {
+ "type": "PowerShell",
+ "name": "installFsLogix",
+ "runElevated": true,
+ "runAsSystem": true,
+    "scriptUri": "https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/14_Building_Images_WVD/0_installConfFsLogix.ps1"
+    }
+ ```
+- Comment your code - The AIB build log (customization.log) is extremely verbose. If you comment your scripts using 'write-host', those comments are sent to the logs and make troubleshooting easier.
+
+ ```PowerShell
+ write-host 'AIB Customization: Starting OS Optimizations script'
+ ```
+
+- Exit Codes - AIB expects all scripts to return a 0 exit code; any non-zero exit code will result in AIB failing the customization and stopping the build. If you have complex scripts, add instrumentation and emit exit codes, which will be shown in the customization.log.
+
+ ```PowerShell
+ Write-Host "Exit code: " $LASTEXITCODE
+ ```
+- Test: Test your code on a standalone VM before using it in a build. Ensure there are no user prompts, that you're using the right privileges, and so on.
+
+- Networking - `Set-NetAdapterAdvancedProperty`. This is set in the optimization script but fails the AIB build because it disconnects the network, so it's commented out. It is under investigation.
+
+## Prerequisites
+
+You must have the latest Azure PowerShell cmdlets installed. See [here](https://docs.microsoft.com/powershell/azure/overview) for install details.
+
+```PowerShell
+# Register for Azure Image Builder Feature
+Register-AzProviderFeature -FeatureName VirtualMachineTemplatePreview -ProviderNamespace Microsoft.VirtualMachineImages
+
+Get-AzProviderFeature -FeatureName VirtualMachineTemplatePreview -ProviderNamespace Microsoft.VirtualMachineImages
+
+# wait until RegistrationState is set to 'Registered'
+
+# check you are registered for the providers, ensure RegistrationState is set to 'Registered'.
+Get-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
+Get-AzResourceProvider -ProviderNamespace Microsoft.Storage
+Get-AzResourceProvider -ProviderNamespace Microsoft.Compute
+Get-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+
+# If they do not show as registered, run the commented-out code below.
+
+## Register-AzResourceProvider -ProviderNamespace Microsoft.VirtualMachineImages
+## Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
+## Register-AzResourceProvider -ProviderNamespace Microsoft.Compute
+## Register-AzResourceProvider -ProviderNamespace Microsoft.KeyVault
+```
+
+## Set up environment and variables
+
+```azurepowershell-interactive
+# Step 1: Import module
+Import-Module Az.Accounts
+
+# Step 2: get existing context
+$currentAzContext = Get-AzContext
+
+# destination image resource group
+$imageResourceGroup="wvdImageDemoRg"
+
+# location (see possible locations in main docs)
+$location="westus2"
+
+# your subscription, this will get your current subscription
+$subscriptionID=$currentAzContext.Subscription.Id
+
+# image template name
+$imageTemplateName="wvd10ImageTemplate01"
+
+# distribution properties object name (runOutput), i.e. this gives you the properties of the managed image on completion
+$runOutputName="sigOutput"
+
+# create resource group
+New-AzResourceGroup -Name $imageResourceGroup -Location $location
+```
+
+## Permissions, user identity and role
++
+ Create a user identity.
+
+```azurepowershell-interactive
+# setup role def names, these need to be unique
+$timeInt=$(get-date -UFormat "%s")
+$imageRoleDefName="Azure Image Builder Image Def"+$timeInt
+$idenityName="aibIdentity"+$timeInt
+
+## Add AZ PS modules to support AzUserAssignedIdentity and Az AIB
+'Az.ImageBuilder', 'Az.ManagedServiceIdentity' | ForEach-Object {Install-Module -Name $_ -AllowPrerelease}
+
+# create identity
+New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName
+
+$idenityNameResourceId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName).Id
+$idenityNamePrincipalId=$(Get-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName).PrincipalId
+
+```
+
+Assign permissions to the identity to distribute images. The following commands download the role definition template and update it with the parameters specified earlier.
+
+```azurepowershell-interactive
+$aibRoleImageCreationUrl="https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
+$aibRoleImageCreationPath = "aibRoleImageCreation.json"
+
+# download config
+Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
+
+((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $aibRoleImageCreationPath
+((Get-Content -path $aibRoleImageCreationPath -Raw) -replace '<rgName>', $imageResourceGroup) | Set-Content -Path $aibRoleImageCreationPath
+((Get-Content -path $aibRoleImageCreationPath -Raw) -replace 'Azure Image Builder Service Image Creation Role', $imageRoleDefName) | Set-Content -Path $aibRoleImageCreationPath
+
+# create role definition
+New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
+
+# grant role definition to image builder service principal
+New-AzRoleAssignment -ObjectId $idenityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+```
+
+> [!NOTE]
+> If you see this error: 'New-AzRoleDefinition: Role definition limit exceeded. No more role definitions can be created.' see this article to resolve: https://docs.microsoft.com/azure/role-based-access-control/troubleshooting.
+++
+## Create the Shared Image Gallery
+
+If you don't already have a Shared Image Gallery, you need to create one.
+
+```azurepowershell-interactive
+$sigGalleryName= "myaibsig01"
+$imageDefName ="win10wvd"
+
+# create gallery
+New-AzGallery -GalleryName $sigGalleryName -ResourceGroupName $imageResourceGroup -Location $location
+
+# create gallery definition
+New-AzGalleryImageDefinition -GalleryName $sigGalleryName -ResourceGroupName $imageResourceGroup -Location $location -Name $imageDefName -OsState generalized -OsType Windows -Publisher 'myCo' -Offer 'Windows' -Sku '10wvd'
+
+```
+
+## Configure the Image Template
+
+For this example, we have a template ready to use. The commands below download the template and update it with the parameters specified earlier. The template installs FsLogix, applies OS optimizations, installs Microsoft Teams, and runs Windows Update at the end.
+
+If you open the template, you can see in the `source` property the image that's being used. In this example, it uses a Windows 10 multi-session image.
+
+### Windows 10 images
+There are two key image types you should be aware of: multi-session and single-session.
+
+Multi-session images are intended for pooled usage. Here is an example of the image details in Azure:
+
+```json
+"publisher": "MicrosoftWindowsDesktop",
+"offer": "Windows-10",
+"sku": "20h2-evd",
+"version": "latest"
+```
+
+Single-session images are intended for individual usage. Here is an example of the image details in Azure:
+
+```json
+"publisher": "MicrosoftWindowsDesktop",
+"offer": "Windows-10",
+"sku": "19h2-ent",
+"version": "latest"
+```
+
+You can also list the other available Windows 10 image SKUs:
+
+```azurepowershell-interactive
+Get-AzVMImageSku -Location westus2 -PublisherName MicrosoftWindowsDesktop -Offer windows-10
+```
+
+## Download template and configure
+
+Now, you need to download the template and configure it for your use.
+
+```azurepowershell-interactive
+$templateUrl="https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/14_Building_Images_WVD/armTemplateWVD.json"
+$templateFilePath = "armTemplateWVD.json"
+
+Invoke-WebRequest -Uri $templateUrl -OutFile $templateFilePath -UseBasicParsing
+
+((Get-Content -path $templateFilePath -Raw) -replace '<subscriptionID>',$subscriptionID) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<rgName>',$imageResourceGroup) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<region>',$location) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<runOutputName>',$runOutputName) | Set-Content -Path $templateFilePath
+
+((Get-Content -path $templateFilePath -Raw) -replace '<imageDefName>',$imageDefName) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<sharedImageGalName>',$sigGalleryName) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<region1>',$location) | Set-Content -Path $templateFilePath
+((Get-Content -path $templateFilePath -Raw) -replace '<imgBuilderId>',$idenityNameResourceId) | Set-Content -Path $templateFilePath
+
+```
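+
+Before you submit the template, it's worth confirming that every placeholder was replaced. The following is a minimal sketch; any match it prints back indicates a token that the replacements above missed:
+
+```azurepowershell-interactive
+# any match printed here means a placeholder token is still present
+Select-String -Path $templateFilePath -Pattern '<(subscriptionID|rgName|region1?|runOutputName|imageDefName|sharedImageGalName|imgBuilderId)>'
+```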
+
+Feel free to view the [template](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/14_Building_Images_WVD/armTemplateWVD.json); all of the code is viewable.
+
+## Submit the template
+
+Submit your template to the service. The service downloads any dependent artifacts (such as scripts), validates the template, checks permissions, and stores the artifacts in a staging resource group prefixed with *IT_*.
+
+```azurepowershell-interactive
+New-AzResourceGroupDeployment -ResourceGroupName $imageResourceGroup -TemplateFile $templateFilePath -api-version "2020-02-14" -imageTemplateName $imageTemplateName -svclocation $location
+
+# Optional - if you have any errors running the above, run:
+$getStatus=$(Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName)
+$getStatus.ProvisioningErrorCode
+$getStatus.ProvisioningErrorMessage
+```
+
+## Build the image
+```azurepowershell-interactive
+Start-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName -NoWait
+```
+
+> [!NOTE]
+> The command doesn't wait for the Image Builder service to complete the image build; you can query the status as shown below.
+
+```azurepowershell-interactive
+$getStatus=$(Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName)
+
+# this shows all the properties
+$getStatus | Format-List -Property *
+
+# these show the status of the build
+$getStatus.LastRunStatusRunState
+$getStatus.LastRunStatusMessage
+$getStatus.LastRunStatusRunSubState
+```
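+
+If you'd rather wait in the session until the build finishes, a simple polling loop over the same status properties works. This is a sketch, assuming the run state reads `Running` while the build is in progress:
+
+```azurepowershell-interactive
+do {
+    Start-Sleep -Seconds 60
+    $getStatus = Get-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
+    Write-Output "$(Get-Date -Format u) run state: $($getStatus.LastRunStatusRunState)"
+} while ($getStatus.LastRunStatusRunState -eq 'Running')
+```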
+## Create a VM
+Now that the build is finished, you can create a VM from the image. Use the examples in [New-AzVM](https://docs.microsoft.com/powershell/module/az.compute/new-azvm#examples), or see the sketch that follows.
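+
+As a minimal sketch, the following creates a VM from the image definition built above. The VM name is hypothetical, and it assumes your Az.Compute version accepts a gallery image definition resource ID for the `-Image` parameter:
+
+```azurepowershell-interactive
+$imageDefinition = Get-AzGalleryImageDefinition -GalleryName $sigGalleryName -ResourceGroupName $imageResourceGroup -Name $imageDefName
+$cred = Get-Credential
+
+# 'myWVDVM' is a hypothetical VM name used for illustration
+New-AzVM -ResourceGroupName $imageResourceGroup -Name 'myWVDVM' -Image $imageDefinition.Id -Location $location -Credential $cred
+```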
+
+## Clean up
+
+Delete the image template resource first; don't just delete the entire resource group. Otherwise, the staging resource group (*IT_*) used by AIB will not be cleaned up.
+
+Remove the Image Template.
+
+```azurepowershell-interactive
+Remove-AzImageBuilderTemplate -ResourceGroupName $imageResourceGroup -Name $imageTemplateName
+```
+
+Delete the role assignment.
+
+```azurepowershell-interactive
+Remove-AzRoleAssignment -ObjectId $idenityNamePrincipalId -RoleDefinitionName $imageRoleDefName -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+
+## remove role definition
+Remove-AzRoleDefinition -Name "$imageRoleDefName" -Force -Scope "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
+
+## delete identity
+Remove-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName -Force
+```
+
+Delete the resource group.
+
+```azurepowershell-interactive
+Remove-AzResourceGroup $imageResourceGroup -Force
+```
+
+## Next steps
+
+You can try more examples [on GitHub](https://github.com/danielsollondon/azvmimagebuilder/tree/master/quickquickstarts).
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/ipv6-overview.md
The current IPv6 for Azure virtual network release has the following limitations
- IPv6 for Azure virtual network is available in all global Azure Commercial and US Government regions using all deployment methods.
- ExpressRoute gateways CAN be used for IPv4-only traffic in a VNET with IPv6 enabled. Support for IPv6 traffic is on our roadmap.
- VPN gateways CANNOT be used in a VNET with IPv6 enabled, either directly or peered with "UseRemoteGateway".
-- The Azure platform (AKS, etc.) does not support IPv6 communication for Containers.
-- IPv6 can be load balanced only to the primary network interface (NIC) on Azure VMs. Load balancing IPv6 traffic to secondary NICs is not supported.
+- The Azure platform (AKS, etc.) does not support IPv6 communication for Containers.
- IPv6-only Virtual Machines or Virtual Machines Scale Sets are not supported; each NIC must include at least one IPv4 IP configuration.
- When adding IPv6 to existing IPv4 deployments, IPv6 ranges cannot be added to a VNET with existing resource navigation links.
- Forward DNS for IPv6 is supported for Azure public DNS today, but Reverse DNS is not yet supported.
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/public-ip-address-prefix.md
You can associate the following resources to a static public IP address from a p
|Virtual machines| Associating public IPs from a prefix to your virtual machines in Azure reduces management overhead when adding IP addresses to an allow list in the firewall. You can add an entire prefix with a single firewall rule. As you scale with virtual machines in Azure, you can associate IPs from the same prefix, saving cost, time, and management overhead.| To associate IPs from a prefix to your virtual machine: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. [Associate the IP to your virtual machine's network interface.](virtual-network-network-interface-addresses.md#add-ip-addresses) </br> You can also [associate the IPs to a Virtual Machine Scale Set](https://azure.microsoft.com/resources/templates/101-vmms-with-public-ip-prefix/). |
| Standard load balancers | Associating public IPs from a prefix to your frontend IP configuration or outbound rule of a load balancer ensures simplification of your Azure public IP address space. Simplify your scenario by grooming outbound connections from a range of contiguous IP addresses. | To associate IPs from a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When creating the load balancer, select or update the IP created in step 2 above as the frontend IP of your load balancer. |
| Azure Firewall | You can use a public IP from a prefix for outbound SNAT. All outbound virtual network traffic is translated to the [Azure Firewall](../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) public IP. | To associate an IP from a prefix to your firewall: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Azure firewall](../firewall/tutorial-firewall-deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-the-firewall), be sure to select the IP you previously created from the prefix.|
-| Application Gateway v2 | You can use a public IP from a prefix for your autoscaling and zone-redundant Application gateway v2. | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Application Gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway), be sure to select the IP you previously gave from the prefix.|
+| VPN Gateway (AZ SKU) or Application Gateway v2 | You can use a public IP from a prefix for your zone-redundant VPN or Application gateway v2. | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](https://docs.microsoft.com/azure/vpn-gateway/tutorial-create-gateway-portal) or [Application Gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway), be sure to select the IP you previously created from the prefix.|
## Constraints
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/public-ip-addresses.md
Standard SKU public IP addresses:
- Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.
- Secure by default and closed to inbound traffic. Allow list inbound traffic with a [network security group](./network-security-groups-overview.md#network-security-groups).
- Assigned to network interfaces, standard public load balancers, or Application Gateways. For more information about Standard load balancer, see [Azure Standard Load Balancer](../load-balancer/load-balancer-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
-- Can be zone-redundant (advertized from all 3 zones), zonal (guaranteed in a specific pre-selected availability zone), or no-zone (not associated with a specific pre-selected availability zone). To learn more about availability zones, see [Availability zones overview](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md?toc=%2fazure%2fvirtual-network%2ftoc.json). **Zone redundant IPs can only be created in [regions where 3 availability zones](../availability-zones/az-region.md) are live.** IPs created before zones are live will not be zone redundant.
+- Can be zone-redundant (advertised from all 3 zones), zonal (guaranteed in a specific pre-selected availability zone), or no-zone (not associated with a specific pre-selected availability zone). To learn more about availability zones, see [Availability zones overview](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md?toc=%2fazure%2fvirtual-network%2ftoc.json). **Zone redundant IPs can only be created in [regions where 3 availability zones](../availability-zones/az-region.md) are live.** IPs created before zones are live will not be zone redundant.
- Can be used as anycast frontend IPs for [cross-region load balancers](../load-balancer/cross-region-overview.md) (preview functionality).

> [!NOTE]
For more information about Azure load balancer SKUs, see [Azure load balancer st
* Azure virtual networks
* On-premises network(s).
-A public IP address is assigned to the VPN Gateway to enable communication with the remote network. You can only assign a *dynamic* basic public IP address to a VPN gateway.
+A public IP address is assigned to the VPN Gateway to enable communication with the remote network.
+
+* Assign a **dynamic** basic public IP to a VPNGw 1-5 SKU front-end configuration.
+* Assign a **static** standard public IP address to a VPNGwAZ 1-5 SKU front-end configuration (see the sketch below).
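+
+As a minimal sketch (the names here are hypothetical), such an IP can be created with `New-AzPublicIpAddress`:
+
+```azurepowershell-interactive
+# hypothetical names; an AZ gateway SKU requires a Standard SKU, statically allocated public IP
+New-AzPublicIpAddress -Name 'myGwPip' -ResourceGroupName 'myRG' -Location 'eastus2' -Sku Standard -AllocationMethod Static
+```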
## Application gateways

You can associate a public IP address with an Azure [Application Gateway](../application-gateway/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) by assigning it to the gateway's **frontend** configuration.

* Assign a **dynamic** basic public IP to an application gateway V1 front-end configuration.
-* Assign a **static** standard SKU address to a V2 front-end configuration.
+* Assign a **static** standard public IP address to a V2 front-end configuration.
## Azure Firewall
virtual-network Virtual Network Network Interface Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-network-interface-addresses.md
You can assign zero or one private [IPv6](#ipv6) address to one secondary IP con
> [!NOTE]
> Though you can create a network interface with an IPv6 address using the portal, you can't add an existing network interface to a new or existing virtual machine using the portal. Use PowerShell or the Azure CLI to create a network interface with a private IPv6 address, then attach the network interface when creating a virtual machine. You cannot attach a network interface with a private IPv6 address assigned to it to an existing virtual machine. You cannot add a private IPv6 address to an IP configuration for any network interface attached to a virtual machine using any tools (portal, CLI, or PowerShell).
-You can't assign a public IPv6 address to a primary or secondary IP configuration.
-
## SKUs

A public IP address is created with the basic or standard SKU. For more information about SKU differences, see [Manage public IP addresses](virtual-network-public-ip-address.md).