Updates from: 05/04/2021 03:06:03
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
Previously updated : 04/28/2021 Last updated : 05/03/2021
As a developer or IT administrator, you can use API connectors to integrate your
::: zone pivot="b2c-user-flow"
-In this scenario, the REST API validates whether email address' domain is fabrikam.com, or fabricam.com. The user-provided job title is greater than five characters.
+In this scenario, the REST API validates whether the domain of the email address is fabrikam.com or fabricam.com, and whether the user-provided display name is longer than five characters. It then returns the job title with a static value.
> [!IMPORTANT] > API connectors for sign-up is a public preview feature of Azure AD B2C. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In general, it's helpful to use the logging tools enabled by your web API servic
- [Secure your API Connector](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-change-policy.md
Previously updated : 03/22/2021 Last updated : 05/03/2021 zone_pivot_groups: b2c-policy-type
In Azure Active Directory B2C (Azure AD B2C), you can enable users who are signe
1. Open the policy that you changed. For example, *B2C_1A_profile_edit_password_change*. 2. For **Application**, select your application that you previously registered. To see the token, the **Reply URL** should show `https://jwt.ms`.
-3. Click **Run now**. Sign in with the account that you previously created. You should now have the opportunity to change the password.
+3. Click **Run now**. In the new tab that opens, remove "&prompt=login" from the URL and refresh the tab. Then sign in with the account you previously created. You will now have the opportunity to change the password.
## Next steps
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-biocatch.md
Contact [BioCatch](https://www.biocatch.com/contact-us) and create an account.
It's recommended to hide the client session ID field. Use CSS, JavaScript, or any other method to hide the field. For testing purposes, you may unhide the field. For example, the following JavaScript hides the input field:
-```
+```JavaScript
document.getElementById("clientSessionId").style.display = 'none'; ```
document.getElementById("clientSessionId").style.display = 'none';
2. Create a new file, which inherits from the extensions file.
- ```
+ ```XML
<BasePolicy> <TenantId>tenant.onmicrosoft.com</TenantId>
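For orientation, a new policy file that inherits from the extensions file typically follows the shape sketched below; the tenant name and policy IDs are placeholders and must match your own tenant and extensions policy.

```XML
<TrustFrameworkPolicy
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
  xmlns="http://schemas.microsoft.com/online/cpim/schemas/2013/06"
  PolicySchemaVersion="0.3.0.0"
  TenantId="tenant.onmicrosoft.com"
  PolicyId="B2C_1A_TrustFrameworkExtensions_BioCatch"
  PublicPolicyUri="http://tenant.onmicrosoft.com/B2C_1A_TrustFrameworkExtensions_BioCatch">

  <!-- Inherit everything from the existing extensions policy; only overrides go in this file. -->
  <BasePolicy>
    <TenantId>tenant.onmicrosoft.com</TenantId>
    <PolicyId>B2C_1A_TrustFrameworkExtensions</PolicyId>
  </BasePolicy>

  <BuildingBlocks>
    <!-- Content definitions, claim types, and claims providers from the later steps go here. -->
  </BuildingBlocks>
</TrustFrameworkPolicy>
```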
document.getElementById("clientSessionId").style.display = 'none';
3. Create a reference to the custom UI to hide the input box, under the BuildingBlocks resource.
- ```
+ ```XML
<ContentDefinitions> <ContentDefinition Id="api.selfasserted">
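A minimal sketch of that content definition override, assuming the customized page (which includes the JavaScript shown earlier) is hosted at an HTTPS URL you control; the LoadUri value is a placeholder.

```XML
<ContentDefinitions>
  <ContentDefinition Id="api.selfasserted">
    <!-- Placeholder URL: point LoadUri at your customized self-asserted page that hides the clientSessionId input. -->
    <LoadUri>https://yourstorageaccount.blob.core.windows.net/b2c/selfasserted-hide-clientsessionid.html</LoadUri>
  </ContentDefinition>
</ContentDefinitions>
```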
document.getElementById("clientSessionId").style.display = 'none';
4. Add the following claims under the BuildingBlocks resource.
- ```
+ ```XML
<ClaimsSchema> <ClaimType Id="riskLevel">
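A sketch of the claim types involved; the article names *riskLevel*, *score*, and the *clientSessionId* field, while the display names and data types shown here are assumptions.

```XML
<ClaimsSchema>
  <ClaimType Id="clientSessionId">
    <DisplayName>Client session ID</DisplayName>
    <DataType>string</DataType>
  </ClaimType>
  <ClaimType Id="riskLevel">
    <DisplayName>Risk level returned by BioCatch</DisplayName>
    <DataType>string</DataType>
  </ClaimType>
  <ClaimType Id="score">
    <DisplayName>Risk score returned by BioCatch</DisplayName>
    <DataType>int</DataType>
  </ClaimType>
</ClaimsSchema>
```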
document.getElementById("clientSessionId").style.display = 'none';
5. Configure self-asserted claims provider for the client session ID field.
- ```
+ ```XML
<ClaimsProvider> <DisplayName>Client Session ID Claims Provider</DisplayName>
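A minimal self-asserted technical profile of this kind could look like the following sketch; the profile ID is hypothetical, and it reuses the *api.selfasserted* content definition customized earlier.

```XML
<ClaimsProvider>
  <DisplayName>Client Session ID Claims Provider</DisplayName>
  <TechnicalProfiles>
    <!-- Hypothetical profile ID; collects the hidden clientSessionId field on the self-asserted page. -->
    <TechnicalProfile Id="SelfAsserted-ClientSessionId">
      <DisplayName>Collect the client session ID</DisplayName>
      <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
      <Metadata>
        <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
      </Metadata>
      <OutputClaims>
        <OutputClaim ClaimTypeReferenceId="clientSessionId" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```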
document.getElementById("clientSessionId").style.display = 'none';
6. Configure REST API claims provider for BioCatch.
- ```
+ ```XML
<TechnicalProfile Id="BioCatch-API-GETSCORE"> <DisplayName>Technical profile for BioCatch API to return session information</DisplayName>
document.getElementById("clientSessionId").style.display = 'none';
1. If the returned claim *risk* equals *low*, skip the MFA step; otherwise, force the user to complete MFA.
- ```
+ ```XML
<OrchestrationStep Order="8" Type="ClaimsExchange"> <ClaimsExchanges>
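One way to express that logic is a precondition that skips the MFA orchestration step when the risk claim is low, as in the sketch below; the claim name, value, and claims-exchange reference are assumptions and must match your own policy.

```XML
<OrchestrationStep Order="8" Type="ClaimsExchange">
  <Preconditions>
    <!-- Skip this MFA step when the BioCatch risk claim comes back low (claim name and value are assumptions). -->
    <Precondition Type="ClaimEquals" ExecuteActionsIf="true">
      <Value>riskLevel</Value>
      <Value>LOW</Value>
      <Action>SkipThisOrchestrationStep</Action>
    </Precondition>
  </Preconditions>
  <ClaimsExchanges>
    <ClaimsExchange Id="PhoneFactor-Verify" TechnicalProfileReferenceId="PhoneFactor-InputOrVerify" />
  </ClaimsExchanges>
</OrchestrationStep>
```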
document.getElementById("clientSessionId").style.display = 'none';
It is useful to pass the information returned by BioCatch to your application as claims in the token, specifically *riskLevel* and *score*.
- ```
+ ```XML
<RelyingParty> <DefaultUserJourney ReferenceId="SignUpOrSignInMfa" />
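A sketch of a relying party section that surfaces those claims in the token; apart from the *SignUpOrSignInMfa* journey reference and the *riskLevel* and *score* claims named in the article, the remaining output claims are typical defaults and may differ in your policy.

```XML
<RelyingParty>
  <DefaultUserJourney ReferenceId="SignUpOrSignInMfa" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <OutputClaim ClaimTypeReferenceId="displayName" />
      <OutputClaim ClaimTypeReferenceId="riskLevel" />
      <OutputClaim ClaimTypeReferenceId="score" />
      <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```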
Follow these steps to add the policy files to Azure AD B2C
4. Go through the sign-up flow and create an account. The token returned to JWT.MS should include two claims, riskLevel and score, as shown in the following example.
- ```
+ ```JavaScript
{ "typ": "JWT",
Follow these steps to add the policy files to Azure AD B2C
- [Custom policies in Azure AD B2C](./custom-policy-overview.md) -- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/restful-technical-profile.md
Previously updated : 12/11/2020 Last updated : 05/03/2021
The technical profile also returns claims, that aren't returned by the identity
| ServiceUrl | Yes | The URL of the REST API endpoint. | | AuthenticationType | Yes | The type of authentication being performed by the RESTful claims provider. Possible values: `None`, `Basic`, `Bearer`, `ClientCertificate`, or `ApiKeyHeader`. <br /><ul><li>The `None` value indicates that the REST API is anonymous. </li><li>The `Basic` value indicates that the REST API is secured with HTTP basic authentication. Only verified users, including Azure AD B2C, can access your API. </li><li>The `ClientCertificate` (recommended) value indicates that the REST API restricts access by using client certificate authentication. Only services that have the appropriate certificates, for example Azure AD B2C, can access your API. </li><li>The `Bearer` value indicates that the REST API restricts access using client OAuth2 Bearer token. </li><li>The `ApiKeyHeader` value indicates that the REST API is secured with API key HTTP header, such as *x-functions-key*. </li></ul> | | AllowInsecureAuthInProduction| No| Indicates whether the `AuthenticationType` can be set to `none` in production environment (`DeploymentMode` of the [TrustFrameworkPolicy](trustframeworkpolicy.md) is set to `Production`, or not specified). Possible values: true, or false (default). |
-| SendClaimsIn | No | Specifies how the input claims are sent to the RESTful claims provider. Possible values: `Body` (default), `Form`, `Header`, `Url` or `QueryString`. The `Body` value is the input claim that is sent in the request body in JSON format. The `Form` value is the input claim that is sent in the request body in an ampersand '&' separated key value format. The `Header` value is the input claim that is sent in the request header. The `Url` value is the input claim that is sent in the URL, for example, https://{claim1}.example.com/{claim2}/{claim3}?{claim4}={claim5}. The `QueryString` value is the input claim that is sent in the request query string. The HTTP verbs invoked by each are as follows:<br /><ul><li>`Body`: POST</li><li>`Form`: POST</li><li>`Header`: GET</li><li>`Url`: GET</li><li>`QueryString`: GET</li></ul> |
+| SendClaimsIn | No | Specifies how the input claims are sent to the RESTful claims provider. Possible values: `Body` (default), `Form`, `Header`, `Url` or `QueryString`. <br /> The `Body` value is the input claim that is sent in the request body in JSON format. <br />The `Form` value is the input claim that is sent in the request body in an ampersand '&' separated key value format. <br />The `Header` value is the input claim that is sent in the request header. <br />The `Url` value is the input claim that is sent in the URL, for example, https://api.example.com/{claim1}/{claim2}?{claim3}={claim4}. The host name part of the URL cannot contain claims. <br />The `QueryString` value is the input claim that is sent in the request query string. <br />The HTTP verbs invoked by each are as follows:<br /><ul><li>`Body`: POST</li><li>`Form`: POST</li><li>`Header`: GET</li><li>`Url`: GET</li><li>`QueryString`: GET</li></ul> |
| ClaimsFormat | No | Not currently used, can be ignored. | | ClaimUsedForRequestPayload| No | Name of a string claim that contains the payload to be sent to the REST API. | | DebugMode | No | Runs the technical profile in debug mode. Possible values: `true`, or `false` (default). In debug mode, the REST API can return more information. See the [Returning error message](#returning-validation-error-message) section. |
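To make the table concrete, here is a minimal sketch of a RESTful technical profile that uses `SendClaimsIn` set to `Body` with basic authentication; the profile ID, endpoint URL, and policy key names are placeholders.

```XML
<TechnicalProfile Id="REST-ValidateUserInput">
  <DisplayName>Validate user input with a REST API</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Placeholder endpoint URL. -->
    <Item Key="ServiceUrl">https://api.example.com/validate</Item>
    <!-- Input claims are sent as JSON in the request body (HTTP POST). -->
    <Item Key="SendClaimsIn">Body</Item>
    <!-- Basic authentication; the credentials come from the policy keys referenced below. -->
    <Item Key="AuthenticationType">Basic</Item>
    <Item Key="AllowInsecureAuthInProduction">false</Item>
  </Metadata>
  <CryptographicKeys>
    <Key Id="BasicAuthenticationUsername" StorageReferenceId="B2C_1A_RestApiUsername" />
    <Key Id="BasicAuthenticationPassword" StorageReferenceId="B2C_1A_RestApiPassword" />
  </CryptographicKeys>
</TechnicalProfile>
```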
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tenant-management.md
Previously updated : 04/19/2021 Last updated : 05/03/2021
When planning your access control strategy, it's best to assign users the least
|Resource |Description |Role | ||||
-|[Application registrations](tutorial-register-applications.md) | Create and manage all aspects of your web, mobile, and native application registrations within Azure AD B2C.|[Application Administrator](../active-directory/roles/permissions-reference.md#global-administrator)|
+|[Application registrations](tutorial-register-applications.md) | Create and manage all aspects of your web, mobile, and native application registrations within Azure AD B2C.|[Application Administrator](../active-directory/roles/permissions-reference.md#application-administrator)|
|[Identity providers](add-identity-provider.md)| Configure the [local identity provider](identity-provider-local.md) and external social or enterprise identity providers. | [External Identity Provider Administrator](../active-directory/roles/permissions-reference.md#external-identity-provider-administrator)|
-|[API connectors](add-api-connector.md)| Integrate your user flows with web APIs to customize the user experience and integrate with external systems.|[External ID User Flow Attribute Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
+|[API connectors](add-api-connector.md)| Integrate your user flows with web APIs to customize the user experience and integrate with external systems.|[External ID User Flow Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
|[Company branding](customize-ui.md#configure-company-branding)| Customize your user flow pages.| [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator)| |[User attributes](user-flow-custom-attributes.md)| Add or delete custom attributes available to all user flows.| [External ID User Flow Attribute Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-attribute-administrator)| |Manage users| Manage [consumer accounts](manage-users-portal.md) and administrative accounts as described in this article.| [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator)| |Roles and administrators| Manage role assignments in Azure AD B2C directory. Create and manage groups that can be assigned to Azure AD B2C roles. |[Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator), [Privileged Role Administrator](../active-directory/roles/permissions-reference.md#privileged-role-administrator)|
-|[User flows](user-flow-overview.md)|For quick configuration and enablement of common identity tasks, like sign-up, sign-in, and profile editing.| [External ID User Flow Attribute Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
+|[User flows](user-flow-overview.md)|For quick configuration and enablement of common identity tasks, like sign-up, sign-in, and profile editing.| [External ID User Flow Administrator](../active-directory/roles/permissions-reference.md#external-id-user-flow-administrator)|
|[Custom policies](user-flow-overview.md)| Create, read, update, and delete all custom policies in Azure AD B2C.| [B2C IEF Policy Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-policy-administrator)| |[Policy keys](policy-keys-overview.md)|Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords used in custom policies.|[B2C IEF Keyset Administrator](../active-directory/roles/permissions-reference.md#b2c-ief-keyset-administrator)|
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-methods.md
To learn more about how each authentication method works, see the following sepa
> [!NOTE] > In Azure AD, a password is often one of the primary authentication methods. You can't disable the password authentication method. If you use a password as the primary authentication factor, increase the security of sign-in events using Azure AD Multi-Factor Authentication.
+> [!IMPORTANT]
+> While FIDO2 meets the requirements necessary to serve as a form of MFA, FIDO2 can only be used as a passwordless form of authentication.
+ The following additional verification methods can be used in certain scenarios: * [App passwords](howto-mfa-app-passwords.md) - used for old applications that don't support modern authentication and can be configured for per-user Azure AD Multi-Factor Authentication.
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-getstarted.md
Previously updated : 11/21/2019 Last updated : 05/03/2021
A text message that contains a verification code is sent to the user, the user i
1. Click on **Save**. 1. Close the **service settings** tab.
+> [!WARNING]
+> Do not disable methods for your organization if you are using [Security Defaults](../fundamentals/concept-fundamentals-security-defaults.md). Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the MFA service settings portal.
+ ## Plan registration policy Administrators must determine how users will register their methods. Organizations should [enable the new combined registration experience](howto-registration-mfa-sspr-combined.md) for Azure AD MFA and self-service password reset (SSPR). SSPR allows users to reset their password in a secure way using the same methods they use for multi-factor authentication. We recommend this combined registration because it's a great experience for users, with the ability to register once for both services. Enabling the same methods for SSPR and Azure AD MFA will allow your users to be registered to use both features.
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
The article also uses a Windows Presentation Foundation (WPF) app to demonstrate
You can obtain the sample in either of two ways: * Clone it from your shell or command line:+ ```console git clone https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet.git ```+ * [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip). [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
-## Register your web API
-
-In this section, you register your web API in **App registrations** in the Azure portal.
-
-### Choose your Azure AD tenant
+## Register the web API (TodoListService)
-To register your apps manually, choose the Azure Active Directory (Azure AD) tenant where you want to create your apps.
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant that you want to use.
-
-### Register the TodoListService app
+Register your web API in **App registrations** in the Azure portal.
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
-1. Search for and select **Azure Active Directory**.
+1. Find and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations** > **New registration**. 1. Enter a **Name** for your application, for example `AppModelv2-NativeClient-DotNet-TodoListService`. Users of your app might see this name, and you can change it later. 1. For **Supported account types**, select **Accounts in any organizational directory**.
To register your apps manually, choose the Azure Active Directory (Azure AD) ten
### Configure the service project
-Configure the service project to match the registered web API by doing the following:
+Configure the service project to match the registered web API.
1. Open the solution in Visual Studio, and then open the *Web.config* file under the root of the TodoListService project.
-1. Replace the value of the `ida:ClientId` parameter with the Client ID (Application ID) value from the application you just registered in the **App registrations** portal.
+1. Replace the value of the `ida:ClientId` parameter with the Client ID (Application ID) value from the application you registered in the **App registrations** portal.
### Add the new scope to the app.config file
-To add the new scope to the TodoListClient *app.config* file, do the following:
+To add the new scope to the TodoListClient *app.config* file, follow these steps:
1. In the TodoListClient project root folder, open the *app.config* file.
-1. Paste the Application ID from the application you just registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
+1. Paste the Application ID from the application that you registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
> [!NOTE] > Make sure that the Application ID uses the following format: `api://{TodoListService-Application-ID}/access_as_user` (where `{TodoListService-Application-ID}` is the GUID representing the Application ID for your TodoListService app).
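As a rough illustration, the resulting configuration entries typically look like the following; only the `ida:ClientId` and `TodoListServiceScope` key names and the `api://.../access_as_user` format come from this article, while the surrounding `appSettings` layout and the GUID are placeholders.

```XML
<!-- TodoListService Web.config (sketch): client ID of the web API registration. -->
<appSettings>
  <add key="ida:ClientId" value="00000000-0000-0000-0000-000000000000" />
</appSettings>

<!-- TodoListClient app.config (sketch): the scope exposed by the web API. -->
<appSettings>
  <add key="TodoListServiceScope" value="api://00000000-0000-0000-0000-000000000000/access_as_user" />
</appSettings>
```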
-## Register the TodoListClient client app
+## Register the web app (TodoListClient)
-In this section, you register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered *the same application*, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a Microsoft personal account.
+Register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered the same application, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a personal Microsoft account.
### Register the app
-To register the TodoListClient app, do the following:
+To register the TodoListClient app, follow these steps:
1. Go to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) portal. 1. Select **New registration**.
To register the TodoListClient app, do the following:
> [!NOTE] > In the TodoListClient project *app.config* file, the default value of `ida:Tenant` is set to `common`. The possible values are:
- > - `common`: You can sign in by using a work or school account or a Microsoft personal account (because you selected **Accounts in any organizational directory** in step 3b).
+ >
+ > - `common`: You can sign in by using a work or school account or a personal Microsoft account (because you selected **Accounts in any organizational directory** in a previous step).
> - `organizations`: You can sign in by using a work or school account. > - `consumers`: You can sign in only by using a Microsoft personal account.
-1. On the app **Overview** page, select **Authentication**, and then do the following:
+1. On the app **Overview** page, select **Authentication**, and then complete these steps to add a platform:
1. Under **Platform configurations**, select the **Add a platform** button. 1. For **Mobile and desktop applications**, select **Mobile and desktop applications**.
- 1. For **Redirect URIs**, select the **https://login.microsoftonline.com/common/oauth2/nativeclient** check box.
+ 1. For **Redirect URIs**, select the `https://login.microsoftonline.com/common/oauth2/nativeclient` check box.
1. Select **Configure**.
-1. Select **API permissions**, and then do the following:
+1. Select **API permissions**, and then complete these steps to add permissions:
1. Select the **Add a permission** button. 1. Select the **My APIs** tab.
To register the TodoListClient app, do the following:
### Configure your project
-To configure your TodoListClient project, do the following:
+Configure your TodoListClient project by adding the Application ID to the *app.config* file.
1. In the **App registrations** portal, on the **Overview** page, copy the value of the **Application (client) ID**.
To configure your TodoListClient project, do the following:
## Run your TodoListClient project
-To run your TodoListClient project, do the following:
+Sign in to run your TodoListClient project.
-1. Press F5 to open your TodoListClient project. The project page should open.
+1. Press F5 to open your TodoListClient project. The project page opens.
1. At the upper right, select **Sign in**, and then sign in with the same credentials you used to register your application, or sign in as a user in the same directory.
To run your TodoListClient project, do the following:
## Pre-authorize your client application
-One way you can allow users from other directories to access your web API is to pre-authorize the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent. To pre-authorize your client app, do the following:
+You can allow users from other directories to access your web API by pre-authorizing the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent.
1. In the **App registrations** portal, open the properties of your TodoListService app. 1. In the **Expose an API** section, under **Authorized client applications**, select **Add a client application**.
One way you can allow users from other directories to access your web API is to
### Run your project
-1. Press F5 to run your project. Your TodoListClient app should open.
-1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as live.com or hotmail.com, or a work or school account.
+1. Press <kbd>F5</kbd> to run your project. Your TodoListClient app opens.
+1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as a *live.com* or *hotmail.com* account, or a work or school account.
## Optional: Limit sign-in access to certain users
-By default, when you've followed the preceding steps, any personal accounts, such as outlook.com or live.com, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
+By default, any personal accounts, such as *outlook.com* or *live.com* accounts, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
To specify who can sign in to your application, use one of the following options: ### Option 1: Limit access to a single organization (single tenant)
-You can limit sign-in access to your application to user accounts that are in a single Azure AD tenant, including *guest accounts* of that tenant. This scenario is common for *line-of-business applications*.
+You can limit sign-in access to your application to user accounts that are in a single Azure AD tenant, including guest accounts of that tenant. This scenario is common for line-of-business applications.
-1. Open the *App_Start\Startup.Auth* file, and then change the value of the metadata endpoint that's passed into the `OpenIdConnectSecurityTokenProvider` to `"https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration"`. You can also use the tenant name, such as `contoso.onmicrosoft.com`.
-2. In the same file, set the `ValidIssuer` property on the `TokenValidationParameters` to `"https://sts.windows.net/{Tenant ID}/"`, and set the `ValidateIssuer` argument to `true`.
+1. Open the *App_Start\Startup.Auth* file, and then change the value of the metadata endpoint that's passed into the `OpenIdConnectSecurityTokenProvider` to `https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration`. You can also use the tenant name, such as `contoso.onmicrosoft.com`.
+1. In the same file, set the `ValidIssuer` property on the `TokenValidationParameters` to `https://sts.windows.net/{Tenant ID}/`, and set the `ValidateIssuer` argument to `true`.
### Option 2: Use a custom method to validate issuers
You can implement a custom method to validate issuers by using the `IssuerValida
[!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)] ## Next steps
-Learn more about the protected web API scenario that the Microsoft identity platform supports:
+
+Learn more about the protected web API scenario that the Microsoft identity platform supports.
> [!div class="nextstepaction"] > [Protected web API scenario](scenario-protected-web-api-overview.md)
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/invite-internal-users.md
Sending an invitation to an existing internal account lets you retain that user
- **On-premises synced users**: For user accounts that are synced between on-premises and the cloud, the on-premises directory remains the source of authority after they're invited to use B2B collaboration. Any changes you make to the on-premises account will sync to the cloud account, including disabling or deleting the account. Therefore, you can't prevent the user from signing into their on-premises account while retaining their cloud account by simply deleting the on-premises account. Instead, you can set the on-premises account password to a random GUID or other unknown value.
+> [!NOTE]
+> If you are using Azure AD Connect Cloud Sync, there is a rule by default that writes the OnPremUserPrincipalName onto the user. This rule needs to be modified so that it does not write this property for users that you want to convert from internal to external users.
+ ## How to invite internal users to B2B collaboration You can use PowerShell or the invitation API to send a B2B invitation to the internal user. Make sure the email address you want to use for the invitation is set as the external email address on the internal user object.
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
Previously updated : 04/20/2021 Last updated : 05/03/2021
More details on why security defaults are being made available can be found in A
## Availability
-Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal. If your tenant was created on or after October 22, 2019, it is possible security defaults are already enabled in your tenant. To protect all of our users, security defaults is being rolled out to all new tenants created.
+Microsoft is making security defaults available to everyone. The goal is to ensure that all organizations have a basic level of security enabled at no extra cost. You turn on security defaults in the Azure portal. If your tenant was created on or after October 22, 2019, it is possible security defaults are already enabled in your tenant. To protect all of our users, security defaults are being rolled out to all new tenants created.
### Who's it for?
These free security defaults allow registration and use of Azure AD Multi-Factor
- ** Users may use verification codes from the Microsoft Authenticator app but can only register using the notification option. - *** App passwords are only available in per-user MFA with legacy authentication scenarios only if enabled by administrators.
+> [!WARNING]
+> Do not disable methods for your organization if you are using Security Defaults. Disabling methods may lead to locking yourself out of your tenant. Leave all **Methods available to users** enabled in the [MFA service settings portal](../authentication/howto-mfa-getstarted.md#choose-verification-options).
+ ### Disabled MFA status If your organization is a previous user of per-user based Azure AD Multi-Factor Authentication, do not be alarmed to not see users in an **Enabled** or **Enforced** status if you look at the Multi-Factor Auth status page. **Disabled** is the appropriate status for users who are using security defaults or Conditional Access based Azure AD Multi-Factor Authentication.
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
na ms.devlang: na Previously updated : 01/04/2021 Last updated : 05/03/2021
If you are upgrading from DirSync, the AD DS Enterprise Admins credentials are u
### Azure AD Global Admin credentials These credentials are used only during the installation and are not used after the installation has completed. They are used to create the Azure AD Connector account that synchronizes changes to Azure AD. The account also enables sync as a feature in Azure AD.
+For more information on Global Administrator accounts, see [Global Administrator](../../active-directory/roles/permissions-reference.md#global-administrator).
+ ### AD DS Connector account required permissions for express settings The AD DS Connector account is created for reading and writing to Windows Server AD and has the following permissions when created by express settings:
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-sign-ins.md
na Previously updated : 04/26/2021 Last updated : 04/29/2021
On the **Users** page, you get a complete overview of all user sign-ins by click
![Screenshot shows the Activity section where you can select Sign-ins.](./media/concept-sign-ins/08.png "Sign-in activity")
+## Authentication details
+
+The **Authentication Details** tab located within the sign-ins report provides the following information for each authentication attempt:
+
+- A list of authentication policies applied (such as Conditional Access, per-user MFA, Security Defaults)
+- The sequence of authentication methods used to sign in
+- Whether or not the authentication attempt was successful
+- Detail about why the authentication attempt succeeded or failed
+
+This information allows admins to troubleshoot each step in a user's sign-in, and track:
+
+- Volume of sign-ins protected by multi-factor authentication
+- Usage and success rates for each authentication method
+- Usage of passwordless authentication methods (such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business)
+- How frequently authentication requirements are satisfied by token claims (where users are not interactively prompted to enter a password, enter an SMS OTP, and so on)
+
+While viewing the Sign-ins report, select the **Authentication Details** tab:
+
+![Screenshot of the Authentication Details tab](media/concept-sign-ins/auth-details-tab.png)
+
+>[!NOTE]
+>**OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).
+
+>[!IMPORTANT]
+>The **Authentication details** tab can initially show incomplete or inaccurate data, until log information is fully aggregated. Known examples include:
+>- A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
+>- The **Primary authentication** row is not initially logged.
++ ## Usage of managed applications With an application-centric view of your sign-in data, you can answer questions such as:
active-directory Alacritylaw Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/alacritylaw-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with AlacrityLaw | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and AlacrityLaw.
++++++++ Last updated : 04/30/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with AlacrityLaw
+
+In this tutorial, you'll learn how to integrate AlacrityLaw with Azure Active Directory (Azure AD). When you integrate AlacrityLaw with Azure AD, you can:
+
+* Control in Azure AD who has access to AlacrityLaw.
+* Enable your users to be automatically signed-in to AlacrityLaw with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* AlacrityLaw single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* AlacrityLaw supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Adding AlacrityLaw from the gallery
+
+To configure the integration of AlacrityLaw into Azure AD, you need to add AlacrityLaw from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **AlacrityLaw** in the search box.
+1. Select **AlacrityLaw** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for AlacrityLaw
+
+Configure and test Azure AD SSO with AlacrityLaw using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AlacrityLaw.
+
+To configure and test Azure AD SSO with AlacrityLaw, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure AlacrityLaw SSO](#configure-alacritylaw-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AlacrityLaw test user](#create-alacritylaw-test-user)** - to have a counterpart of B.Simon in AlacrityLaw that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **AlacrityLaw** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://app.alacritylaw.com/auth/saml/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://app.alacritylaw.com/auth/saml/<ID>/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign-On URL and Reply URL. Contact [AlacrityLaw Client support team](mailto:infrastructure@alacritylaw.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up AlacrityLaw** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AlacrityLaw.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **AlacrityLaw**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure AlacrityLaw SSO
+
+To configure single sign-on on the **AlacrityLaw** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [AlacrityLaw support team](mailto:infrastructure@alacritylaw.com). They use these values to set up the SAML SSO connection properly on both sides.
+
+### Create AlacrityLaw test user
+
+In this section, you create a user called Britta Simon in AlacrityLaw. Work with [AlacrityLaw support team](mailto:infrastructure@alacritylaw.com) to add the users in the AlacrityLaw platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to AlacrityLaw Sign-on URL where you can initiate the login flow.
+
+* Go to AlacrityLaw Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the AlacrityLaw tile in the My Apps, this will redirect to AlacrityLaw Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure AlacrityLaw, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Check Point Identity Awareness Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/check-point-identity-awareness-tutorial.md
Previously updated : 04/08/2021 Last updated : 04/15/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Check Point Identity Awareness Client support team](mailto:support@checkpoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/metadataxml.png)
1. On the **Set up Check Point Identity Awareness** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
d. Copy **Reply URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
- e. Select **Import the Metadata File** to upload the downloaded **Certificate (Base64)** from the Azure portal.
+ e. Select **Import Metadata File** to upload the downloaded **Federation Metadata XML** from the Azure portal.
> [!NOTE] > Alternatively you can also select **Insert Manually** to paste manually the **Entity ID** and **Login URL** values into the corresponding fields, and to upload the **Certificate File** from the Azure portal.
active-directory Check Point Remote Access Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/check-point-remote-access-vpn-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Check Point Remote Access VPN | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Check Point Remote Access VPN.
++++++++ Last updated : 04/16/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Check Point Remote Access VPN
+
+In this tutorial, you'll learn how to integrate Check Point Remote Access VPN with Azure Active Directory (Azure AD). When you integrate Check Point Remote Access VPN with Azure AD, you can:
+
+* Control in Azure AD who has access to Check Point Remote Access VPN.
+* Enable your users to be automatically signed-in to Check Point Remote Access VPN with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Check Point Remote Access VPN single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Check Point Remote Access VPN supports **SP** initiated SSO.
+
+## Adding Check Point Remote Access VPN from the gallery
+
+To configure the integration of Check Point Remote Access VPN into Azure AD, you need to add Check Point Remote Access VPN from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Check Point Remote Access VPN** in the search box.
+1. Select **Check Point Remote Access VPN** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Check Point Remote Access VPN
+
+Configure and test Azure AD SSO with Check Point Remote Access VPN using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Check Point Remote Access VPN.
+
+To configure and test Azure AD SSO with Check Point Remote Access VPN, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Check Point Remote Access VPN SSO](#configure-check-point-remote-access-vpn-sso)** - to enable your users to use this feature.
+
+ 1. **[Create Check Point Remote Access VPN test user](#create-check-point-remote-access-vpn-test-user)** - to have a counterpart of B.Simon in Check Point Remote Access VPN that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Check Point Remote Access VPN** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<GATEWAY_IP>/saml-vpn/spPortal/ACS/ID/<IDENTIFIER_UID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<GATEWAY_IP>/saml-vpn/spPortal/ACS/Login/<IDENTIFIER_UID>`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<GATEWAY_IP>/saml-vpn/`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Check Point Remote Access VPN Client support team](mailto:support@checkpoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Check Point Remote Access VPN** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Check Point Remote Access VPN.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Check Point Remote Access VPN**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Check Point Remote Access VPN SSO
+
+### Configure an External User Profile object
+
+> [!NOTE]
+> This section is needed only if you do not want to use an on-premises Active Directory (LDAP).
+
+**Configure a generic user profile in the Legacy SmartDashboard**:
+
+1. In SmartConsole, go to **Manage & Settings > Blades**.
+
+1. In the **Mobile Access** section, click **Configure in SmartDashboard**. The Legacy SmartDashboard opens.
+
+1. In the Network **Objects** pane, click **Users**.
+
+1. Right-click on an empty space and select **New > External User Profile > Match all users**.
+
+1. Configure the **External User Profile** properties:
+
+ 1. On the **General Properties** page:
+ * In the **External User Profile** name field, leave the default name `generic`*
+ * In the **Expiration Date** field, set the applicable date
+
+ 1. On the **Authentication** page:
+ * From the **Authentication Scheme** drop-down list, select `undefined`
+ 1. On the **Location**, **Time**, and **Encryption** pages:
+ * Configure other applicable settings
+ 1. Click **OK**.
+
+1. From the top toolbar, click **Update** (or press Ctrl + S).
+
+1. Close SmartDashboard.
+
+1. In SmartConsole, install the Access Control Policy.
+
+### Configure Remote Access VPN
+
+1. Open the object of the applicable Security Gateway.
+
+1. On the General Properties page, enable the **IPSec VPN** Software Blade.
+
+1. From the left tree, click the **IPSec VPN** page.
+
+1. In the section **This Security Gateway participates in the following VPN communities**, click **Add** and select **Remote Access Community**.
+
+1. From the left tree, click **VPN clients > Remote Access**.
+
+1. Enable **Support Visitor Mode**.
+
+1. From the left tree, click **VPN clients > Office Mode**.
+
+1. Select **Allow Office Mode** and select the applicable Office Mode Method.
+
+1. From the left tree, click **VPN Clients > SAML Portal Settings**.
+
+1. Make sure the Main URL contains the fully qualified domain name of the gateway.
+This domain name should end with a DNS suffix registered by your organization.
+For example:
+`https://gateway1.company.com/saml-vpn`
+
+1. Make sure the certificate is trusted by the end users' browser.
+
+1. Click **OK**.
++
+### Configure an Identity Provider object
+
+1. Do the following steps for each Security Gateway that participates in Remote Access VPN.
+
+1. In SmartConsole > **Gateways & Servers** view, click **New > More > User/Identity > Identity Provider**.
+
+ ![screenshot for new Identity Provider.](./media/check-point-remote-access-vpn-tutorial/identity-provider.png)
+
+1. Perform the following steps in **New Identity Provider** window.
+
+ ![screenshot for Identity Provider section.](./media/check-point-remote-access-vpn-tutorial/new-identity-provider.png)
+
+ a. In the **Gateway** field, select the Security Gateway, which needs to perform the SAML authentication.
+
+ b. In the **Service** field, select **Remote Access VPN** from the dropdown.
+
+ c. Copy **Identifier(Entity ID)** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ d. Copy **Reply URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ e. Select **Import Metadata File** to upload the downloaded **Federation Metadata XML** from the Azure portal.
+
+ > [!NOTE]
+ > Alternatively you can also select **Insert Manually** to paste manually the **Entity ID** and **Login URL** values into the corresponding fields, and to upload the **Certificate File** from the Azure portal.
+
+ f. Click **OK**.
+
+### Configure the Identity Provider as an authentication method
+
+1. Open the object of the applicable Security Gateway.
+
+1. On the **VPN Clients > Authentication** page:
+
+ a. Clear the checkbox **Allow older clients to connect to this gateway**.
+
+ b. Add a new object or edit an existing realm.
+
+ ![screenshot for to Add a new object.](./media/check-point-remote-access-vpn-tutorial/add-new-object.png)
+
+1. Enter a name and a display name, and add/edit an authentication method:
+    If the Login Option will be used on gateways that participate in MEP, the name should start with the "SAMLVPN_" prefix, to allow a smooth user experience.
+
+ ![screenshot about Login Option.](./media/check-point-remote-access-vpn-tutorial/login-option.png)
+
+1. Select the option **Identity Provider**, click the green `+` button and select the applicable Identity Provider object.
+
+ ![screenshot to select the applicable Identity Provider object.](./media/check-point-remote-access-vpn-tutorial/green-button.png)
+
+1. In the Multiple Logon Options window:
+From the left pane, click **User Directories** and then select **Manual configuration**.
+There are two options:
+ 1. If you do not want to use an on-premises Active Directory (LDAP), select only External User Profiles and click OK.
+ 2. If you do want to use an on-premises Active Directory (LDAP), select only LDAP users and in the LDAP Lookup Type select email. Then click OK.
+
+ ![screenshot to manual configuration.](./media/check-point-remote-access-vpn-tutorial/manual-configuration.png)
+
+1. Configure the required settings in the management database:
+
+ 1. Close SmartConsole.
+
+ 2. Connect with the GuiDBEdit Tool to the Management Server (see [sk13009](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails&solutionid=sk13009)).
+
+ 3. In the top left pane, go to **Edit > Network Objects**.
+
+ 4. In the top right pane, select the **Security Gateway object**.
+
+ 5. In the bottom pane, go to **realms_for_blades > vpn**.
+
+ 6. If you do not want to use an on-premises Active Directory (LDAP), set **do_ldap_fetch** to **false** and **do_generic_fetch** to **true**. Then click **OK**. If you do want to use an on-premises Active Directory (LDAP), set **do_ldap_fetch** to **true** and **do_generic_fetch** to **false**. Then click **OK**.
+
+   7. Repeat steps 4-6 for all applicable Security Gateways.
+
+ 8. Save all changes (click the **File** menu > **Save All**).
+
+1. Close the GuiDBEdit Tool.
+
+1. Each Security Gateway and each Software Blade have separate settings. Review the settings in each Security Gateway and each Software Blade that use authentication (VPN, Mobile Access, and Identity Awareness).
+
+ * Make sure to select the option **LDAP users** only for Software Blades that use LDAP.
+
+ * Make sure to select the option **External user profiles** only for Software Blades that do not use LDAP.
+
+1. Install the Access Control Policy on each Security Gateway.
+
+### VPN RA Client Installation and configuration
+
+1. Install the VPN client.
+
+1. Set the Identity Provider browser mode (optional)
+By default, the Windows client uses its embedded browser and the macOS client uses Safari to authenticate on the Identity Provider's portal.
+To change this behavior on the Windows client so that it uses Internet Explorer instead:
+
+ 1. On the client machine, open a plain-text editor as an Administrator.
+ 2. Open the trac.defaults file in the text editor.
+ * On 32-bit Windows:
+``%ProgramFiles%\CheckPoint\Endpoint Connect\trac.defaults``
+ * On 64-bit Windows:
+``%ProgramFiles(x86)%\CheckPoint\Endpoint Connect\trac.defaults``
+   3. Change the **idp_browser_mode** attribute value from "embedded" to "IE".
+ 4. Save the file.
+ 5. Restart the Check Point Endpoint Security VPN client service.
+Open the Windows Command Prompt as an Administrator and run these commands:
+
+ `# net stop TracSrvWrapper `
+
+ `# net start TracSrvWrapper`
+
+
+1. Start authentication with the browser running in the background:
+
+ 1. On the client machine, open a plain-text editor as an Administrator.
+ 2. Open the trac.defaults file in the text editor.
+ * On 32-bit Windows: `%ProgramFiles%\CheckPoint\Endpoint Connect\trac.defaults`
+ * On 64-bit Windows: `%ProgramFiles(x86)%\CheckPoint\Endpoint Connect\trac.defaults`
+
+ * On macOS: `/Library/Application Support/Checkpoint/Endpoint Security/Endpoint Connect/Trac.defaults`
+
+ 3. Change the value of **idp_show_browser_primary_auth_flow** to **false**
+ 4. Save the file.
+ 5. Restart the Check Point Endpoint Security VPN client service
+ * On Windows clients
+Open the Windows Command Prompt as an Administrator and run these commands:
+
+ `# net stop TracSrvWrapper`
+
+ `# net start TracSrvWrapper`
+
+ * On macOS clients
+
+ `sudo launchctl stop com.checkpoint.epc.service`
+
+ `sudo launchctl start com.checkpoint.epc.service`
++
+### Create Check Point Remote Access VPN test user
+
+In this section, you create a user called Britta Simon in Check Point Remote Access VPN. Work with [Check Point Remote Access VPN support team](mailto:support@checkpoint.com) to add the users in the Check Point Remote Access VPN platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+1. Open the VPN client and click **Connect to…**.
+
+ ![screenshot for Connect to.](./media/check-point-remote-access-vpn-tutorial/connect.png)
+
+1. Select **Site** from the dropdown and click **Connect**.
+
+ ![screenshot for selecting site.](./media/check-point-remote-access-vpn-tutorial/site.png)
+
+1. In the Azure AD login pop-up, sign in using the Azure AD credentials that you created in the **Create an Azure AD test user** section.
+
+## Next steps
+
+Once you configure Check Point Remote Access VPN, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Cognician Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/cognician-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Cognician | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Cognician.
++++++++ Last updated : 04/28/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Cognician
+
+In this tutorial, you'll learn how to integrate Cognician with Azure Active Directory (Azure AD). When you integrate Cognician with Azure AD, you can:
+
+* Control in Azure AD who has access to Cognician.
+* Enable your users to be automatically signed-in to Cognician with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Cognician single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Cognician supports **SP** initiated SSO.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
++
+## Adding Cognician from the gallery
+
+To configure the integration of Cognician into Azure AD, you need to add Cognician from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Cognician** in the search box.
+1. Select **Cognician** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Cognician
+
+Configure and test Azure AD SSO with Cognician using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cognician.
+
+To configure and test Azure AD SSO with Cognician, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Cognician SSO](#configure-cognician-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Cognician test user](#create-cognician-test-user)** - to have a counterpart of B.Simon in Cognician that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Cognician** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www.cognician.com/saml-sso/<INSTANCE NAME>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://www.cognician.com/saml-sso/<INSTANCE NAME>/saml`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Sign-On URL and Reply URL. Contact [Cognician Client support team](mailto:support@cognician.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Cognician.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Cognician**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Cognician SSO
+
+To configure single sign-on on **Cognician** side, you need to send the **App Federation Metadata Url** to [Cognician support team](mailto:support@cognician.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Cognician test user
+
+In this section, you create a user called Britta Simon in Cognician. Work with [Cognician support team](mailto:support@cognician.com) to add the users in the Cognician platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Cognician Sign-on URL where you can initiate the login flow.
+
+* Go to Cognician Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Cognician tile in the My Apps, this will redirect to Cognician Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Cognician, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Equisolve Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/equisolve-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Equisolve | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Equisolve.
++++++++ Last updated : 04/29/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Equisolve
+
+In this tutorial, you'll learn how to integrate Equisolve with Azure Active Directory (Azure AD). When you integrate Equisolve with Azure AD, you can:
+
+* Control in Azure AD who has access to Equisolve.
+* Enable your users to be automatically signed-in to Equisolve with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Equisolve single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Equisolve supports **SP and IDP** initiated SSO.
+
+## Adding Equisolve from the gallery
+
+To configure the integration of Equisolve into Azure AD, you need to add Equisolve from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Equisolve** in the search box.
+1. Select **Equisolve** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Equisolve
+
+Configure and test Azure AD SSO with Equisolve using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Equisolve.
+
+To configure and test Azure AD SSO with Equisolve, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Equisolve SSO](#configure-equisolve-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Equisolve test user](#create-equisolve-test-user)** - to have a counterpart of B.Simon in Equisolve that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Equisolve** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://clients.equisolve.com/auth/saml/<ID>/metadata`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://clients.equisolve.com/auth/saml/<ID>/auth`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://clients.equisolve.com/auth/saml/<ID>/sign_in`
+
+ b. In the **Logout URL** text box, type a URL using the following pattern:
+ `https://clients.equisolve.com/auth/saml/<ID>/idp_sign_out`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign-on URL and Logout URL. Contact [Equisolve Client support team](mailto:help@equisolve.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Equisolve** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Equisolve.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Equisolve**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Equisolve SSO
+
+To configure single sign-on on **Equisolve** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Equisolve support team](mailto:help@equisolve.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Equisolve test user
+
+In this section, you create a user called Britta Simon in Equisolve. Work with [Equisolve support team](mailto:help@equisolve.com) to add the users in the Equisolve platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Equisolve Sign on URL where you can initiate the login flow.
+
+* Go to Equisolve Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Equisolve for which you set up the SSO
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Equisolve tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Equisolve for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
+
+## Next steps
+
+Once you configure Equisolve, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Qmarkets Idea Innovation Management Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/qmarkets-idea-innovation-management-tutorial.md
Previously updated : 11/20/2019 Last updated : 04/30/2021
In this tutorial, you'll learn how to integrate Qmarkets Idea & Innovation Manag
* Enable your users to be automatically signed-in to Qmarkets Idea & Innovation Management with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
## Prerequisites
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Qmarkets Idea & Innovation Management single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. -
-* Qmarkets Idea & Innovation Management supports **SP and IDP** initiated SSO
-* Qmarkets Idea & Innovation Management supports **Just In Time** user provisioning
+* Qmarkets Idea & Innovation Management supports **SP and IDP** initiated SSO.
+* Qmarkets Idea & Innovation Management supports **Just In Time** user provisioning.
## Adding Qmarkets Idea & Innovation Management from the gallery To configure the integration of Qmarkets Idea & Innovation Management into Azure AD, you need to add Qmarkets Idea & Innovation Management from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
To configure the integration of Qmarkets Idea & Innovation Management into Azure
1. Select **Qmarkets Idea & Innovation Management** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Qmarkets Idea & Innovation Management
+## Configure and test Azure AD SSO for Qmarkets Idea & Innovation Management
Configure and test Azure AD SSO with Qmarkets Idea & Innovation Management using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Qmarkets Idea & Innovation Management.
-To configure and test Azure AD SSO with Qmarkets Idea & Innovation Management, complete the following building blocks:
+To configure and test Azure AD SSO with Qmarkets Idea & Innovation Management, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
To configure and test Azure AD SSO with Qmarkets Idea & Innovation Management, c
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Qmarkets Idea & Innovation Management** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Qmarkets Idea & Innovation Management** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**. 1. In the applications list, select **Qmarkets Idea & Innovation Management**. 1. In the app's overview page, find the **Manage** section and select **Users and groups**.-
- ![The "Users and groups" link](common/users-groups-blade.png)
- 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.-
- ![The Add User link](common/add-assign-user.png)
- 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button. ## Configure Qmarkets Idea & Innovation Management SSO
In this section, a user called Britta Simon is created in Qmarkets Idea & Innova
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Qmarkets Idea & Innovation Management Sign on URL where you can initiate the login flow.
+
+* Go to Qmarkets Idea & Innovation Management Sign-on URL directly and initiate the login flow from there.
-When you click the Qmarkets Idea & Innovation Management tile in the Access Panel, you should be automatically signed in to the Qmarkets Idea & Innovation Management for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Qmarkets Idea & Innovation Management for which you set up the SSO
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Qmarkets Idea & Innovation Management tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Qmarkets Idea & Innovation Management for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md) -- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Qmarkets Idea & Innovation Management with Azure AD](https://aad.portal.azure.com/)
+Once you configure Qmarkets Idea & Innovation Management, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
active-directory Rescana Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/rescana-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Rescana | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Rescana.
++++++++ Last updated : 04/29/2021++++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Rescana
+
+In this tutorial, you'll learn how to integrate Rescana with Azure Active Directory (Azure AD). When you integrate Rescana with Azure AD, you can:
+
+* Control in Azure AD who has access to Rescana.
+* Enable your users to be automatically signed-in to Rescana with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Rescana single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Rescana supports **SP and IDP** initiated SSO.
+* Rescana supports **Just In Time** user provisioning.
+
+## Adding Rescana from the gallery
+
+To configure the integration of Rescana into Azure AD, you need to add Rescana from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Rescana** in the search box.
+1. Select **Rescana** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Rescana
+
+Configure and test Azure AD SSO with Rescana using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Rescana.
+
+To configure and test Azure AD SSO with Rescana, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Rescana SSO](#configure-rescana-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Rescana test user](#create-rescana-test-user)** - to have a counterpart of B.Simon in Rescana that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Rescana** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type one of the following URLs:
+
+ | Identifier |
+ |-|
+ | `https://pl.rescana.com/saml/metadata.xml` |
+ | `https://portal.rescana.com/saml/metadata.xml` |
+ | `https://qa.rescana.com/saml/metadata.xml` |
+
+ b. In the **Reply URL** text box, type one of the following URLs:
+
+ | Reply URL |
+ |-|
+ | `https://pl.rescana.com/authorization-code/callback` |
+ | `https://portal.rescana.com/authorization-code/callback` |
+ | `https://qa.rescana.com/authorization-code/callback` |
++
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ a. In the **Sign-on URL** text box, type one of the following URLs:
+
+ | Sign-on URL |
+ ||
+ | `https://pl.rescana.com/authorization-code/callback` |
+ | `https://portal.rescana.com/authorization-code/callback` |
+ | `https://qa.rescana.com/authorization-code/callback` |
+
+ b. In the **Relay State** text box, type a value using the following pattern: `<INSTANCE_ID>`
+
+ > [!NOTE]
+ > The value is not real. Update the value with the actual Relay State. Contact [Rescana Client support team](mailto:ops@rescana.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Rescana** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Rescana.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Rescana**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Rescana SSO
+
+To configure single sign-on on **Rescana** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Rescana support team](mailto:ops@rescana.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Rescana test user
+
+In this section, a user called B.Simon is created in Rescana. Rescana supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Rescana, a new one is created when you attempt to access Rescana.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Rescana Sign on URL where you can initiate the login flow.
+
+* Go to Rescana Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Rescana for which you set up the SSO
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Rescana tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Rescana for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://docs.microsoft.com/azure/active-directory/active-directory-saas-access-panel-introduction).
++
+## Next steps
+
+Once you configure Rescana, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](https://docs.microsoft.com/cloud-app-security/proxy-deployment-any-app).
++
active-directory Workday Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workday-tutorial.md
Previously updated : 08/31/2020 Last updated : 04/06/2021
To configure the integration of Workday into Azure AD, you need to add Workday f
Configure and test Azure AD SSO with Workday using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Workday.
-To configure and test Azure AD SSO with Workday, perform following steps:
+To configure and test Azure AD SSO with Workday, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on with B.Simon.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Workday** application integration page, find the **Manage** section and select **Single sign-on**. 1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
> [!NOTE] > Here we have mapped the Name ID with UPN (user.userprincipalname) as default. You need to map the Name ID with actual User ID in your Workday account (your email, UPN, etc.) for successful working of SSO.
-1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
- ![The Certificate download link](common/certificatebase64.png)
+ ![The Certificate download link](common/metadataxml.png)
1. To modify the **Signing** options as per your requirement, click **Edit** button to open **SAML Signing Certificate** dialog.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In a different web browser window, sign in to your Workday company site as an administrator.
-1. In the **Search box** search with the name **Edit Tenant Setup ΓÇô Security** on the top left side of the home page.
+1. In the **Search box**, search with the name **Edit Tenant Setup – Security** on the top left side of the home page.
- ![Edit Tenant Security](./media/workday-tutorial/IC782925.png "Edit Tenant Security")
+ ![Edit Tenant Security](./media/workday-tutorial/search-box.png "Edit Tenant Security")
-1. In the **SAML Setup** section, perform the following steps:
+1. In the **SAML Setup** section, click on **Import Identity Provider**.
- ![SAML Setup](./media/workday-tutorial/IC782926.png "SAML Setup")
+ ![SAML Setup](./media/workday-tutorial/saml-setup.png "SAML Setup")
- a. Select **Enable SAML Authentication**.
+1. In the **Import Identity Provider** section, perform the following steps:
- b. Click **Add Row**.
+ ![Importing Identity Provider](./media/workday-tutorial/import-identity-provider.png)
-1. In the **SAML Identity Providers** section, please perform the following actions for the newly created row.
+    a. Enter an **Identity Provider Name**, such as `AzureAD`, in the textbox.
- a. Perform following actions for the fields, that are shown below.
+ b. In **Used for Environments** textbox, select the appropriate environment names from the dropdown.
- ![SAML Identity Providers 1](./media/workday-tutorial/IC7829271.png "SAML Identity Providers")
+ c. Click on **Select files** to upload the downloaded **Federation Metadata XML** file.
- * In the **Identity Provider Name** textbox, type a provider name (for example: *SPInitiatedSSO*).
+ d. Click on **OK** and then **Done**.
- * In the Azure portal, on the **Set up Workday** section, copy the **Azure AD Identifier** value, and then paste it into the **Issuer** textbox.
+1. After clicking **Done**, a new row is added in the **SAML Identity Providers** section. Perform the following steps for the newly created row.
- * Open the downloaded **Certificate** from the Azure portal into Notepad and paste the content into the **x.509 Certificate** textbox.
+ ![SAML Identity Providers.](./media/workday-tutorial/saml-identity-providers.png "SAML Identity Providers")
- b. Perform following actions for the fields, that are shown below.
+ a. Click on **Enable IDP Initiated Logout** checkbox.
- ![SAML Identity Providers 2](./media/workday-tutorial/saml-identity-provider-2.png "SAML Identity Providers")
+ b. In the **Logout Response URL** textbox, type **http://www.workday.com**.
- * Click on **Enable IDP Initiated Logout** checkbox.
+ c. Click on **Enable Workday Initiated Logout** checkbox.
- * In the **Logout Response URL** textbox, type **http://www.workday.com**.
+ d. In the **Logout Request URL** textbox, paste the **Logout URL** value, which you have copied from Azure portal.
- * In the **Logout Request URL** textbox, paste the **Logout URL** value, which you have copied from Azure portal.
+ e. Click on **SP Initiated** checkbox.
- * Click on **SP Initiated** checkbox.
+ f. In the **Service Provider ID** textbox, type **http://www.workday.com**.
- * In the **Service Provider ID** textbox, type **http://www.workday.com**.
--
- * Select **Do Not Deflate SP-initiated Authentication Request**.
-
- c. Perform following actions for the fields, that are shown below.
-
- ![SAML Identity Providers 3](./media/workday-tutorial/saml-identity-provider-3.png "SAML Identity Providers")
-
- * In the Azure portal, on the **Set up Workday** section, copy the **Login URL** value, and then paste it into the **IdP SSO Service URL** textbox.
-
- * In **Used for Environments** textbox, select the appropriate environment names from the dropdown.
+    g. Select **Do Not Deflate SP-initiated Authentication Request**.
1. Perform the following steps in the below image.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
In this section, you test your Azure AD single sign-on configuration with following options.
-1. Click on **Test this application** in Azure portal. This will redirect to Workday Sign-on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to Workday Sign-on URL where you can initiate the login flow.
-2. Go to Workday Sign-on URL directly and initiate the login flow from there.
+* Go to Workday Sign-on URL directly and initiate the login flow from there.
-3. You can use Microsoft Access Panel. When you click the Workday tile in the Access Panel, you should be automatically signed in to the Workday for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Workday tile in the My Apps, you should be automatically signed in to the Workday for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory User Help Auth App Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
Previously updated : 04/28/2021 Last updated : 04/30/2021
On Android, Microsoft recommends allowing the app to access location all the tim
**Q**: Why am I having issues with Apple Watch on watchOS 7?
-**A**: There is an issue with approving notifications on watchOS 7, and weΓÇÖre working with Apple to get this fixed. In the meantime, any notifications that require the Microsoft Authenticator watchOS app should be approved on your phone instead.
+**A**: Sometimes, approving or denying a session on watchOS 7 fails with the error message "Failed to communicate with the phone. Make sure to keep your Watch screen awake during future requests. See the FAQs for more info." There is a known issue with notifications when app lock is enabled or when number matching is required, and we're working with Apple to get this fixed. In the meantime, any notifications that require the Microsoft Authenticator watchOS app should be approved on your phone instead.
+
+### Signing into an iOS app
+
+**Q**: I'm trying to sign into an iOS app, and I need to approve a notification on the Authenticator app. When I go back to the iOS app, I get stuck. What can I do?
+
+**A**: This is a known issue on iOS 13+. Reach out to your support admin for help, and provide the following details: `Use Azure MFA, not MFA server.`
### Apple Watch doesn't show accounts
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
az provider register --namespace Microsoft.ContainerService
## Install the aks-preview CLI extension
-You also need the *aks-preview* Azure CLI extension version 0.5.10 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. If you already have the extension installed, update to the latest available version by using the [az extension update][az-extension-update] command.
+You also need the *aks-preview* Azure CLI extension version 0.5.9 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. If you already have the extension installed, update to the latest available version by using the [az extension update][az-extension-update] command.
```azurecli-interactive # Install the aks-preview extension
az extension update --name aks-preview
> [!NOTE] > If you plan to provide access to the cluster via a user-assigned or system-assigned managed identity, enable Azure Active Directory on your cluster with the flag `enable-managed-identity`. See [Use managed identities in Azure Kubernetes Service][aks-managed-identity] for more.
-To create an AKS cluster with Secrets Store CSI Driver capability, use the [az-aks-create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`:
+To create an AKS cluster with Secrets Store CSI Driver capability, use the [az aks create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`:
```azurecli-interactive az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider
az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-
> [!NOTE] > If you plan to provide access to the cluster via a user-assigned or system-assigned managed identity, enable Azure Active Directory on your cluster with the flag `enable-managed-identity`. See [Use managed identities in Azure Kubernetes Service][aks-managed-identity] for more.
-To upgrade an existing AKS cluster with Secrets Store CSI Driver capability, use the [az-aks-create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`:
+To upgrade an existing AKS cluster with Secrets Store CSI Driver capability, use the [az aks enable-addons][az-aks-enable-addons] command with the addon `azure-keyvault-secrets-provider`:
```azurecli-interactive
-az aks upgrade -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-secrets-provider
+az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
```
+## Verify Secrets Store CSI Driver installation
+ These commands will install the Secrets Store CSI Driver and the Azure Key Vault provider on your nodes. Verify by listing all pods from all namespaces and ensuring your output looks similar to the following: ```bash
kube-system aks-secrets-store-provider-azure-6pqmv 1/1 Running 0
kube-system aks-secrets-store-provider-azure-f5qlm 1/1 Running 0 4m25s ```
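As a quick check, you can also filter the pod list for the driver and provider components; this is a minimal sketch, and the generated pod name suffixes will differ per cluster and node:

```bash
# List pods in every namespace and keep only the Secrets Store CSI Driver and
# Azure Key Vault provider pods.
kubectl get pods --all-namespaces | grep secrets-store
```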
-### Enabling autorotation
+
+## Enabling and disabling autorotation
> [!NOTE] > When enabled, the Secrets Store CSI Driver will update the pod mount and the Kubernetes Secret defined in secretObjects of the SecretProviderClass by polling for changes every two minutes.
kube-system aks-secrets-store-provider-azure-f5qlm 1/1 Running 0
To enable autorotation of secrets, use the flag `enable-secret-rotation` when creating your cluster: ```azurecli-interactive
-az aks create -n myAKSCluster2 -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 5m
+az aks create -n myAKSCluster2 -g myResourceGroup --enable-addons azure-keyvault-secrets-provider --enable-secret-rotation
+```
+
+Or update an existing cluster with the addon enabled:
+
+```azurecli-interactive
+az aks update -g myResourceGroup -n myAKSCluster2 --enable-secret-rotation
+```
+
+To disable, use the flag `disable-secret-rotation`:
+
+```azurecli-interactive
+az aks update -g myResourceGroup -n myAKSCluster2 --disable-secret-rotation
``` ## Create or use an existing Azure Key Vault
spec:
- name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command:
- - "/bin/sh"
+ - "/bin/sleep"
- "10000" volumeMounts: - name: secrets-store-inline
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/secret1 ```
-## Disable Secrets Store CSI Driver
+## Disable Secrets Store CSI Driver on an existing AKS Cluster
-To disable the Secrets Store CSI Driver capability in an existing cluster, use the az aks command with the disable-addon `azure-keyvault-secrets-provider`:
+To disable the Secrets Store CSI Driver capability in an existing cluster, use the [az aks disable-addons][az-aks-disable-addons] command with the `azure-keyvault-secrets-provider` flag:
```azurecli-interactive
-az aks disable-addons -n myAKSCluster -g myResourceGroup --addons azure-keyvault-secrets-provider
+az aks disable-addons --addons azure-keyvault-secrets-provider -g myResourceGroup -n myAKSCluster
``` ## Next steps <!-- Add a context sentence for the following links --> After learning how to use the CSI Secrets Store Driver with an AKS Cluster, see the following resources: -- [Run the Azure Key Vault provider for Secrets Store CSI Driver][key-vault-provider] - [Enable CSI drivers for Azure Disks and Azure Files on AKS][csi-storage-drivers] <!-- Links -->
After learning how to use the CSI Secrets Store Driver with an AKS Cluster, see
[az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update [az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons
[key-vault-provider]: ../key-vault/general/key-vault-integrate-kubernetes.md [csi-storage-drivers]: ./csi-storage-drivers.md [create-key-vault]: ../key-vault/general/quick-create-cli.md
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/limit-egress-traffic.md
The following FQDN / application rules are required for AKS clusters that have t
|--|--|-| | **`data.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. | | **`store.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
-| **`gov-prod-policy-data.trafficmanager.net`** | **`HTTPS:443`** | This address is used for correct operation of Azure Policy. |
-| **`raw.githubusercontent.com`** | **`HTTPS:443`** | This address is used to pull the built-in policies from GitHub to ensure correct operation of Azure Policy. |
-| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | Azure Policy add-on that sends telemetry data to applications insights endpoint. |
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | Azure Policy add-on that sends telemetry data to applications insights endpoint. |
#### Azure China 21Vianet Required FQDN / application rules
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-dotnetcore-sqldb-app.md
description: Learn how to get a .NET Core app working in Azure App Service, with
ms.devlang: dotnet Previously updated : 06/20/2020 Last updated : 04/29/2021 zone_pivot_groups: app-service-platform-windows-linux
automation Automation Dsc Config From Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-config-from-server.md
The solution builds on the
[SharePointDSC resource](https://github.com/powershell/sharepointdsc) and extends it to orchestrate [gathering information](https://github.com/Microsoft/sharepointDSC.reverse#how-to-use)
-from existing SharePoint servers.
+from existing servers running SharePoint.
The latest version has multiple [extraction modes](https://github.com/Microsoft/SharePointDSC.Reverse/wiki/Extraction-Modes) to determine what level of information to include.
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-create-composite.md
Examples would be:
- create a web server - create a DNS server-- create a SharePoint server
+- create a server that runs SharePoint
- configure a SQL cluster - manage firewall settings - manage password settings
azure-app-configuration Concept Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/concept-private-endpoint.md
Using private endpoints for your App Configuration store enables you to:
## Conceptual overview
-A private endpoint is a special network interface for an Azure service in your [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet). When you create a private endpoint for your App Config store, it provides secure connectivity between clients on your VNet and your configuration store. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the configuration store uses a secure private link.
+A private endpoint is a special network interface for an Azure service in your [Virtual Network](../virtual-network/virtual-networks-overview.md) (VNet). When you create a private endpoint for your App Configuration store, it provides secure connectivity between clients on your VNet and your configuration store. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the configuration store uses a secure private link.
Applications in the VNet can connect to the configuration store over the private endpoint **using the same connection strings and authorization mechanisms that they would use otherwise**. Private endpoints can be used with all protocols supported by the App Configuration store.
While App Configuration doesn't support service endpoints, private endpoints can
When you create a private endpoint for a service in your VNet, a consent request is sent for approval to the service account owner. If the user requesting the creation of the private endpoint is also an owner of the account, this consent request is automatically approved.
-Service account owners can manage consent requests and private endpoints through the `Private Endpoints` tab of the config store in the [Azure portal](https://portal.azure.com).
+Service account owners can manage consent requests and private endpoints through the `Private Endpoints` tab of the App Configuration store in the [Azure portal](https://portal.azure.com).
### Private endpoints for App Configuration
-When creating a private endpoint, you must specify the App Configuration store to which it connects. If you have multiple App Configuration instances within an account, you need a separate private endpoint for each store.
+When creating a private endpoint, you must specify the App Configuration store to which it connects. If you have multiple App Configuration stores, you need a separate private endpoint for each store.
### Connecting to private endpoints
Azure relies upon DNS resolution to route connections from the VNet to the confi
> [!IMPORTANT] > Use the same connection string to connect to your App Configuration store using private endpoints as you would use for a public endpoint. Don't connect to the store using its `privatelink` subdomain URL.
+> [!NOTE]
+> By default, when a private endpoint is added to your App Configuration store, all requests for your App Configuration data over the public network are denied. You can enable public network access by using the following Azure CLI command. It's important to consider the security implications of enabling public network access in this scenario.
+>
+> ```azurecli-interactive
+> az appconfig update -g MyResourceGroup -n MyAppConfiguration --enable-public-network true
+> ```
+ ## DNS changes for private endpoints When you create a private endpoint, the DNS CNAME resource record for the configuration store is updated to an alias in a subdomain with the prefix `privatelink`. Azure also creates a [private DNS zone](../dns/private-dns-overview.md) corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. When you resolve the endpoint URL from within the VNet hosting the private endpoint, it resolves to the private endpoint of the store. When resolved from outside the VNet, the endpoint URL resolves to the public endpoint. When you create a private endpoint, the public endpoint is disabled.
-If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `AppConfigInstanceA.privatelink.azconfig.io` with the private endpoint IP address.
+If you are using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the service endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `[Your-store-name].privatelink.azconfig.io` with the private endpoint IP address.
> [!TIP] > When using a custom or on-premises DNS server, you should configure your DNS server to resolve the store name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
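If you manage the `privatelink.azconfig.io` zone yourself with Azure Private DNS, an A record like the one below maps the store name to the private endpoint IP. This is a minimal Az PowerShell sketch; the resource group, store name, and IP address are placeholder values, not taken from this article.

```powershell
# Minimal sketch: add an A record for the store in the privatelink zone.
# Replace the resource group, store name, and IP address with your own values.
$record = New-AzPrivateDnsRecordConfig -IPv4Address "10.0.0.5"
New-AzPrivateDnsRecordSet -ResourceGroupName "MyResourceGroup" `
    -ZoneName "privatelink.azconfig.io" `
    -Name "your-store-name" `
    -RecordType A -Ttl 3600 `
    -PrivateDnsRecords $record

# From a Windows VM inside the VNet, verify that the store name resolves to the private IP.
Resolve-DnsName "your-store-name.azconfig.io"
```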
Learn more about creating a private endpoint for your App Configuration store, r
Learn to configure your DNS server with private endpoints: - [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server)-- [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
+- [DNS configuration for Private Endpoints](../private-link/private-endpoint-overview.md#dns-configuration)
azure-functions Functions Bindings Service Bus Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-service-bus-trigger.md
The following table explains the binding configuration properties that you set i
|**connection**|**Connection**|The name of an app setting that contains the Service Bus connection string to use for this binding. If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name. For example, if you set `connection` to "MyServiceBus", the Functions runtime looks for an app setting that is named "AzureWebJobsMyServiceBus". If you leave `connection` empty, the Functions runtime uses the default Service Bus connection string in the app setting that is named "AzureWebJobsServiceBus".<br><br>To obtain a connection string, follow the steps shown at [Get the management credentials](../service-bus-messaging/service-bus-quickstart-portal.md#get-the-connection-string). The connection string must be for a Service Bus namespace, not limited to a specific queue or topic. <br><br>If you are using [version 5.x or higher of the extension](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher), instead of a connection string, you can provide a reference to a configuration section which defines the connection. See [Connections](./functions-reference.md#connections).| |**accessRights**|**Access**|Access rights for the connection string. Available values are `manage` and `listen`. The default is `manage`, which indicates that the `connection` has the **Manage** permission. If you use a connection string that does not have the **Manage** permission, set `accessRights` to "listen". Otherwise, the Functions runtime might fail trying to do operations that require manage rights. In Azure Functions version 2.x and higher, this property is not available because the latest version of the Service Bus SDK doesn't support manage operations.| |**isSessionsEnabled**|**IsSessionsEnabled**|`true` if connecting to a [session-aware](../service-bus-messaging/message-sessions.md) queue or subscription. `false` otherwise, which is the default value.|
+|**autoComplete**|**AutoComplete**|Whether the trigger should automatically call complete after processing, or whether the function code calls complete manually. The default is `true`.<br><br>Setting this property to `false` is only supported in C#.<br><br>If set to `true`, the trigger completes the message automatically if the function execution finishes successfully, and abandons the message otherwise.<br><br>When set to `false`, you are responsible for calling [MessageReceiver](/dotnet/api/microsoft.azure.servicebus.core.messagereceiver) methods to complete, abandon, or dead-letter the message. If an exception is thrown (and none of the `MessageReceiver` methods are called), then the lock remains. Once the lock expires, the message is re-queued with the `DeliveryCount` incremented and the lock is automatically renewed.<br><br>In non-C# functions, an exception in the function causes the runtime to call `abandonAsync` in the background. If no exception occurs, then `completeAsync` is called in the background. This property is available only in Azure Functions 2.x and higher. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Viscon Networking Innovations Inc.](https://www.visconni.com/)| |[VisioLogix Corporation](https://www.visiologix.com)| |[VVL Systems & Consulting, LLC](https://www.vvlsystems.com/)|
-|[Vistronix, LLC](http://www.vistronix.com/)|
+|Vistronix, LLC|
|[Vology Inc.](https://www.vology.com/)| |vSolvIT| |[Warren Averett Technology Group](https://warrenaverett.com/warren-averett-technology-group/)|
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
This zone covers workspace-specific mapping to the agent service automation endp
This zone configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle Log Analytics agents, no matter how many workspaces are used. [![Screenshot of Private DNS zone blob-core-windows-net.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox) > [!NOTE]
-> This entry is only added to Private Links setups created at or after April 19, 2021.
+> This entry is only added to Private Links setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds).
### Validating you are communicating over a Private Link
Go to the Azure portal. In your Log Analytics workspace resource menu, there's a
All scopes connected to the workspace show up in this screen. Connecting to scopes (AMPLSs) allows network traffic from the virtual network connected to each AMPLS to reach this workspace. Creating a connection through here has the same effect as setting it up on the scope, as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Note that a workspace can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations). ### Manage access from outside of private links scopes
-The settings on the bottom part of this page control access from public networks, meaning networks not connected through the listed scopes (AMPLSs). Setting **Allow public network access for ingestion** to **No** blocks ingestion of logs from machines outside of the connected scopes. Setting **Allow public network access for queries** to **No** blocks queries coming from machines outside of the scopes. That includes queries run via workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal, and that query Log Analytics data also have to be running within the private-linked VNET.
+The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs). Setting **Allow public network access for ingestion** to **No** blocks ingestion of logs from machines outside of the connected scopes. Setting **Allow public network access for queries** to **No** blocks queries coming from machines outside of the scopes. That includes queries run via workbooks, dashboards, API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal, and that query Log Analytics data also have to be running within the private-linked VNET.
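These toggles can also be set outside the portal. The following is a minimal Az PowerShell sketch, assuming the `Az.OperationalInsights` module exposes the public network access parameters for your workspace; the resource group and workspace names are placeholders.

```powershell
# Minimal sketch (placeholder names): block ingestion and queries from public networks.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "MyResourceGroup" `
    -Name "MyWorkspace" `
    -PublicNetworkAccessForIngestion "Disabled" `
    -PublicNetworkAccessForQuery "Disabled"
```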
### Exceptions Restricting access as explained above doesn't apply to the Azure Resource Manager and therefore has the following limitations:
Restricting access as explained above doesn't apply to the Azure Resource Manage
> Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings. ### Log Analytics solution packs download
-Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 can reach the agents' solution packs storage over the private link. This is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
+Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that you can do one of the following: * Re-create your AMPLS and the Private Endpoint connected to it
azure-monitor Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/partners.md
Through this unified experience, you will be able to:
- Streamline single sign-on (SSO) to Datadog; a separate sign-on from the Datadog portal is no longer required. - Get unified billing for the Datadog service through Azure subscription invoicing.
-Sign up for the [Public Preview](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4z3T2aGXUZPslUNJ3YpcapURFBHSUJIMVJTWDM5VUFPMVkyTVhMVlYzMS4u) of the new Datadog integration with Azure. Public preview will be available on Azure Marketplace starting October 2020.
- Subscribe to the preview of "Datadog integration with Azure" available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadog1591740804488.dd_liftr_v2?tab=Overview) If you are still using the previous manually configured integration, see the [documentation on the DataDog website](https://docs.datadoghq.com/integrations/azure/).
If you are still using the previous manually configured integration, see the [do
![DynaTrace Logo](./media/partners/dynatrace.png)
-The Dynatrace OneAgent integrates with Azure VMs and App Services via the Azure extension mechanism. This way Dynatrace OneAgent can gather performance metrics about hosts, network, and services. Besides just displaying metrics, Dynatrace visualizes environments end-to-end. It shows transactions from the client side to the database layer. Dynatrace provides AI-based correlation of problems and fully integrated root-cause-analysis to give method level insights into code and database. This insight makes troubleshooting and performance optimizations much easier.
+Dynatrace simplifies cloud complexity and is a single source of truth for your cloud platforms, allowing you to monitor the health of all your Azure applications and infrastructure. Dynatrace integrates with Azure Monitor/App Insights by enriching the data and extending observability into the platform with additional metrics for cloud infrastructure, load balancers, API Management Services, and more. Dynatrace supports over 80 Azure Monitor services that span application and microservices workloads, as well as infrastructure-related services.
+
+Get automated, AI-assisted observability across Azure environments:
+
+- Full stack observability in minutes, everything in context including metrics, logs, and traces.
+- Auto-discovery, continuous dependency mapping and instant answers to automate monitoring of Azure cloud services including App Service, Database Performance, AKS, HDInsight, and many more.
+- Davis, Dynatrace's AI, continuously analyzes billions of dependencies to provide the precise root cause.
+- Single source of truth for teams to collaborate and innovate, wherever they may reside.
+- Accelerate Azure cloud migrations.
+ [Dynatrace documentation](https://www.dynatrace.com/support/help/technology-support/cloud-platforms/microsoft-azure-services/)
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 04/30/2021 Last updated : 05/03/2021 # FAQs About Azure NetApp Files
The requirements for data migration from on premises to Azure NetApp Files are a
- Create the target Azure NetApp Files volume. - Transfer the source data to the target volume by using your preferred file copy tool.
+### Where does Azure NetApp Files store customer data?
+
+By default, your data stays within the region where you deploy your Azure NetApp Files volumes. However, you can choose to replicate your data on a volume-by-volume basis to available destination regions using [cross-region replication](cross-region-replication-introduction.md).
+ ### How do I create a copy of an Azure NetApp Files volume in another Azure region? Azure NetApp Files provides NFS and SMB volumes. Any file based-copy tool can be used to replicate data between Azure regions.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 04/06/2021 Last updated : 05/03/2021 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
### Machine Learning * [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html)
+### Education
+* [Moodle on Azure NetApp Files NFS storage](https://techcommunity.microsoft.com/t5/azure-architecture-blog/azure-netapp-files-for-nfs-storage-with-moodle/ba-p/2300630)
+ ## Windows Apps and SQL Server solutions This section provides references for Windows applications and SQL Server solutions.
azure-percept Azure Percept Audio Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-audio-datasheet.md
Title: Azure Percept Audio datasheet description: Check out the Azure Percept Audio datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-dk-datasheet.md
Title: Azure Percept DK datasheet description: Check out the Azure Percept DK datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept Azure Percept Vision Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/azure-percept-vision-datasheet.md
Title: Azure Percept Vision datasheet description: Check out the Azure Percept Vision datasheet for detailed device specifications--++ Last updated 02/16/2021
azure-percept How To Capture Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-capture-images.md
Title: Capture images for a no-code vision solution in Azure Percept Studio description: Learn how to capture images with your Azure Percept DK in Azure Percept Studio for a no-code vision solution--++ Last updated 02/12/2021
azure-percept How To Configure Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-configure-voice-assistant.md
Title: Configure voice assistant application using Azure IoT Hub description: Configure voice assistant application using Azure IoT Hub--++ Last updated 02/15/2021
azure-percept How To Connect To Percept Dk Over Serial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-connect-to-percept-dk-over-serial.md
Title: Connect to your Azure Percept DK over serial description: Learn how to set up a serial connection to your Azure Percept DK with PuTTY and a USB to TTL serial cable--++ Last updated 02/03/2021
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-deploy-model.md
Title: Deploy a vision AI model to your Azure Percept DK description: Learn how to deploy a vision AI model to your Azure Percept DK from Azure Percept Studio--++ Last updated 02/12/2021
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-manage-voice-assistant.md
Title: Configure voice assistant application within Azure Percept Studio description: Configure voice assistant application within Azure Percept Studio--++ Last updated 02/15/2021
azure-percept How To Ssh Into Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-ssh-into-percept-dk.md
Title: Connect to your Azure Percept DK over SSH description: Learn how to SSH into your Azure Percept DK with PuTTY--++ Last updated 03/18/2021
azure-percept How To View Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-telemetry.md
Title: View your Azure Percept DK's model inference telemetry description: Learn how to view your Azure Percept DK's vision model inference telemetry in Azure IoT Explorer--++ Last updated 02/17/2021
azure-percept How To View Video Stream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-view-video-stream.md
Title: View your Azure Percept DK's RTSP video stream description: Learn how to view the RTSP video stream from Azure Percept DK--++ Last updated 02/12/2021
azure-percept Quickstart Percept Dk Unboxing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-unboxing.md
Title: Unbox and assemble your Azure Percept DK components description: Learn how to unbox, connect, and power on your Azure Percept DK--++ Last updated 02/16/2021
azure-percept Tutorial No Code Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-no-code-speech.md
Title: Create a voice assistant with Azure Percept DK and Azure Percept Audio description: Learn how to create and deploy a no-code speech solution to your Azure Percept DK--++ Last updated 02/17/2021
azure-percept Tutorial Nocode Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/tutorial-nocode-vision.md
Title: Create a no-code vision solution in Azure Percept Studio description: Learn how to create a no-code vision solution in Azure Percept Studio and deploy it to your Azure Percept DK--++ Last updated 02/10/2021
azure-resource-manager Define Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/define-resource-dependency.md
The following example shows a logical SQL server and database. Notice that an ex
] ```
-For the full template, see [quickstart template for Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/blob/master/101-sql-database/azuredeploy.json).
+For the full template, see [quickstart template for Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.sql/sql-database/azuredeploy.json).
## reference and list functions
azure-sql Single Database Create Arm Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/single-database-create-arm-template-quickstart.md
Creating a [single database](single-database-overview.md) is the quickest and si
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-sql-database%2Fazuredeploy.json)
+[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fsql-database%2Fazuredeploy.json)
## Prerequisites
A single database has a defined set of compute, memory, IO, and storage resource
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-sql-database/). These resources are defined in the template:
$adminPassword = Read-Host -Prompt "Enter the SQL server administrator password"
$resourceGroupName = "${projectName}rg" New-AzResourceGroup -Name $resourceGroupName -Location $location
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-sql-database/azuredeploy.json" -administratorLogin $adminUser -administratorLoginPassword $adminPassword
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.sql/sql-database/azuredeploy.json" -administratorLogin $adminUser -administratorLoginPassword $adminPassword
Read-Host -Prompt "Press [ENTER] to continue ..." ```
azure-sql Sql Server Availability Group To Sql On Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md
+
+ Title: Migrate availability group
+description: Learn how to lift and shift your Always On availability group high availability solution to SQL Server on Azure VMs using Azure Migrate.
+++++ Last updated : 4/25/2021++
+# Migrate availability group to SQL Server on Azure VM
+
+This article teaches you to migrate your SQL Server Always On availability group to SQL Server on Azure VMs using the [Azure Migrate: Server Migration tool](../../../migrate/migrate-services-overview.md#azure-migrate-server-migration-tool). Using the migration tool, you will be able to migrate each replica in the availability group to an Azure VM hosting SQL Server, as well as the cluster metadata, availability group metadata and other necessary high availability components.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Prepare Azure and source environment for migration.
+> * Start replicating servers.
+> * Monitor replication.
+> * Run a full server migration.
+> * Reconfigure Always On availability group.
++
+This guide uses the agent-based migration approach of Azure Migrate, which treats any server or virtual machine as a physical server. When migrating physical machines, Azure Migrate: Server Migration uses the same replication architecture as the agent-based disaster recovery in the Azure Site Recovery service, and some components share the same code base. Some content might link to Site Recovery documentation.
++
+## Prerequisites
++
+Before you begin this tutorial, you should complete the following prerequisites:
+
+1. An Azure subscription. Create a [free account](https://azure.microsoft.com/pricing/free-trial/), if necessary.
+1. Install the [Azure PowerShell `Az` module](/powershell/azure/install-az-ps).
+1. Download the [PowerShell sample scripts](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/SQL%20Migration) from the GitHub repository (a setup sketch follows this list).
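+
+The following is a minimal sketch of this setup from a PowerShell prompt; the local folder path is a placeholder, and Git is assumed to be installed.
+
+```powershell
+# Minimal sketch: install the Az module and fetch the sample scripts (requires Git).
+Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+git clone https://github.com/Azure/azure-docs-powershell-samples.git C:\Temp\azure-docs-powershell-samples
+Set-Location "C:\Temp\azure-docs-powershell-samples\azure-migrate\SQL Migration"
+```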
+
+## Prepare Azure
+
+Prepare Azure for migration with the [Server Migration tool](../../../migrate/migrate-services-overview.md#azure-migrate-server-migration-tool).
+
+|**Task** | **Details**|
+| |
+|**Create an Azure Migrate project** | Your Azure account needs Contributor or Owner permissions to [create a new project](../../../migrate/create-manage-projects.md).|
+|**Verify permissions for your Azure account** | Your Azure account needs Contributor or Owner permissions on the Azure subscription, permissions to register Azure Active Directory (AAD) apps, and User Access Administrator permissions on the Azure subscription to create a Key Vault, to create a VM, and to write to an Azure managed disk. |
+|**Set up an Azure virtual network** | [Set up](../../../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.|
++
+To check you have proper permissions, follow these steps:
+
+1. In the Azure portal, open the subscription, and select **Access control (IAM)**.
+2. In **Check access**, find the relevant account, and select it to view permissions.
+3. You should have **Contributor** or **Owner** permissions.
+ - If you just created a free Azure account, you're the owner of your subscription.
+ - If you're not the subscription owner, work with the owner to assign the role.
+
+If you need to assign permissions, follow the steps in [Prepare for an Azure user account](../../../migrate/tutorial-discover-vmware.md#prepare-an-azure-user-account).
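+
+As a quick check from PowerShell, the following minimal sketch lists your role assignments at the subscription scope, assuming you're already signed in with `Connect-AzAccount`:
+
+```powershell
+# Minimal sketch: confirm you hold Contributor or Owner on the subscription.
+$ctx = Get-AzContext
+Get-AzRoleAssignment -Scope "/subscriptions/$($ctx.Subscription.Id)" -SignInName $ctx.Account.Id |
+    Select-Object RoleDefinitionName, Scope
+```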
++
+## Prepare for migration
+
+To prepare for server migration, verify the physical server settings, and prepare to deploy a replication appliance.
+
+### Check machine requirements
++
+Ensure source machines comply with requirements to migrate to Azure. Follow these steps:
+
+1. [Verify](../../../migrate/migrate-support-matrix-physical-migration.md#physical-server-requirements) server requirements.
+1. Verify that source machines that you replicate to Azure comply with [Azure VM requirements](../../../migrate/migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+1. Some [Windows](../../../migrate/prepare-for-migration.md#windows-machines) sources require a few additional changes. Migrating the source before making these changes could prevent the VM from booting in Azure. For some operating systems, Azure Migrate makes these changes automatically.
++
+### Prepare for replication
+
+Azure Migrate: Server Migration uses a replication appliance to replicate machines to Azure. The replication appliance runs the following components:
+
+- **Configuration server**: The configuration server coordinates communications between on-premises and Azure, and manages data replication.
+- **Process server**: The process server acts as a replication gateway. It receives replication data; optimizes it with caching, compression, and encryption, and sends it to a cache storage account in Azure.
+
+Prepare for appliance deployment as follows:
+
+- Create a Windows Server 2016 machine to host the replication appliance. Review the [machine requirements](../../../migrate/migrate-replication-appliance.md#appliance-requirements).
+- The replication appliance uses MySQL. Review the [options](../../../migrate/migrate-replication-appliance.md#mysql-installation) for installing MySQL on the appliance.
+- Review the Azure URLs required for the replication appliance to access [public](../../../migrate/migrate-replication-appliance.md#url-access) and [government](../../../migrate/migrate-replication-appliance.md#azure-government-url-access) clouds.
+- Review [port](../../../migrate/migrate-replication-appliance.md#port-access) access requirements for the replication appliance.
+
+> [!NOTE]
+> The replication appliance should be installed on a machine other than the source machine you are replicating or migrating, and not on any machine that has had the Azure Migrate discovery and assessment appliance installed before.
+
+### Download replication appliance installer
+
+To download the replication appliance installer, follow these steps:
+
+1. In the Azure Migrate project > **Servers**, in **Azure Migrate: Server Migration**, select **Discover**.
+
+ ![Discover VMs](../../../migrate/media/tutorial-migrate-physical-virtual-machines/migrate-discover.png)
+
+1. In **Discover machines** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
+1. In **Target region**, select the Azure region to which you want to migrate the machines.
+1. Select **Confirm that the target region for migration is region-name**.
+1. Select **Create resources**. This creates an Azure Site Recovery vault in the background.
+ - If you've already set up migration with Azure Migrate: Server Migration, the target option can't be configured, since resources were set up previously.
+ - You can't change the target region for this project after selecting this button.
+ - All subsequent migrations are to this region.
+
+1. In **Do you want to install a new replication appliance?**, select **Install a replication appliance**.
+1. In **Download and install the replication appliance software**, download the appliance installer and the registration key. You need the key to register the appliance. The key is valid for five days after it's downloaded.
+
+ ![Download provider](../../../migrate/media/tutorial-migrate-physical-virtual-machines/download-provider.png)
+
+1. Copy the appliance setup file and key file to the Windows Server 2016 machine you created for the appliance.
+1. After the installation completes, the Appliance configuration wizard will launch automatically (You can also launch the wizard manually by using the cspsconfigtool shortcut that is created on the desktop of the appliance machine). Use the **Manage Accounts** tab of the wizard to create a dummy account with the following details:
+
+ - "guest" as the friendly name
+ - "username" as the username
+ - "password" as the password for the account.
+
+ You will use this dummy account in the Enable Replication stage.
+
+1. After setup completes, and the appliance restarts, in **Discover machines**, select the new appliance in **Select Configuration Server**, and select **Finalize registration**. Finalize registration performs a couple of final tasks to prepare the replication appliance.
+
+ ![Finalize registration](../../../migrate/media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
++
+## Install Mobility service
+
+Install the Mobility service agent on the servers you want to migrate. The agent installers are available on the replication appliance. Find the right installer, and install the agent on each machine you want to migrate.
++
+To install the Mobility service, follow these steps:
+
+1. Sign in to the replication appliance.
+1. Navigate to **%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository**.
+1. Find the installer for the machine operating system and version. Review [supported operating systems](../../../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines).
+1. Copy the installer file to the machine you want to migrate.
+1. Make sure that you have the passphrase that was generated when you deployed the appliance.
+ - Store the file in a temporary text file on the machine.
+ - You can obtain the passphrase on the replication appliance. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
+ - Don't regenerate the passphrase. This will break connectivity and you will have to reregister the replication appliance.
+ - In the */Platform* parameter, specify *VMware* for both VMware machines and physical machines.
+
+1. Connect to the machine and extract the contents of the installer file to a local folder (such as c:\temp). Run this in an admin command prompt:
+
+ ```
+ ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe
+ MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
+ cd C:\Temp\Extracted
+ ```
+
+2. Run the Mobility Service Installer:
+
+ ```
+ UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent
+ ```
+
+3. Register the agent with the replication appliance:
+
+ ```
+ cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent
+ UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP address> /PassphraseFilePath <Passphrase File Path>
+ ```
+
+It may take some time after installation for discovered machines to appear in Azure Migrate: Server Migration. As VMs are discovered, the **Discovered servers** count rises.
+
+![Discovered servers](../../../migrate/media/tutorial-migrate-physical-virtual-machines/discovered-servers.png)
+
+## Prepare source machines
+
+To prepare source machines, run the `Get-ClusterInfo.ps1` script on a cluster node to retrieve information on the cluster resources. The script will output the role name, resource name, IP, and probe port in the `Cluster-Config.csv` file.
+
+```powershell
+./Get-ClusterInfo.ps1
+```
+
+## Create load balancer
+
+For the cluster and cluster roles to respond properly to requests, an Azure load balancer is required. Without a load balancer, the other VMs are unable to reach the cluster IP address because it's not recognized as belonging to the network or the cluster.
+
+To create the load balancer, follow these steps:
+
+1. Fill out the columns in the `Cluster-Config.csv` file:
+
+**Column Header** | **Description**
+ |
+NewIP | Specify the IP address in the Azure virtual network (or subnet) for each resource in the CSV file.
+ServicePort | Specify the service port to be used by each resource in the CSV file. For the SQL clustered resource, use the same value for the service port as the probe port in the CSV. For other cluster roles, the default value used is 1433, but you can continue to use the port numbers that are configured in your current setup.
++
+2. Run the `Create-ClusterLoadBalancer.ps1` script to create the load balancer using the following parameters:
+
+**Parameter** | **Type** | **Description**
+ | |
+ConfigFilePath | Mandatory| Specify the path for the `Cluster-Config.csv` file that you have filled out in the previous step.
+ResourceGroupName | Mandatory|Specify the name of the resource group in which the load balancer is to be created.
+VNetName | Mandatory|Specify the name of the Azure virtual network that the load balancer will be associated to.
+SubnetName | Mandatory|Specify the name of the subnet in the Azure virtual network that the load balancer will be associated to.
+VNetResourceGroupName | Mandatory|Specify the name of the resource group for the Azure virtual network that the load balancer will be associated to.
+Location | Mandatory|Specify the location in which the load balancer should be created.
+LoadBalancerName | Mandatory|Specify the name of the load balancer to be created.
++
+```powershell
+./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./Cluster-Config.csv -ResourceGroupName $resourcegroupname -VNetName $vnetname -SubnetName $subnetname -VNetResourceGroupName $vnetresourcegroupname -Location "eastus" -LoadBalancerName $loadbalancername
+```
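+
+If you prefer to see what such a load balancer looks like when built by hand, the following is a minimal Az PowerShell sketch of an internal Standard load balancer with a health probe and a floating-IP rule, which is roughly what a clustered SQL resource needs. The frontend IP, probe port, and rule ports are placeholder values, not output of the script above.
+
+```powershell
+# Minimal sketch (placeholder names, IPs, and ports): internal Standard LB for a clustered SQL role.
+$vnet   = Get-AzVirtualNetwork -Name $vnetname -ResourceGroupName $vnetresourcegroupname
+$subnet = Get-AzVirtualNetworkSubnetConfig -Name $subnetname -VirtualNetwork $vnet
+
+$fe    = New-AzLoadBalancerFrontendIpConfig -Name "sql-frontend" -PrivateIpAddress "10.10.1.10" -Subnet $subnet
+$be    = New-AzLoadBalancerBackendAddressPoolConfig -Name "sql-backend"
+$probe = New-AzLoadBalancerProbeConfig -Name "sql-probe" -Protocol Tcp -Port 59999 -IntervalInSeconds 5 -ProbeCount 2
+$rule  = New-AzLoadBalancerRuleConfig -Name "sql-rule" -FrontendIpConfiguration $fe -BackendAddressPool $be `
+    -Probe $probe -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 -EnableFloatingIP
+
+New-AzLoadBalancer -Name $loadbalancername -ResourceGroupName $resourcegroupname -Location "eastus" -Sku Standard `
+    -FrontendIpConfiguration $fe -BackendAddressPool $be -Probe $probe -LoadBalancingRule $rule
+```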
+
+## Replicate machines
+
+Now, select machines for migration. You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
+
+To replicate machines, follow these steps:
+
+1. In the Azure Migrate project > **Servers**, **Azure Migrate: Server Migration**, select **Replicate**.
+
+ ![Screenshot of the Azure Migrate - Servers screen showing the Replicate button selected in Azure Migrate: Server Migration under Migration tools](../../../migrate/media/tutorial-migrate-physical-virtual-machines/select-replicate.png)
+
+1. In **Replicate**, > **Source settings** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
+1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
+1. In **Process Server**, select the name of the replication appliance.
+1. In **Guest credentials**, select the dummy account created during the [replication installer setup](#download-replication-appliance-installer) earlier in this article. Then select **Next: Virtual machines**.
+
+ ![Screenshot of the Source settings tab in the Replicate screen with the Guest credentials field highlighted.](../../../migrate/media/tutorial-migrate-physical-virtual-machines/source-settings.png)
+
+1. In **Virtual Machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
+1. Check each VM you want to migrate. Then select **Next: Target settings**.
+
+ ![Select VMs](../../../migrate/media/tutorial-migrate-physical-virtual-machines/select-vms.png)
++
+1. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration.
+1. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
+1. In **Availability options**, select:
+ - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones.
+ - Availability Set to place the migrated machine in an Availability Set. The target resource group that was selected must have one or more availability sets in order to use this option.
+ - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+
+1. In **Disk encryption type**, select:
+ - Encryption-at-rest with platform-managed key
+ - Encryption-at-rest with customer-managed key
+ - Double encryption with platform-managed and customer-managed keys
+
+ > [!NOTE]
+ > To replicate VMs with CMK, you'll need to [create a disk encryption set](https://go.microsoft.com/fwlink/?linkid=2151800) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
+
+1. In **Azure Hybrid Benefit**:
+
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then select **Next**.
+
+ :::image type="content" source="../../../migrate/media/tutorial-migrate-vmware/target-settings.png" alt-text="Target settings":::
+
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](../../../migrate/migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+ - **Availability Zone**: Specify the Availability Zone to use.
+ - **Availability Set**: Specify the Availability Set to use.
+
+ ![Compute settings](../../../migrate/media/tutorial-migrate-physical-virtual-machines/compute-settings.png)
+
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then select **Next**.
+
+ ![Disk settings](../../../migrate/media/tutorial-migrate-physical-virtual-machines/disks.png)
+
+1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
+
+> [!NOTE]
+> You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+
+## Track and monitor
+
+Replication proceeds in the following sequence:
+
+- When you select **Replicate**, a _Start Replication_ job begins.
+- When the _Start Replication_ job finishes successfully, the machines begin their initial replication to Azure.
+- After initial replication finishes, delta replication begins. Incremental changes to on-premises disks are periodically replicated to the replica disks in Azure.
++
+You can track job status in the portal notifications.
+
+You can monitor replication status by selecting on **Replicating servers** in **Azure Migrate: Server Migration**.
+![Monitor replication](../../../migrate/media/tutorial-migrate-physical-virtual-machines/replicating-servers.png)
++
+## Migrate VMs
+
+After machines are replicated, they are ready for migration. To migrate your servers, follow these steps:
++
+1. In the Azure Migrate project > **Servers** > **Azure Migrate: Server Migration**, select **Replicating servers**.
+
+ ![Replicating servers](../../../migrate/media/tutorial-migrate-physical-virtual-machines/replicate-servers.png)
+
+2. To ensure the migrated server is synchronized with the source server, stop the SQL Server service on every replica in the availability group, starting with the secondary replicas (in **SQL Server Configuration Manager** > **Services**), while ensuring the disks hosting SQL data are online (a minimal PowerShell sketch follows this list).
+3. In **Replicating machines** > select server name > **Overview**, ensure that the last synchronized timestamp is after you have stopped the SQL Server service on the servers to be migrated before you move on to the next step. This should only take a few minutes.
+4. In **Replicating machines**, right-click the VM > **Migrate**.
+5. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **No** > **OK**.
+
+ > [!NOTE]
+ > For physical server migration, shut down of source machine is not supported automatically. The recommendation is to bring the application down as part of the migration window (don't let the applications accept any connections) and then initiate the migration (the server needs to be kept running, so remaining changes can be synchronized) before the migration is completed.
+
+6. A migration job starts for the VM. Track the job in Azure notifications.
+7. After the job finishes, you can view and manage the VM from the **Virtual Machines** page.
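+
+The following minimal sketch stops the SQL Server service on each replica from a single machine, as mentioned in the list above. It assumes the default instance name `MSSQLSERVER` and that PowerShell remoting is enabled; the server names are placeholders.
+
+```powershell
+# Minimal sketch (placeholder names): stop SQL Server on the secondaries first, then the primary.
+$replicas = 'sqlnode2','sqlnode3','sqlnode1'
+foreach ($server in $replicas) {
+    Invoke-Command -ComputerName $server -ScriptBlock {
+        Stop-Service -Name 'MSSQLSERVER' -Force   # database engine stops; cluster disks stay online
+    }
+}
+```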
+
+## Reconfigure cluster
+
+After your VMs have migrated, reconfigure the cluster. Follow these steps:
+
+1. Shut down the migrated servers in Azure.
+1. Add the migrated machines to the backend pool of the load balancer. Navigate to **Load Balancer** > **Backend pools** > select the backend pool > **add migrated machines** (a PowerShell alternative is sketched after these steps).
+1. Start the migrated servers in Azure and log in to any node.
+1. Copy the `ClusterConfig.csv` file and run the `Update-ClusterConfig.ps1` script passing the CSV as a parameter. This ensures the cluster resources are updated with the new configuration for the cluster to work in Azure.
+
+ ```powershell
+ ./Update-ClusterConfig.ps1 -ConfigFilePath $filepath
+ ```
+
+Your Always On availability group is ready.
+
+## Complete the migration
+
+1. After the migration is done, right-click the VM > **Stop migration**. This does the following:
+ - Stops replication for the on-premises machine.
+ - Removes the machine from the **Replicating servers** count in Azure Migrate: Server Migration.
+ - Cleans up replication state information for the machine.
+2. Install the Azure VM [Windows](../../../virtual-machines/extensions/agent-windows.md) agent on the migrated machines.
+3. Perform any post-migration app tweaks, such as updating database connection strings, and web server configurations.
+4. Perform final application and migration acceptance testing on the migrated application now running in Azure.
+5. Cut over traffic to the migrated Azure VM instance.
+6. Remove the on-premises VMs from your local VM inventory.
+7. Remove the on-premises VMs from local backups.
+8. Update any internal documentation to show the new location and IP address of the Azure VMs.
+
+## Post-migration best practices
+
+- For SQL Server:
+ - Install [SQL Server IaaS Agent extension](../../virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) to automate management and administration tasks.
+ - [Optimize](../../virtual-machines/windows/performance-guidelines-best-practices-checklist.md) SQL Server performance on Azure VMs.
+ - Understand [pricing](../../virtual-machines/windows/pricing-guidance.md#free-licensed-sql-server-editions) for SQL Server on Azure.
+- For increased resilience:
+ - Keep data secure by backing up Azure VMs using the [Azure Backup service](../../../backup/quick-backup-vm-portal.md).
+ - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with [Site Recovery](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md).
+- For increased security:
+ - Lock down and limit inbound traffic access with [Azure Security Center - Just in time administration](../../../security-center/security-center-just-in-time.md).
+ - Restrict network traffic to management endpoints with [Network Security Groups](../../../virtual-network/network-security-groups-overview.md).
+ - Deploy [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Azure Security Center](https://azure.microsoft.com/services/security-center/).
+- For monitoring and management:
+ - Consider deploying [Azure Cost Management](../../../cost-management-billing/cloudyn/overview.md) to monitor resource usage and spending.
++
+## Next steps
+
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
azure-sql Sql Server Failover Cluster Instance To Sql On Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md
+
+ Title: Migrate failover cluster instance
+description: Learn how to lift and shift your Always On failover cluster instance high availability solution to SQL Server on Azure VMs using Azure Migrate.
+++++ Last updated : 4/25/2021++
+# Migrate failover cluster instance to SQL Server on Azure VMs
+
+This article teaches you to migrate your Always On failover cluster instance (FCI) to SQL Server on Azure VMs using the [Azure Migrate: Server Migration tool](../../../migrate/migrate-services-overview.md#azure-migrate-server-migration-tool). Using the migration tool, you will be able to migrate each node in the failover cluster instance to an Azure VM hosting SQL Server, as well as the cluster and FCI metadata.
+
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Prepare Azure and source environment for migration.
+> * Start replicating VMs.
+> * Monitor replication.
+> * Run a full VM migration.
+> * Reconfigure SQL failover cluster with Azure shared disks.
++
+This guide uses the agent-based migration approach of Azure Migrate, which treats any server or virtual machine as a physical server. When migrating physical machines, Azure Migrate: Server Migration uses the same replication architecture as the agent-based disaster recovery in the Azure Site Recovery service, and some components share the same code base. Some content might link to Site Recovery documentation.
++
+## Prerequisites
+
+Before you begin this tutorial, you should complete the following prerequisites:
+
+1. An Azure subscription. Create a [free account](https://azure.microsoft.com/pricing/free-trial/), if necessary.
+1. Install the [Azure PowerShell `Az` module](/powershell/azure/install-az-ps).
+1. Download the [PowerShell sample scripts](https://github.com/Azure/azure-docs-powershell-samples/tree/master/azure-migrate/SQL%20Migration) from the GitHub repository.
+
+## Prepare Azure
+
+Prepare Azure for migration with Server Migration.
+
+**Task** | **Details**
+ |
+**Create an Azure Migrate project** | Your Azure account needs Contributor or Owner permissions to [create a new project](https://docs.microsoft.com/azure/migrate/create-manage-projects).
+**Verify permissions for your Azure account** | Your Azure account needs Contributor or Owner permissions on the Azure subscription, permissions to register Azure Active Directory (AAD) apps, and User Access Administrator permissions on the Azure subscription to create a Key Vault, to create a VM, and to write to an Azure managed disk.
+**Set up an Azure virtual network** | [Set up](../../../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.
++
+To check you have proper permissions, follow these steps:
+
+1. In the Azure portal, open the subscription, and select **Access control (IAM)**.
+2. In **Check access**, find the relevant account, and select it to view permissions.
+3. You should have **Contributor** or **Owner** permissions.
+ - If you just created a free Azure account, you're the owner of your subscription.
+ - If you're not the subscription owner, work with the owner to assign the role.
+
+If you need to assign permissions, follow the steps in [Prepare for an Azure user account](../../../migrate/tutorial-discover-vmware.md#prepare-an-azure-user-account).
++
+## Prepare for migration
+
+To prepare for server migration, you need to verify the server settings, and prepare to deploy a replication appliance.
+
+### Check machine requirements
+
+Make sure machines comply with requirements for migration to Azure.
+
+1. [Verify](../../../migrate/migrate-support-matrix-physical-migration.md#physical-server-requirements) server requirements.
+2. Verify that source machines that you replicate to Azure comply with [Azure VM requirements](../../../migrate/migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+1. Some [Windows](../../../migrate/prepare-for-migration.md#windows-machines) sources require a few additional changes. Migrating the source before making these changes could prevent the VM from booting in Azure. For some operating systems, Azure Migrate makes these changes automatically.
+
+### Prepare for replication
+
+Azure Migrate: Server Migration uses a replication appliance to replicate machines to Azure. The replication appliance runs the following components:
+
+- **Configuration server**: The configuration server coordinates communications between on-premises and Azure, and manages data replication.
+- **Process server**: The process server acts as a replication gateway. It receives replication data; optimizes it with caching, compression, and encryption, and sends it to a cache storage account in Azure.
+
+Prepare for appliance deployment as follows:
+
+- Create a Windows Server 2016 machine to host the replication appliance. Review the [machine requirements](../../../migrate/migrate-replication-appliance.md#appliance-requirements).
+- The replication appliance uses MySQL. Review the [options](../../../migrate/migrate-replication-appliance.md#mysql-installation) for installing MySQL on the appliance.
+- Review the Azure URLs required for the replication appliance to access [public](../../../migrate/migrate-replication-appliance.md#url-access) and [government](../../../migrate/migrate-replication-appliance.md#azure-government-url-access) clouds.
+- Review [port](../../../migrate/migrate-replication-appliance.md#port-access) access requirements for the replication appliance.
+
+> [!NOTE]
+> The replication appliance should be installed on a machine other than the source machine you are replicating or migrating, and not on any machine that has had the Azure Migrate discovery and assessment appliance installed before.
+
+### Download replication appliance installer
+
+To download the replication appliance installer, follow these steps:
+
+1. In the Azure Migrate project > **Servers**, in **Azure Migrate: Server Migration**, select **Discover**.
+
+ ![Discover VMs](../../../migrate/media/tutorial-migrate-physical-virtual-machines/migrate-discover.png)
+
+1. In **Discover machines** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
+1. In **Target region**, select the Azure region to which you want to migrate the machines.
+1. Select **Confirm that the target region for migration is region-name**.
+1. Select **Create resources**. This creates an Azure Site Recovery vault in the background.
+ - If you've already set up migration with Azure Migrate Server Migration, the target option can't be configured, since resources were set up previously.
+ - You can't change the target region for this project after selecting this button.
+ - All subsequent migrations are to this region.
+
+1. In **Do you want to install a new replication appliance?**, select **Install a replication appliance**.
+1. In **Download and install the replication appliance software**, download the appliance installer and the registration key. You need the key to register the appliance. The key is valid for five days after it's downloaded.
+
+ ![Download provider](../../../migrate/media/tutorial-migrate-physical-virtual-machines/download-provider.png)
+
+1. Copy the appliance setup file and key file to the Windows Server 2016 machine you created for the appliance.
+1. After the installation completes, the appliance configuration wizard launches automatically. (You can also launch the wizard manually by using the **cspsconfigtool** shortcut created on the desktop of the appliance machine.) Use the **Manage Accounts** tab of the wizard to create a dummy account with the following details:
+
+ - "guest" as the friendly name
+ - "username" as the username
+ - "password" as the password for the account.
+
+ You will use this dummy account in the Enable Replication stage.
+
+1. After setup completes, and the appliance restarts, in **Discover machines**, select the new appliance in **Select Configuration Server**, and select **Finalize registration**. Finalize registration performs a couple of final tasks to prepare the replication appliance.
+
+ ![Finalize registration](../../../migrate/media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
++
+## Install the Mobility service
+
+Install the Mobility service agent on the servers you want to migrate. The agent installers are available on the replication appliance. Find the right installer, and install the agent on each machine you want to migrate.
+
+To install the Mobility service, follow these steps:
+
+1. Sign in to the replication appliance.
+2. Navigate to **%ProgramData%\ASR\home\svsystems\pushinstallsvc\repository**.
+3. Find the installer for the machine operating system and version. Review [supported operating systems](/site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines).
+4. Copy the installer file to the machine you want to migrate.
+5. Make sure that you have the passphrase that was generated when you deployed the appliance.
+   - Store the passphrase in a temporary text file on the machine.
+ - You can obtain the passphrase on the replication appliance. From the command line, run **C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe -v** to view the current passphrase.
+ - Don't regenerate the passphrase. This will break connectivity and you will have to reregister the replication appliance.
+ - In the */Platform* parameter, specify *VMware* for both VMware machines and physical machines.
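+
+    For example, to save the passphrase to a temporary file directly on the replication appliance, a minimal sketch (this assumes the tool writes the passphrase to standard output; the file name is an arbitrary example):
+
+    ```powershell
+    # Run on the replication appliance. View the current passphrase (-v) and write it to a temporary file.
+    # Don't regenerate the passphrase - doing so breaks connectivity with protected machines.
+    & 'C:\ProgramData\ASR\home\svsystems\bin\genpassphrase.exe' -v |
+        Out-File -FilePath 'C:\Temp\MobSvc.passphrase' -Encoding ascii
+    ```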
+
+1. Connect to the machine and extract the contents of the installer file to a local folder (such as c:\temp). Run this in an admin command prompt:
+
+    ```cmd
+ ren Microsoft-ASR_UA*Windows*release.exe MobilityServiceInstaller.exe
+ MobilityServiceInstaller.exe /q /x:C:\Temp\Extracted
+ cd C:\Temp\Extracted
+ ```
+
+2. Run the Mobility Service Installer:
+
+    ```cmd
+ UnifiedAgent.exe /Role "MS" /Platform "VmWare" /Silent
+ ```
+
+3. Register the agent with the replication appliance:
+
+    ```cmd
+ cd C:\Program Files (x86)\Microsoft Azure Site Recovery\agent
+ UnifiedAgentConfigurator.exe /CSEndPoint <replication appliance IP address> /PassphraseFilePath <Passphrase File Path>
+ ```
+
+It may take some time after installation for discovered machines to appear in Azure Migrate: Server Migration. As VMs are discovered, the **Discovered servers** count rises.
+
+![Discovered servers](../../../migrate/media/tutorial-migrate-physical-virtual-machines/discovered-servers.png)
+
+## Prepare source machines
+
+To prepare source machines, you'll need information from the cluster.
+
+> [!CAUTION]
+> - Maintain disk ownership throughout the replication process until the final cutover. If there's a change in disk ownership, the volumes could be corrupted and replication would need to be retriggered. Set the preferred owner for each disk to avoid transfer of ownership during the replication process.
+> - Avoid patching activities and system reboots during the replication process to avoid transfer of disk ownership.
+
+To prepare source machines, do the following:
+
+1. **Identify disk ownership:** Log in to one of the cluster nodes and open Failover Cluster Manager. Identify the owner node for the disks to determine the disks that need to be migrated with each server.
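+
+    For example, a quick way to list disk ownership is the FailoverClusters PowerShell module. This is a sketch only; run it on a cluster node, and note that resource names vary by environment.
+
+    ```powershell
+    # List physical disk resources, the role (group) they belong to, and the node that currently owns them.
+    Import-Module FailoverClusters
+    Get-ClusterResource |
+        Where-Object { $_.ResourceType -eq 'Physical Disk' } |
+        Select-Object Name, OwnerGroup, OwnerNode, State
+    ```
+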
+2. **Retrieve cluster information:** Run the `Get-ClusterInfo.ps1` script on a cluster node to retrieve information about the cluster resources. The script outputs the role name, resource name, IP, and probe port in the `Cluster-Config.csv` file. Use this CSV file to create and assign resources in Azure later in this article.
+
+ ```powershell
+ ./Get-ClusterInfo.ps1
+ ```
+
+## Create load balancer
+
+For the cluster and cluster roles to respond properly to requests, an Azure Load Balancer is required. Without a load balancer, the other VMs can't reach the cluster IP address because it isn't recognized as belonging to the network or to the cluster.
+
+1. Fill out the columns in the `Cluster-Config.csv` file:
+
+    **Column Header** | **Description**
+    --- | ---
+    NewIP | Specify the IP address in the Azure virtual network (or subnet) for each resource in the CSV file.
+    ServicePort | Specify the service port to be used by each resource in the CSV file. For the SQL cluster resource, use the same value for the service port as the probe port in the CSV. For other cluster roles, the default value used is 1433, but you can continue to use the port numbers that are configured in your current setup.
++
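+
+    A quick way to sanity-check the filled-out file before you continue is to load it in PowerShell (a sketch; the exact column names other than NewIP and ServicePort depend on the `Get-ClusterInfo.ps1` output):
+
+    ```powershell
+    # Review the cluster configuration, including the NewIP and ServicePort values you added.
+    Import-Csv -Path .\Cluster-Config.csv | Format-Table -AutoSize
+    ```
+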
+1. Run the `Create-ClusterLoadBalancer.ps1` script to create the load balancer using the following mandatory parameters:
+
+    **Parameter** | **Type** | **Description**
+    --- | --- | ---
+    ConfigFilePath | Mandatory | Specify the path for the `Cluster-Config.csv` file that you filled out in the previous step.
+    ResourceGroupName | Mandatory | Specify the name of the resource group in which the load balancer is to be created.
+    VNetName | Mandatory | Specify the name of the Azure virtual network that the load balancer will be associated with.
+    SubnetName | Mandatory | Specify the name of the subnet in the Azure virtual network that the load balancer will be associated with.
+    VNetResourceGroupName | Mandatory | Specify the name of the resource group for the Azure virtual network that the load balancer will be associated with.
+    Location | Mandatory | Specify the location in which the load balancer should be created.
+    LoadBalancerName | Mandatory | Specify the name of the load balancer to be created.
++
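+
+    The sample invocation below uses PowerShell variables as placeholders. A minimal sketch that defines them first (the values are hypothetical; substitute your own resource names):
+
+    ```powershell
+    # Placeholder values only - replace with the names of your own Azure resources.
+    $resourcegroupname     = 'rg-sql-fci-migration'
+    $vnetname              = 'vnet-migration'
+    $subnetname            = 'subnet-sql'
+    $vnetresourcegroupname = 'rg-network'
+    $loadbalancername      = 'lb-sql-cluster'
+    ```
+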
+ ```powershell
+    ./Create-ClusterLoadBalancer.ps1 -ConfigFilePath ./Cluster-Config.csv -ResourceGroupName $resourcegroupname -VNetName $vnetname -SubnetName $subnetname -VNetResourceGroupName $vnetresourcegroupname -Location "eastus" -LoadBalancerName $loadbalancername
+ ```
+
+## Replicate machines
+
+Now, select machines for migration. You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
+
+1. In the Azure Migrate project > **Servers**, **Azure Migrate: Server Migration**, select **Replicate**.
+
+ ![Screenshot of the Azure Migrate - Servers screen showing the Replicate button selected in Azure Migrate: Server Migration under Migration tools.](../../../migrate/media/tutorial-migrate-physical-virtual-machines/select-replicate.png)
+
+1. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Physical or other (AWS, GCP, Xen, etc.)**.
+1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
+1. In **Process Server**, select the name of the replication appliance.
+1. In **Guest credentials**, select the dummy account created previously during the [replication installer setup](#download-replication-appliance-installer). Then select **Next: Virtual machines**.
+
+ ![Screenshot of the Source settings tab in the Replicate screen with the Guest credentials field highlighted.](../../../migrate/media/tutorial-migrate-physical-virtual-machines/source-settings.png)
+
+1. In **Virtual Machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
+1. Check each VM you want to migrate. Then select **Next: Target settings**.
+
+ ![Select VMs](../../../migrate/media/tutorial-migrate-physical-virtual-machines/select-vms.png)
++
+1. In **Target settings**, select the subscription and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration.
+1. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
+1. In **Availability options**, select:
+    - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machines in the **Compute** tab. This option is only available if the target region selected for the migration supports Availability Zones.
+ - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
+ - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+
+1. In **Disk encryption type**, select:
+ - Encryption-at-rest with platform-managed key
+ - Encryption-at-rest with customer-managed key
+ - Double encryption with platform-managed and customer-managed keys
+
+ > [!NOTE]
+ > To replicate VMs with CMK, you'll need to [create a disk encryption set](https://go.microsoft.com/fwlink/?linkid=2151800) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
+
+1. In **Azure Hybrid Benefit**:
+
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then select **Next**.
+
+ :::image type="content" source="../../../migrate/media/tutorial-migrate-vmware/target-settings.png" alt-text="Target settings":::
+
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](../../../migrate/migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+ - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
+ - **Availability Zone**: Specify the Availability Zone to use.
+ - **Availability Set**: Specify the Availability Set to use.
+
+ ![Compute settings](../../../migrate/media/tutorial-migrate-physical-virtual-machines/compute-settings.png)
+
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then select **Next**.
+ - Use the list that you had made earlier to select the disks to be replicated with each server. Exclude other disks from replication.
+    - Use the list that you made earlier to select the disks to be replicated with each server. Exclude other disks from replication.
+
+ ![Disk settings](../../../migrate/media/tutorial-migrate-physical-virtual-machines/disks.png)
+
+1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
+
+> [!NOTE]
+> You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+
+## Track and monitor
+
+Replication proceeds in the following sequence:
+
+- When you select **Replicate**, a _Start Replication_ job begins.
+- When the _Start Replication_ job finishes successfully, the machines begin their initial replication to Azure.
+- After initial replication finishes, delta replication begins. Incremental changes to on-premises disks are periodically replicated to the replica disks in Azure.
+- After the initial replication is completed, configure the Compute and Network items for each VM. Clusters typically have multiple NICs, but only one NIC is required for the migration (set the others to **Do not create**).
+
+You can track job status in the portal notifications.
+
+You can monitor replication status by selecting **Replicating servers** in **Azure Migrate: Server Migration**.
+![Monitor replication](../../../migrate/media/tutorial-migrate-physical-virtual-machines/replicating-servers.png)
++
+## Migrate VMs
+
+After machines are replicated, they are ready for migration. To migrate your servers, follow these steps:
+
+1. In the Azure Migrate project > **Servers** > **Azure Migrate: Server Migration**, select **Replicating servers**.
+
+ ![Replicating servers](../../../migrate/media/tutorial-migrate-physical-virtual-machines/replicate-servers.png)
+
+1. To ensure that the migrated server is synchronized with the source server, stop the SQL Server resource (in **Failover Cluster Manager** > **Roles** > **Other resources**) while ensuring that the cluster disks are online.
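+
+    For example, with the FailoverClusters module on a cluster node (a sketch; the SQL Server resource name is hypothetical and varies by instance):
+
+    ```powershell
+    # Stop only the SQL Server cluster resource; the cluster disk resources stay online.
+    Stop-ClusterResource -Name 'SQL Server'
+
+    # Confirm that the disk resources are still online.
+    Get-ClusterResource |
+        Where-Object { $_.ResourceType -eq 'Physical Disk' } |
+        Select-Object Name, State
+    ```
+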
+1. In **Replicating machines** > select the server name > **Overview**, ensure that the last synchronized timestamp is after you stopped the SQL Server resource on the servers to be migrated, before you move on to the next step. This should take only a few minutes.
+1. In **Replicating machines**, right-click the VM > **Migrate**.
+1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **No** > **OK**.
+
+ > [!NOTE]
+    > For physical server migration, automatic shutdown of the source machine isn't supported. The recommendation is to bring the application down as part of the migration window (don't let the applications accept any connections), and then initiate the migration. The server needs to be kept running so that remaining data changes can be synchronized before the migration is completed.
+
+1. A migration job starts for the VM. Track the job in Azure notifications.
+1. After the job finishes, you can view and manage the VM from the **Virtual Machines** page.
+
+## Reconfigure cluster
+
+After your VMs have migrated, reconfigure the cluster. Follow these steps:
+
+1. Shut down the migrated servers in Azure.
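+
+    For example, with the Az PowerShell module (a sketch; the resource group and VM names are placeholders):
+
+    ```powershell
+    # Stop (deallocate) the migrated cluster nodes. -Force suppresses the confirmation prompt.
+    foreach ($vmName in @('sqlfci-node-1', 'sqlfci-node-2')) {
+        Stop-AzVM -ResourceGroupName $resourcegroupname -Name $vmName -Force
+    }
+    ```
+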
+1. Add the migrated machines to the backend pool of the load balancer. Navigate to **Load Balancer** > **Backend pools** > select backend pool > **add migrated machines**.
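+
+    If you'd rather script this step, a minimal sketch using the Az PowerShell module (the load balancer, resource group, and VM names are the same placeholders used earlier; each VM's first NIC is assumed to be the one that should join the pool):
+
+    ```powershell
+    # Add the primary NIC of each migrated node to the load balancer backend pool.
+    $lb = Get-AzLoadBalancer -ResourceGroupName $resourcegroupname -Name $loadbalancername
+
+    foreach ($vmName in @('sqlfci-node-1', 'sqlfci-node-2')) {
+        $vm      = Get-AzVM -ResourceGroupName $resourcegroupname -Name $vmName
+        $nicName = ($vm.NetworkProfile.NetworkInterfaces[0].Id -split '/')[-1]
+        $nic     = Get-AzNetworkInterface -ResourceGroupName $resourcegroupname -Name $nicName
+
+        # Associate the NIC's first IP configuration with the backend pool, then save the change.
+        $nic.IpConfigurations[0].LoadBalancerBackendAddressPools.Add($lb.BackendAddressPools[0])
+        Set-AzNetworkInterface -NetworkInterface $nic
+    }
+    ```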
+
+1. Reconfigure the migrated disks of the servers as shared disks by running the `Create-SharedDisks.ps1` script. The script is interactive and will prompt for a list of machines and then show available disks to be extracted (only data disks). You will be prompted once to select which machines contain the drives to be turned into shared disks. Once selected, you will be prompted again, once per machine, to pick the specific disks.
+
+    **Parameter** | **Type** | **Description**
+    --- | --- | ---
+    ResourceGroupName | Mandatory | Specify the name of the resource group containing the migrated servers.
+    NumberofNodes | Optional | Specify the number of nodes in your failover cluster instance. This parameter is used to identify the right SKU for the shared disks to be created. By default, the script assumes the number of nodes in the cluster to be 2.
+    DiskNamePrefix | Optional | Specify the prefix that you'd want to add to the names of your shared disks.
+
+ ```powershell
+    ./Create-SharedDisks.ps1 -ResourceGroupName $resourcegroupname -NumberofNodes $nodesincluster -DiskNamePrefix $disknameprefix
+ ```
+
+1. Attach the shared disks to the migrated servers by running the `Attach-SharedDisks.ps1` script.
+
+    **Parameter** | **Type** | **Description**
+    --- | --- | ---
+    ResourceGroupName | Mandatory | Specify the name of the resource group containing the migrated servers.
+    StartingLunNumber | Optional | Specify the starting LUN number that is available for the shared disks to be attached to. By default, the script tries to attach shared disks starting at LUN 0.
+
+ ```powershell
+    ./Attach-SharedDisks.ps1 -ResourceGroupName $resourcegroupname
+ ```
+
+1. Start the migrated servers in Azure and log in to any node.
+
+1. Copy the `Cluster-Config.csv` file and run the `Update-ClusterConfig.ps1` script, passing the CSV as a parameter. This updates the cluster resources with the new configuration so that the cluster works in Azure.
+
+ ```powershell
+ ./Update-ClusterConfig.ps1 -ConfigFilePath $filepath
+ ```
+
+Your SQL Server failover cluster instance is ready.
+
+## Complete the migration
+
+1. After the migration is done, right-click the VM > **Stop migration**. This does the following:
+ - Stops replication for the on-premises machine.
+ - Removes the machine from the **Replicating servers** count in Azure Migrate: Server Migration.
+ - Cleans up replication state information for the machine.
+1. Install the Azure VM [Windows](/virtual-machines/extensions/agent-windows.md) agent on the migrated machines.
+1. Perform any post-migration app tweaks, such as updating database connection strings, and web server configurations.
+1. Perform final application and migration acceptance testing on the migrated application now running in Azure.
+1. Cut over traffic to the migrated Azure VM instance.
+1. Remove the on-premises VMs from your local VM inventory.
+1. Remove the on-premises VMs from local backups.
+1. Update any internal documentation to show the new location and IP address of the Azure VMs.
+
+## Post-migration best practices
+
+- For SQL Server:
+ - Install [SQL Server IaaS Agent extension](../../virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) to automate management and administration tasks.
+ - [Optimize](../../virtual-machines/windows/performance-guidelines-best-practices-checklist.md) SQL Server performance on Azure VMs.
+ - Understand [pricing](../../virtual-machines/windows/pricing-guidance.md#free-licensed-sql-server-editions) for SQL Server on Azure.
+- For increased resilience:
+ - Keep data secure by backing up Azure VMs using the [Azure Backup service](../../../backup/quick-backup-vm-portal.md).
+ - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with [Site Recovery](../../../site-recovery/azure-to-azure-tutorial-enable-replication.md).
+- For increased security:
+ - Lock down and limit inbound traffic access with [Azure Security Center - Just in time administration](../../../security-center/security-center-just-in-time.md).
+ - Restrict network traffic to management endpoints with [Network Security Groups](../../../virtual-network/network-security-groups-overview.md).
+ - Deploy [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Azure Security Center](https://azure.microsoft.com/services/security-center/).
+- For monitoring and management:
+ - Consider deploying [Azure Cost Management](../../../cost-management-billing/cloudyn/overview.md) to monitor resource usage and spending.
++
+## Next steps
+
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
azure-sql Sql Server To Sql On Azure Vm Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview.md
The following table details the available method for the **lift and shift** migr
| | | | | | | [Azure Migrate](../../../migrate/index.yml) | SQL Server 2008 SP4| SQL Server 2008 SP4| [Azure VM storage limit](../../../index.yml) | Existing SQL Server to be moved as-is to instance of SQL Server on an Azure VM. Can scale migration workloads of up to 35,000 VMs. <br /><br /> Source server(s) remain online and servicing requests during synchronization of server data, minimizing downtime. <br /><br /> **Automation & scripting**: [Azure Site Recovery Scripts](../../../migrate/how-to-migrate-at-scale.md) and [Example of scaled migration and planning for Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/contoso-migration-scale)|
+> [!NOTE]
+> It's now possible to lift and shift both your [failover cluster instance](sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) and [availability group](sql-server-availability-group-to-sql-on-azure-vm.md) solution to SQL Server on Azure VMs using Azure Migrate.
+ ## Migrate Due to the ease of setup, the recommended migration approach is to take a native SQL Server [backup](/sql/t-sql/statements/backup-transact-sql) locally and then copy the file to Azure. This method supports larger databases (>1 TB) for all versions of SQL Server starting from 2008 and larger database backups (>1 TB). However, for databases starting in SQL Server 2014, that are smaller than 1 TB, and that have good connectivity to Azure, then [SQL Server backup to URL](/sql/relational-databases/backup-restore/sql-server-backup-to-url) is the better approach.
The following table details all available methods to migrate your SQL Server dat
&nbsp; > [!TIP]
-> For large data transfers with limited to no network options, see [Large data transfers with limited connectivity](../../../storage/common/storage-solution-large-dataset-low-network.md).
->
+> - For large data transfers with limited to no network options, see [Large data transfers with limited connectivity](../../../storage/common/storage-solution-large-dataset-low-network.md).
+> - It's now possible to lift and shift both your [failover cluster instance](sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) and [availability group](sql-server-availability-group-to-sql-on-azure-vm.md) solution to SQL Server on Azure VMs using Azure Migrate.
### Considerations
azure-sql Availability Group Az Commandline Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-az-commandline-configure.md
Deployment of the availability group is still done manually through SQL Server M
While this article uses PowerShell and the Az CLI to configure the availability group environment, it is also possible to do so from the [Azure portal](availability-group-azure-portal-configure.md), using [Azure Quickstart templates](availability-group-quickstart-template-configure.md), or [Manually](availability-group-manually-configure-tutorial.md) as well.
+> [!NOTE]
+> It's now possible to lift and shift your availability group solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) to learn more.
+ ## Prerequisites To configure an Always On availability group, you must have the following prerequisites:
azure-sql Availability Group Azure Portal Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-azure-portal-configure.md
This feature is currently in preview.
While this article uses the Azure portal to configure the availability group environment, it is also possible to do so using [PowerShell or the Azure CLI](availability-group-az-commandline-configure.md), [Azure Quickstart templates](availability-group-quickstart-template-configure.md), or [Manually](availability-group-manually-configure-tutorial.md) as well.
+> [!NOTE]
+> It's now possible to lift and shift your availability group solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) to learn more.
+ ## Prerequisites
azure-sql Availability Group Manually Configure Prerequisites Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-manually-configure-prerequisites-tutorial.md
The following diagram illustrates what you build in the tutorial.
![Availability group](./media/availability-group-manually-configure-prerequisites-tutorial-/00-EndstateSampleNoELB.png)
+>[!NOTE]
+> It's now possible to lift and shift your availability group solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) to learn more.
+ ## Review availability group documentation This tutorial assumes that you have a basic understanding of SQL Server Always On availability groups. If you're not familiar with this technology, see [Overview of Always On availability groups (SQL Server)](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server).
azure-sql Availability Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-overview.md
The following diagram illustrates an availability group for SQL Server on Azure
![Availability Group](./media/availability-group-overview/00-EndstateSampleNoELB.png)
+> [!NOTE]
+> It's now possible to lift and shift your availability group solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) to learn more.
## VM redundancy
azure-sql Availability Group Quickstart Template Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/availability-group-quickstart-template-configure.md
This article describes how to use the Azure quickstart templates to partially au
Other parts of the availability group configuration must be done manually, such as creating the availability group and creating the internal load balancer. This article provides the sequence of automated and manual steps. While this article uses the Azure Quickstart templates to configure the availability group environment, it is also possible to do so using the [Azure portal](availability-group-azure-portal-configure.md), [PowerShell or the Azure CLI](availability-group-az-commandline-configure.md), or [Manually](availability-group-manually-configure-tutorial.md) as well. +
+> [!NOTE]
+> It's now possible to lift and shift your availability group solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) to learn more.
## Prerequisites
azure-sql Business Continuity High Availability Disaster Recovery Hadr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/business-continuity-high-availability-disaster-recovery-hadr-overview.md
It's possible for the SQL Server instance to fail while the VM is online and hea
Geo-redundant storage (GRS) in Azure is implemented with a feature called geo-replication. GRS might not be an adequate disaster recovery solution for your databases. Because geo-replication sends data asynchronously, recent updates can be lost in a disaster. More information about geo-replication limitations is covered in the [Geo-replication support](#geo-replication-support) section.
+> [!NOTE]
+> It's now possible to lift and shift both your [failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) and [availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) solution to SQL Server on Azure VMs using Azure Migrate.
++ ## Deployment architectures Azure supports these SQL Server technologies for business continuity:
azure-sql Create Sql Vm Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/create-sql-vm-resource-manager-template.md
Use this Azure Resource Manager template (ARM template) to deploy a SQL Server o
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-sql-vm-new-storage%2fazuredeploy.json)
+[![Deploy to Azure](../../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sqlvirtualmachine%2Fsql-vm-new-storage%2Fazuredeploy.json)
## Prerequisites
The SQL Server VM ARM template requires the following:
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-sql-vm-new-storage/). Five Azure resources are defined in the template:
More SQL Server on Azure VM templates can be found in the [quickstart template g
1. Select the following image to sign in to Azure and open a template. The template creates a virtual machine with the intended SQL Server version installed to it, and registered with the SQL IaaS Agent extension.
- [![Deploy to Azure](../../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f101-sql-vm-new-storage%2fazuredeploy.json)
+ [![Deploy to Azure](../../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sqlvirtualmachine%2Fsql-vm-new-storage%2Fazuredeploy.json)
2. Select or enter the following values.
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
vm-windows-sql-server Previously updated : 10/15/2020 Last updated : 04/25/2021 # Documentation changes for SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)] Azure allows you to deploy a virtual machine (VM) with an image of SQL Server built in. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/).
-## October 2020
-| Changes | Details |
-| | |
-| **DNN for AG** | You can now configure a [distributed network name (DNN) listener)](availability-group-distributed-network-name-dnn-listener-configure.md) for SQL Server 2019 CU8 and later to replace the traditional [VNN listener](availability-group-overview.md#connectivity), negating the need for an Azure Load Balancer. |
-
-## September 2020
+## April 2021
| Changes | Details | | | |
-| **Automatic extension registration** | You can now enable the [Automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature to automatically register all SQL Server VMs already deployed to your subscription with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md). This applies to all existing VMs, and will also automatically register all SQL Server VMs added in the future. |
+| **Migrate high availability to VM** | Azure Migrate brings support to lift and shift your entire high availability solution to SQL Server on Azure VMs. Bring your [availability group](../../migration-guides/virtual-machines/sql-server-availability-group-to-sql-on-azure-vm.md) or your [failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to SQL Server VMs using Azure Migrate today! |
-## August 2020
+## March 2021
| Changes | Details | | | |
-| **Configure ag in portal** | It is now possible to [configure your availability group via the Azure portal](availability-group-azure-portal-configure.md). This feature is currently in preview and being deployed so if your desired region is unavailable, check back soon. |
+| **Performance best practices refresh** | We've rewritten, refreshed, and updated the performance best practices documentation, splitting one article into a series that contain: [a checklist](performance-guidelines-best-practices-checklist.md), [VM size guidance](performance-guidelines-best-practices-vm-size.md), [Storage guidance](performance-guidelines-best-practices-storage.md), and [collecting baseline instructions](performance-guidelines-best-practices-collect-baseline.md). |
-## July 2020
+## 2020
| Changes | Details | | | |
-| **Migrate log to ultra disk** | Learn how you can [migrate your log file to an ultra disk](storage-migrate-to-ultradisk.md) to leverage high performance and low latency. |
-| **Create AG using Azure PowerShell** | It's now possible to simplify the creation of an availability group by using [Azure PowerShell](availability-group-az-commandline-configure.md) as well as the Azure CLI. |
--
-## June 2020
-
-| Changes | Details |
-| | |
+| **Azure Government support** | It's now possible to register SQL Server virtual machines with the SQL IaaS Agent extension for virtual machines hosted in the [Azure Government](https://azure.microsoft.com/global-infrastructure/government/) cloud. |
+| **Azure SQL family** | SQL Server on Azure Virtual Machines is now a part of the [Azure SQL family of products](../../azure-sql-iaas-vs-paas-what-is-overview.md). Check out our [new look](../index.yml)! Nothing has changed in the product, but the documentation aims to make the Azure SQL product decision easier. |
| **Distributed network name (DNN)** | SQL Server 2019 on Windows Server 2016+ is now previewing support for routing traffic to your failover cluster instance (FCI) by using a [distributed network name](./failover-cluster-instance-distributed-network-name-dnn-configure.md) rather than using Azure Load Balancer. This support simplifies and streamlines connecting to your high-availability (HA) solution in Azure. | | **FCI with Azure shared disks** | It's now possible to deploy your [failover cluster instance (FCI)](failover-cluster-instance-overview.md) by using [Azure shared disks](failover-cluster-instance-azure-shared-disks-manually-configure.md). | | **Reorganized FCI docs** | The documentation around [failover cluster instances with SQL Server on Azure VMs](failover-cluster-instance-overview.md) has been rewritten and reorganized for clarity. We've separated some of the configuration content, like the [cluster configuration best practices](hadr-cluster-best-practices.md), how to prepare a [virtual machine for a SQL Server FCI](failover-cluster-instance-prepare-vm.md), and how to configure [Azure Load Balancer](./availability-group-vnn-azure-load-balancer-configure.md). |
-| &nbsp; | &nbsp; |
--
-## May 2020
-
-| Changes | Details |
-| | |
-| **Azure SQL family** | SQL Server on Azure Virtual Machines is now a part of the [Azure SQL family of products](../../azure-sql-iaas-vs-paas-what-is-overview.md). Check out our [new look](../index.yml)! Nothing has changed in the product, but the documentation aims to make the Azure SQL product decision easier. |
--
-## January 2020
-
-| Changes | Details |
-| | |
-| **Azure Government support** | It's now possible to register SQL Server virtual machines with the SQL IaaS Agent extension for virtual machines hosted in the [Azure Government](https://azure.microsoft.com/global-infrastructure/government/) cloud. |
+| **Migrate log to ultra disk** | Learn how you can [migrate your log file to an ultra disk](storage-migrate-to-ultradisk.md) to leverage high performance and low latency. |
+| **Create AG using Azure PowerShell** | It's now possible to simplify the creation of an availability group by using [Azure PowerShell](availability-group-az-commandline-configure.md) as well as the Azure CLI. |
+| **Configure ag in portal** | It is now possible to [configure your availability group via the Azure portal](availability-group-azure-portal-configure.md). This feature is currently in preview and being deployed so if your desired region is unavailable, check back soon. |
+| **Automatic extension registration** | You can now enable the [Automatic registration](sql-agent-extension-automatic-registration-all-vms.md) feature to automatically register all SQL Server VMs already deployed to your subscription with the [SQL IaaS Agent extension](sql-server-iaas-agent-extension-automate-management.md). This applies to all existing VMs, and will also automatically register all SQL Server VMs added in the future. |
+| **DNN for AG** | You can now configure a [distributed network name (DNN) listener](availability-group-distributed-network-name-dnn-listener-configure.md) for SQL Server 2019 CU8 and later to replace the traditional [VNN listener](availability-group-overview.md#connectivity), negating the need for an Azure Load Balancer. |
| &nbsp; | &nbsp; | ## 2019
azure-sql Failover Cluster Instance Azure Shared Disks Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure.md
This article explains how to create a failover cluster instance (FCI) by using A
To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cluster-instance-overview.md) and [cluster best practices](hadr-cluster-best-practices.md).
+> [!NOTE]
+> It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to learn more.
+ ## Prerequisites Before you complete the instructions in this article, you should already have:
azure-sql Failover Cluster Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-overview.md
The rest of the article focuses on the differences for failover cluster instance
- [Windows cluster technologies](/windows-server/failover-clustering/failover-clustering-overview) - [SQL Server failover cluster instances](/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server)
+> [!NOTE]
+> It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to learn more.
+ ## Quorum Failover cluster instances with SQL Server on Azure Virtual Machines support using a disk witness, a cloud witness, or a file share witness for cluster quorum.
azure-sql Failover Cluster Instance Premium File Share Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-premium-file-share-manually-configure.md
Premium file shares are Storage Spaces Direct (SSD)-backed, consistently low-lat
To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cluster-instance-overview.md) and [cluster best practices](hadr-cluster-best-practices.md).
+> [!NOTE]
+> It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to learn more.
+ ## Prerequisites Before you complete the instructions in this article, you should already have:
azure-sql Failover Cluster Instance Prepare Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-prepare-vm.md
This article describes how to prepare Azure virtual machines (VMs) to use them w
To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cluster-instance-overview.md) and [cluster best practices](hadr-cluster-best-practices.md).
+> [!NOTE]
+> It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to learn more.
+ ## Prerequisites - A Microsoft Azure subscription. Get started for [free](https://azure.microsoft.com/free/).
azure-sql Failover Cluster Instance Storage Spaces Direct Manually Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure.md
This article explains how to create a failover cluster instance (FCI) by using [
To learn more, see an overview of [FCI with SQL Server on Azure VMs](failover-cluster-instance-overview.md) and [cluster best practices](hadr-cluster-best-practices.md).
+> [!NOTE]
+> It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See [Migrate failover cluster instance](../../migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md) to learn more.
+ ## Overview
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-configuration.md
If you use the following Resource Manager templates, two premium data disks are
You can use the following quickstart template to deploy a SQL Server VM using storage optimization.
-* [Create VM with storage optimization](https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-vm-new-storage/)
+* [Create VM with storage optimization](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-new-storage/)
* [Create VM using UltraSSD](https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-vm-new-storage-ultrassd) ## Existing VMs
azure-vmware Lifecycle Management Of Azure Vmware Solution Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/lifecycle-management-of-azure-vmware-solution-vms.md
Last updated 02/08/2021
# Lifecycle management of Azure VMware Solution VMs
-Microsoft Azure native tools allow you to monitor and manage your virtual machines (VMs) in the Azure environment. Yet they also allow you to monitor and manage your VMs on Azure VMware Solution and your on-premises VMs. In this overview, we'll look at the integrated monitoring architecture Azure offers, and how you can use its native tools to manage your Azure VMware Solution VMs throughout their lifecycle.
+Microsoft Azure native tools allow you to monitor and manage your virtual machines (VMs) in the Azure environment. Yet they also allow you to monitor and manage your VMs on Azure VMware Solution and your on-premises VMs. In this article, we'll look at the integrated monitoring architecture Azure offers, and how you can use its native tools to manage your Azure VMware Solution VMs throughout their lifecycle.
## Benefits
Microsoft Azure native tools allow you to monitor and manage your virtual machin
## Integrated Azure monitoring architecture
-The following diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
+The diagram shows the integrated monitoring architecture for Azure VMware Solution VMs.
![Integrated Azure monitoring architecture](media/lifecycle-management-azure-vmware-solutions-virtual-machines/integrated-azure-monitoring-architecture.png)
If you are new to Azure or unfamiliar with any of the services previously mentio
- [What is Azure Arc enabled servers?](../azure-arc/servers/overview.md) and [What is Azure Arc enabled Kubernetes?](../azure-arc/kubernetes/overview.md) - [Update Management overview](../automation/update-management/overview.md)
-## Integrating and deploying Azure native services
+## Integrate and deploy Azure native services
### Enable Azure Update Management
backup Backup Azure Backup Sharepoint Mabs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-sharepoint-mabs.md
To back up the SharePoint farm, configure protection for SharePoint by using Con
1. In **Select Group Members**, expand the server that holds the WFE role. If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
- When you expand the SharePoint server MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the SharePoint server and any remote SQL Server, and ensure the MABS agent is installed on both the SharePoint server and remote SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
+ When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
1. In **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term back up is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
backup Backup Azure Microsoft Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-microsoft-azure-backup.md
You can get detailed information here about [preparing your environment for DPM]
You can use these articles to gain a deeper understanding of workload protection using Microsoft Azure Backup server. * [SQL Server backup](backup-azure-backup-sql.md)
-* [SharePoint server backup](backup-azure-backup-sharepoint.md)
+* [SharePoint Server backup](backup-azure-backup-sharepoint.md)
* [Alternate server backup](backup-azure-alternate-dpm-server.md)
backup Backup Mabs Install Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-install-azure-stack.md
The article, [Preparing your environment for DPM](/system-center/dpm/prepare-env
You can use the following articles to gain a deeper understanding of workload protection using Microsoft Azure Backup Server. - [SQL Server backup](./backup-mabs-sql-azure-stack.md)-- [SharePoint server backup](./backup-mabs-sharepoint-azure-stack.md)
+- [SharePoint Server backup](./backup-mabs-sharepoint-azure-stack.md)
- [Alternate server backup](backup-azure-alternate-dpm-server.md)
backup Backup Mabs Sharepoint Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-mabs-sharepoint-azure-stack.md
To back up the SharePoint farm, configure protection for SharePoint by using Con
1. In **Select Group Members**, expand the server that holds the WFE role. If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
- When you expand the SharePoint server MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the SharePoint server and any remote SQL Server, and ensure the MABS agent is installed on both the SharePoint server and remote SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
+ When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
1. In **Select data protection method**, specify how you want to handle short and long\-term backup. Short\-term back up is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup \(for short or long\-term\).
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 3/28/2021 Last updated : 4/30/2021 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **April 30, 2021**
+The April Guest OS has released.
+ ###### **March 28, 2021** The March Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.30_202104-01 | April 30, 2021 | Post 6.32 |
| WA-GUEST-OS-6.29_202103-01 | March 28, 2021 | Post 6.31 |
-| WA-GUEST-OS-6.28_202102-01 | February 19, 2021 | Post 6.30 |
+|~~WA-GUEST-OS-6.28_202102-01~~| February 19, 2021 | April 30, 2021 |
|~~WA-GUEST-OS-6.27_202101-01~~| February 5, 2021 | March 28, 2021 | |~~WA-GUEST-OS-6.26_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-6.25_202011-01~~| December 19, 2020 | February 5, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.54_202104-01 | April 30, 2021 | Post 5.56 |
| WA-GUEST-OS-5.53_202103-01 | March 28, 2021 | Post 5.55 |
-| WA-GUEST-OS-5.52_202102-01 | February 19, 2021 | Post 5.54 |
+|~~WA-GUEST-OS-5.52_202102-01~~| February 19, 2021 | April 30, 2021 |
|~~WA-GUEST-OS-5.51_202101-01~~| February 5, 2021 | March 28, 2021 | |~~WA-GUEST-OS-5.50_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-5.49_202011-01~~| December 19, 2020 | February 5, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.89_202104-01 | April 30, 2021 | Post 4.91 |
| WA-GUEST-OS-4.88_202103-01 | March 28, 2021 | Post 4.90 |
-| WA-GUEST-OS-4.87_202102-01 | February 19, 2021 | Post 4.89 |
+|~~WA-GUEST-OS-4.87_202102-01~~| February 19, 2021 | April 30, 2021 |
|~~WA-GUEST-OS-4.86_202101-01~~| February 5, 2021 | March 28, 2021 | |~~WA-GUEST-OS-4.85_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-4.84_202011-01~~| December 19, 2020 | February 5, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.96_202104-01 | April 30, 2021 | Post 3.98 |
| WA-GUEST-OS-3.95_202103-01 | March 28, 2021 | Post 3.97 |
-| WA-GUEST-OS-3.94_202102-01 | February 19, 2021 | Post 3.96 |
+|~~WA-GUEST-OS-3.94_202102-01~~| February 19, 2021 | April 30, 2021 |
|~~WA-GUEST-OS-3.93_202101-01~~| February 5, 2021 | March 28, 2021 | |~~WA-GUEST-OS-3.92_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-3.91_202011-01~~| December 19, 2020 | February 5, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-2.109_202104-01 | April 30, 2021 | Post 2.111 |
| WA-GUEST-OS-2.108_202103-01 | March 28, 2021 | Post 2.110 |
-| WA-GUEST-OS-2.107_202102-01 | February 19, 2021 | Post 2.109 |
+|~~WA-GUEST-OS-2.107_202102-01~~| February 19, 2021 | April 30, 2021 |
|~~WA-GUEST-OS-2.106_202101-01~~| February 5, 2021 | March 28, 2021 | |~~WA-GUEST-OS-2.105_202012-01~~| January 15, 2021 | February 19, 2021 | |~~WA-GUEST-OS-2.104_202011-01~~| December 19, 2020 | February 5, 2021 |
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
In this article, you learned concepts and workflow for downloading, installing,
* Review [Configure containers](computer-vision-resource-container-config.md) for configuration settings * Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text
-* Refer to the [Read API](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2-ga/operations/56f91f2e778daf14a499f21b) for details about the methods supported by the container.
+* Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container.
* Refer to [Frequently asked questions (FAQ)](FAQ.md) to resolve issues related to Computer Vision functionality. * Use more [Cognitive Services Containers](../cognitive-services-container-support.md)
cognitive-services Luis Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/LUIS/luis-concept-enterprise.md
A dispatch application has 500 dispatch sources, equivalent to 500 intents, as t
* [Bot framework SDK](https://github.com/Microsoft/botframework) * [Dispatch model tutorial](/azure/bot-service/bot-builder-tutorial-dispatch?tabs=cs) * [Dispatch CLI](https://github.com/Microsoft/botbuilder-tools)
-* Dispatch model bot sample - [.NET](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/14.nlp-with-dispatch), [Node.js](https://github.com/microsoft/BotBuilder-Samples/tree/master/samples/javascript_nodejs/14.nlp-with-dispatch)
+* Dispatch model bot sample - [.NET](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/14.nlp-with-orchestrator), [Node.js](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/14.nlp-with-orchestrator)
## Next steps * Learn how to [test a batch](luis-how-to-batch-test.md) [dispatcher-application-tutorial]: /azure/bot-service/bot-builder-tutorial-dispatch
-[dispatch-tool]: https://aka.ms/dispatch-tool
+[dispatch-tool]: https://aka.ms/dispatch-tool
cognitive-services Data Sources And Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/Concepts/data-sources-and-content.md
The table below summarizes the types of content and file formats that are suppor
If you need authentication for your data source, consider the following methods to get that content into QnA Maker: * Download the file manually and import into QnA Maker
-* Add the file from authenticated [Sharepoint location](../how-to/add-sharepoint-datasources.md)
+* Add the file from authenticated [SharePoint location](../how-to/add-sharepoint-datasources.md)
### URL content
cognitive-services Add Sharepoint Datasources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/add-sharepoint-datasources.md
When you test the QnA pair in the interactive test panel, in the QnA Maker porta
## Permissions
-Granting permissions happens when a secured file from a SharePoint server is added to a knowledge base. Depending on how the SharePoint is set up and the permissions of the person adding the file, this could require:
+Granting permissions happens when a secured file from a server running SharePoint is added to a knowledge base. Depending on how the SharePoint is set up and the permissions of the person adding the file, this could require:
* no additional steps - the person adding the file has all the permissions needed. * steps by both [knowledge base manager](#knowledge-base-manager-add-sharepoint-data-source-in-qna-maker-portal) and [Active Directory manager](#active-directory-manager-grant-file-read-access-to-qna-maker).
cognitive-services Configure Qna Maker Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/QnAMaker/How-To/configure-qna-maker-resources.md
The user can configure QnA Maker to use a different Cognitive search resource. T
# [QnA Maker GA (stable release)](#tab/v1)
-### Configure QnA Maker to use different Cognitive Search resource
+## Configure QnA Maker to use different Cognitive Search resource
If you create a QnA service and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker service. After these resources are created, you can update the App Service setting to use a previously existing Search service and remove the one you just created.
If you create a QnA service through Azure Resource Manager templates, you can cr
Learn more about how to configure the App Service [Application settings](../../../app-service/configure-common.md#configure-app-settings).
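If you prefer to script the App Service setting change instead of using the portal, the following PowerShell sketch shows the general pattern. The resource names and the `AzureSearchName`/`AzureSearchAdminKey` setting names are illustrative assumptions; confirm the actual setting names on your own QnA Maker App Service before changing them.

```azurepowershell
# Minimal sketch: point an existing QnA Maker App Service at a different Search resource
# by updating its application settings, then restart the app so the change takes effect.
# Resource names and setting names below are placeholders.
$rg  = "my-qna-rg"
$app = "my-qna-appservice"

$webApp = Get-AzWebApp -ResourceGroupName $rg -Name $app

# Copy the existing settings into a hashtable, because Set-AzWebApp replaces the whole set.
$settings = @{}
foreach ($s in $webApp.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

$settings["AzureSearchName"]     = "my-other-search-service"   # assumed setting name
$settings["AzureSearchAdminKey"] = "<search-admin-key>"        # assumed setting name

Set-AzWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings
Restart-AzWebApp -ResourceGroupName $rg -Name $app
```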
-### Get the latest runtime updates
+## Get the latest runtime updates
The QnAMaker runtime is part of the Azure App Service instance that's deployed when you [create a QnAMaker service](./set-up-qnamaker-service-azure.md) in the Azure portal. Updates are made periodically to the runtime. The QnA Maker App Service instance is in auto-update mode after the April 2019 site extension release (version 5+). This update is designed to ensure zero downtime during upgrades.
You can check your current version at https://www.qnamaker.ai/UserSettings. If y
![Restart of the QnAMaker App Service instance](../media/qnamaker-how-to-upgrade-qnamaker/qnamaker-appservice-restart.png)
-### Configure App service idle setting to avoid timeout
+## Configure App service idle setting to avoid timeout
The app service, which serves the QnA Maker prediction runtime for a published knowledge base, has an idle timeout configuration that, by default, causes the service to time out after a period of inactivity. For QnA Maker, this means your prediction runtime generateAnswer API occasionally times out after periods of no traffic.
In order to keep the prediction endpoint app loaded even when there is no traffi
Learn more about how to configure the App Service [General settings](../../../app-service/configure-common.md#configure-general-settings).
-### Business continuity with traffic manager
+## Business continuity with traffic manager
The primary objective of the business continuity plan is to create a resilient knowledge base endpoint that ensures no downtime for the bot or the application consuming it.
The high-level idea as represented above is as follows:
# [QnA Maker managed (preview release)](#tab/v2)
-### Configure QnA Maker managed (Preview) service to use different Cognitive Search resource
+## Configure QnA Maker managed (Preview) service to use different Cognitive Search resource
If you create a QnA service managed (Preview) and its dependencies (such as Search) through the portal, a Search service is created for you and linked to the QnA Maker managed (Preview) service. After these resources are created, you can update the Search service in the **Configuration** tab.
cognitive-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-development-options.md
If you want to learn more about Big Data for Cognitive Services, a good place to
Power Automate is a service in the [Power Platform](/power-platform/) that helps you create automated workflows between apps and services without writing code. We offer several connectors to make it easy to interact with your Cognitive Services resource in a Power Automate solution. Power Automate is built on top of Logic Apps.
-* **Target user(s)**: Business users (analysts) and Sharepoint administrators
+* **Target user(s)**: Business users (analysts) and SharePoint administrators
* **Benefits**: Automate repetitive manual tasks simply by recording mouse clicks, keystrokes and copy paste steps from your desktop! * **UI tools**: Yes - UI only * **Subscription(s)**: Azure account + Cognitive Services resource + Power Automate Subscription + Office 365 Subscription
Power Automate is a service in the [Power Platform](/power-platform/) that helps
[AI Builder](/ai-builder/overview) is a Microsoft Power Platform capability you can use to improve business performance by automating processes and predicting outcomes. AI builder brings the power of AI to your solutions through a point-and-click experience. Many cognitive services such as Form Recognizer, Text Analytics, and Computer Vision have been directly integrated here and you don't need to create your own Cognitive Services.
-* **Target user(s)**: Business users (analysts) and Sharepoint administrators
+* **Target user(s)**: Business users (analysts) and SharePoint administrators
* **Benefits**: A turnkey solution that brings the power of AI through a point-and-click experience. No coding or data science skills required. * **UI tools**: Yes - UI only * **Subscription(s)**: AI Builder
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/immersive-reader/language-support.md
This article lists supported human languages for Immersive Reader features.
| Danish | da | | Danish (Denmark) | da-DK | | Dutch | nl |
+| Dutch (Belgium) | nl-BE |
| Dutch (Netherlands) | nl-NL | | English | en | | English (Australia) | en-AU |
This article lists supported human languages for Immersive Reader features.
| English (India) | en-IN | | English (Ireland) | en-IE | | English (New Zealand) | en-NZ |
+| English (Philippines) | en-PH |
| English (United Kingdom) | en-GB | | English (United States) | en-US |
+| Estonian | et-EE |
| Finnish | fi | | Finnish (Finland) | fi-FI | | French | fr |
+| French (Belgium) | fr-BE |
| French (Canada) | fr-CA | | French (France) | fr-FR | | French (Switzerland) | fr-CH |
This article lists supported human languages for Immersive Reader features.
| Hungarian (Hungary) | hu-HU | | Indonesian | id | | Indonesian (Indonesia) | id-ID |
+| Irish | ga-IE |
| Italian | it | | Italian (Italy) | it-IT | | Japanese | ja | | Japanese (Japan) | ja-JP | | Korean | ko | | Korean (Korea) | ko-KR |
+| Latvian | lv-LV |
+| Lithuanian | lt-LT |
| Malay | ms | | Malay (Malaysia) | ms-MY |
+| Maltese | mt-MT |
| Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO | | Polish | pl |
This article lists supported human languages for Immersive Reader features.
| Thai (Thailand) | th-TH | | Turkish | tr | | Turkish (Turkey) | tr-TR |
+| Ukrainian | uk-UA |
| Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy-GB |
## Translation | Language | Tag | |-|--| | Afrikaans | af |
+| Albanian | sq |
+| Amharic | am |
| Arabic | ar | | Arabic (Egyptian) | ar-EG | | Arabic (Saudi Arabia) | ar-SA |
+| Armenian | hy |
+| Azerbaijani | az |
| Bangla | bn | | Bosnian | bs | | Bulgarian | bg | | Bulgarian (Bulgaria) | bg-BG |
+| Burmese | my |
| Catalan | ca | | Catalan (Catalan) | ca-ES | | Chinese | zh |
This article lists supported human languages for Immersive Reader features.
| Japanese | ja | | Japanese (Japan) | ja-JP | | Kannada | kn |
+| Kazakh | kk |
+| Khmer | km |
| Kiswahili | sw | | Korean | ko | | Korean (Korea) | ko-KR | | Kurdish (Central) | ku | | Kurdish (Northern) | kmr |
+| Lao | lo |
| Latvian | lv | | Lithuanian | lt | | Malagasy | mg |
This article lists supported human languages for Immersive Reader features.
| Maltese | mt | | Maori | mi | | Marathi | mr |
+| Nepali | ne |
| Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO |
-| Oriya | or |
+| Odia | or |
| Pashto (Afghanistan) | ps | | Persian | fa | | Polish | pl |
This article lists supported human languages for Immersive Reader features.
| Telugu (India) | te-IN | | Thai | th | | Thai (Thailand) | th-TH |
+| Tigrinya | ti |
| Tongan | to | | Turkish | tr | | Turkish (Turkey) | tr-TR |
This article lists supported human languages for Immersive Reader features.
| Spanish (Spain) | es-ES | | Swedish | sv | | Swedish (Sweden) | sv-SE |-
-## Dictionary
-
-| Language | Tag |
-|-|--|
-| English | en |
-| English (Australia) | en-AU |
-| English (Canada) | en-CA |
-| English (Hong Kong SAR) | en-HK |
-| English (India) | en-IN |
-| English (Ireland) | en-IE |
-| English (New Zealand) | en-NZ |
-| English (United Kingdom) | en-GB |
-| English (United States) | en-US |
-| French | fr |
-| French (Canada) | fr-CA |
-| French (France) | fr-FR |
-| French (Switzerland) | fr-CH |
-| German | de |
-| German (Austria) | de-AT |
-| German (Germany) | de-DE |
-| German (Switzerland)| de-CH |
-| Italian | it |
-| Italian (Italy) | it-IT |
-| Spanish | es |
-| Spanish (Latin America) | es-419 |
-| Spanish (Mexico) | es-MX |
-| Spanish (Spain) | es-ES |
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for det
Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit. - The call lasts a total of 30 minutes.-- Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
+- When Bob joins the meeting, he's placed in the Teams meeting lobby per Teams policy. After one minute, Alice admits him into the meeting.
+- After Bob is admitted to the meeting, Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
- Alice sends five messages, Bob replies with three messages. **Cost calculations** -- 1 participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]
+- 1 participant (Bob) connected to the Teams lobby x 1 minute x $0.004 per participant per minute (the lobby is charged at the regular meeting rate) = $0.004
+- 1 participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]
- 1 participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*. - 1 participant (Bob) x 3 chat messages x $0.0008 = $0.0024. - 1 participant (Alice) x 5 chat messages x $0.000 = $0.0*.
Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit
*Alice's participation is covered by her Teams license. Your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services Users for your convenience, but those minutes and messages originating from the Teams client will not incur a charge. **Total cost for the visit**:-- User joining using the Communication Services JavaScript SDK: $0.12 + $0.0024 = $0.1224
+- User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224
- User joining on Teams Desktop Application: $0 (covered by Teams license)
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-create-api-azure-event-hubs.md
Title: Connect to Azure Event Hubs
-description: Create automated tasks and workflows that monitor and manage events by using Azure Event Hubs and Azure Logic Apps
+description: Connect to your event hub, and add a trigger or an action to your workflow in Azure Logic Apps.
ms.suite: integration Previously updated : 04/23/2019 Last updated : 05/03/2021 tags: connectors
-# Monitor, receive, and send events with Azure Event Hubs and Azure Logic Apps
+# Connect to an event hub from workflows in Azure Logic Apps
-This article shows how you can monitor and manage events sent to
-[Azure Event Hubs](../event-hubs/event-hubs-about.md)
-from inside a logic app with the Azure Event Hubs connector.
-That way, you can create logic apps that automate tasks and workflows
-for checking, sending, and receiving events from your Event Hub.
-For connector-specific technical information, see the
-[Azure Event Hubs connector reference](/connectors/eventhubs/)</a>.
+The Azure Event Hubs connector helps you connect your logic app workflows to event hubs in Azure. You can then have your workflows monitor and manage events that are sent to an event hub. For example, your workflow can check, send, and receive events from your event hub. This article provides a get-started guide to using the Azure Event Hubs connector by showing how to connect to an event hub and add an Event Hubs trigger or action to your workflow.
+
+For more information about Azure Event Hubs or Azure Logic Apps, review [What is Azure Event Hubs](../event-hubs/event-hubs-about.md) or [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md).
+
+## Available operations
+
+For all the operations and other technical information, such as properties, limits, and so on, review the [Event Hubs connector's reference page](/connectors/eventhubs/).
+
+> [!NOTE]
+> For logic apps hosted in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+> the connector's ISE version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
## Prerequisites
-* An Azure account and subscription. If you don't have an Azure subscription,
-[sign up for a free Azure account](https://azure.microsoft.com/free/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+
+* An [Event Hubs namespace and event hub](../event-hubs/event-hubs-create.md)
-* An [Azure Event Hubs namespace and Event Hub](../event-hubs/event-hubs-create.md)
+* The logic app workflow where you want to access your event hub
-* The logic app where you want to access your Event Hub.
-To start your logic app with an Azure Event Hubs trigger, you need a
-[blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-If you're new to logic apps, review
-[What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
-and [Quickstart: Create your first logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
+ To start a workflow with an Event Hubs trigger, you need an empty workflow. If you're new to [Azure Logic Apps](../logic-apps/logic-apps-overview.md), try this [quickstart to create an example logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md).
<a name="permissions-connection-string"></a> ## Check permissions and get connection string
-To make sure that your logic app can access your Event Hub,
-check your permissions and get the connection
-string for your Event Hubs namespace.
+To make sure that your workflow can access your event hub, check your permissions, and then get the connection string for your event hub's namespace.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. In the [Azure portal](https://portal.azure.com), go to your Event Hubs *namespace*, not a specific event hub.
-1. Go to your Event Hubs *namespace*, not a specific Event Hub.
+1. On the namespace menu, under **Settings**, select **Shared access policies**. In the **Claims** column, check that you have at least **Manage** permissions for that namespace.
-1. On the namespace menu, under **Settings**, select **Shared access policies**.
-Under **Claims**, check that you have **Manage** permissions for that namespace.
+ ![Screenshot showing the Azure portal, your Event Hubs namespace, and "Manage" permissions appearing in the "Claims" column.](./media/connectors-create-api-azure-event-hubs/event-hubs-namespace.png)
- ![Manage permissions for your Event Hub namespace](./media/connectors-create-api-azure-event-hubs/event-hubs-namespace.png)
+1. If you want to later manually enter your connection information, get the connection string for your event hub namespace.
-1. If you want to later manually enter your connection information,
-get the connection string for your Event Hubs namespace.
+ 1. In the **Policy** column, select **RootManageSharedAccessKey**.
- 1. Under **Policy**, choose **RootManageSharedAccessKey**.
+ 1. Find your primary key's connection string. Copy and save the connection string for later use.
- 1. Find your primary key's connection string. Choose the copy button,
- and save the connection string for later use.
-
- ![Copy Event Hubs namespace connection string](media/connectors-create-api-azure-event-hubs/find-event-hub-namespace-connection-string.png)
+ ![Screenshot showing the primary key's connection string with the copy button selected.](media/connectors-create-api-azure-event-hubs/find-event-hub-namespace-connection-string.png)
> [!TIP]
- > To confirm whether your connection string is associated with
- > your Event Hubs namespace or with a specific event hub,
- > make sure the connection string doesn't have the `EntityPath` parameter.
- > If you find this parameter, the connection string is for a specific
- > Event Hub "entity" and is not the correct string to use with your logic app.
+ > To confirm whether your connection string is associated with your Event Hubs namespace or with
+ > a specific event hub, make sure the connection string doesn't have the `EntityPath` parameter.
+ > If you find this parameter, the connection string is for a specific Event Hubs "entity" and is
+ > not the correct string to use with your workflow.
+
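+For example, a namespace-level connection string and an event hub-specific connection string differ only by the `EntityPath` parameter. The names in the following sketch are placeholders used only to show the shape of each string:
+
+```azurepowershell
+# Namespace-level connection string - no EntityPath parameter (use this one with the connector).
+$namespaceConnectionString = "Endpoint=sb://contoso-ns.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<access-key>"
+
+# Event hub-specific connection string - note the EntityPath parameter (don't use this one).
+$eventHubConnectionString = "Endpoint=sb://contoso-ns.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<access-key>;EntityPath=contoso-hub"
+```
+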
+<a name="create-connection"></a>
+
+## Create an event hub connection
+
+When you add an Event Hubs trigger or action for the first time, you're prompted to create a connection to your event hub.
-1. Now continue with [Add an Event Hubs trigger](#add-trigger)
-or [Add an Event Hubs action](#add-action).
+1. When you're prompted, choose one of the following options:
+
+ * Provide the following connection information:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | The name to create for your connection |
+ | **Event Hubs Namespace** | Yes | <*event-hubs-namespace*> | Select the Event Hubs namespace that you want to use. |
+ |||||
+
+ * To manually enter your previously saved connection string, select **Manually enter connection information**. Learn [how to find your connection string](#permissions-connection-string).
+
+1. Select the Event Hubs policy to use, if not already selected, and then select **Create**.
+
+ ![Screenshot showing the provided connection information with "Create" selected.](./media/connectors-create-api-azure-event-hubs/create-event-hubs-connection-2.png)
+
+1. After you create your connection, continue with [Add an Event Hubs trigger](#add-trigger) or [Add an Event Hubs action](#add-action).
<a name="add-trigger"></a> ## Add Event Hubs trigger
-In Azure Logic Apps, every logic app must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific event happens or when a
-specific condition is met. Each time the trigger fires, the Logic Apps engine creates a logic app instance and starts running your app's workflow.
+In Azure Logic Apps, every workflow must start with a [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts), which fires when a specific condition is met. Each time the trigger fires, the Logic Apps service creates a workflow instance and starts running the steps in the workflow.
-This example shows how you can start a logic app workflow when new events are sent to your Event Hub.
+The following steps describe the general way to add a trigger, for example, **When events are available in Event Hub**. This example shows how to add a trigger that checks for new events in your event hub and starts a workflow run when new events exist.
-> [!NOTE]
-> All Event Hub triggers are *long-polling* triggers, which means that the trigger processes all the events
-> and then waits 30 seconds per partition for more events to appear in your Event Hub. So, if the trigger is
-> set up with four partitions, this delay might take up to two minutes before the trigger finishes polling
-> all the partitions. If no events are received within this delay, the trigger run is skipped. Otherwise,
-> the trigger continues reading events until your Event Hub is empty. The next trigger poll happens based
-> on the recurrence interval that you specify in the trigger's properties.
-
-1. In the Azure portal or Visual Studio,
-create a blank logic app, which opens Logic Apps Designer.
-This example uses the Azure portal.
+1. In the Logic Apps Designer, open your blank logic app workflow, if not already open.
-1. In the search box, enter "event hubs" as your filter.
-From the triggers list, select this trigger:
-**When events are available in Event Hub - Event Hubs**
+1. In the operation search box, enter `event hubs`. From the triggers list, select the trigger named **When events are available in Event Hub**.
![Select trigger](./media/connectors-create-api-azure-event-hubs/find-event-hubs-trigger.png)
-1. If you're prompted for connection details,
-[create your Event Hubs connection now](#create-connection).
+1. If you're prompted to create a connection to your event hub, [provide the requested connection information](#create-connection).
-1. In the trigger, provide information about the Event Hub that you want to monitor.
-For more properties, open the **Add new parameter** list. Selecting a parameter
-adds that property to the trigger card.
-
- ![Trigger properties](./media/connectors-create-api-azure-event-hubs/event-hubs-trigger.png)
+1. In the trigger, provide information about the event hub that you want to monitor, for example:
| Property | Required | Description | |-|-|-|
- | **Event Hub name** | Yes | The name for the Event Hub that you want to monitor |
+ | **Event Hub name** | Yes | The name for the event hub that you want to monitor |
| **Content type** | No | The event's content type. The default is `application/octet-stream`. |
- | **Consumer group name** | No | The [name for the Event Hub consumer group](../event-hubs/event-hubs-features.md#consumer-groups) to use for reading events. If not specified, the default consumer group is used. |
+ | **Consumer group name** | No | The [name for the Event Hubs consumer group](../event-hubs/event-hubs-features.md#consumer-groups) to use for reading events. If not specified, the default consumer group is used. |
| **Maximum events count** | No | The maximum number of events. The trigger returns between one and the number of events specified by this property. | | **Interval** | Yes | A positive integer that describes how often the workflow runs based on the frequency | | **Frequency** | Yes | The unit of time for the recurrence | ||||
- **Additional properties**
+ For more properties, open the **Add new parameter** list. Selecting a parameter adds that property to the trigger, for example:
+
+ ![Trigger properties](./media/connectors-create-api-azure-event-hubs/event-hubs-trigger.png)
+
+ **More properties**
| Property | Required | Description | |-|-|-|
- | **Content schema** | No | The JSON content schema for the events to read from the Event Hub. For example, if you specify the content schema, you can trigger the logic app for only those events that match the schema. |
+ | **Content schema** | No | The JSON content schema for the events to read from your event hub. For example, if you specify the content schema, you can trigger the workflow for only those events that match the schema. |
| **Minimum partition key** | No | Enter the minimum [partition](../event-hubs/event-hubs-features.md#partitions) ID to read. By default, all partitions are read. | | **Maximum partition key** | No | Enter the maximum [partition](../event-hubs/event-hubs-features.md#partitions) ID to read. By default, all partitions are read. | | **Time zone** | No | Applies only when you specify a start time because this trigger doesn't accept UTC offset. Select the time zone that you want to apply. <p>For more information, see [Create and run recurring tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md). | | **Start time** | No | Provide a start time in this format: <p>YYYY-MM-DDThh:mm:ss if you select a time zone<p>-or-<p>YYYY-MM-DDThh:mm:ssZ if you don't select a time zone<p>For more information, see [Create and run recurring tasks and workflows with Azure Logic Apps](../connectors/connectors-native-recurrence.md). | ||||
-1. When you're done, on the designer toolbar, choose **Save**.
+1. When you're done, on the designer toolbar, select **Save**.
+
+1. Now continue adding one or more actions so that you can perform other tasks using the trigger outputs.
+
+ For example, to filter events based on a specific value, such as a category, you can add a condition so that the
+ **Send event** action sends only the events that meet your condition.
+
+## Trigger polling behavior
+
+All Event Hubs triggers are *long-polling* triggers, which means that the trigger processes all the events and then waits 30 seconds per partition for more events to appear in your event hub.
-1. Now continue adding one or more actions to your logic app
-for the tasks you want to perform with the trigger results.
+For example, if the trigger is set up with four partitions, this delay might take up to two minutes before the trigger finishes polling all the partitions. If no events are received within this delay, the trigger run is skipped. Otherwise, the trigger continues reading events until your event hub is empty. The next trigger poll happens based on the recurrence interval that you specify in the trigger's properties.
- For example, to filter events based on a specific value,
- such as a category, you can add a condition so that the
- **Send event** action sends only the events that
- meet your condition.
+## Trigger checkpoint behavior
+
+When an Event Hubs trigger reads events from each partition in an event hub, the trigger uses its own state to maintain information about the stream offset (the event position in a partition) and the partitions from which the trigger reads events.
+
+Each time your workflow runs, the trigger reads events from a partition, starting from the stream offset that's kept by the trigger state. In round-robin fashion, the trigger iterates over each partition in the event hub and reads events in subsequent trigger runs. A single run gets events from a single partition at a time.
+
+The trigger doesn't store this checkpoint information in external storage, so there's no extra cost. However, be aware that updating the Event Hubs trigger resets the trigger's state, which might cause the trigger to read events from the start of the stream.
<a name="add-action"></a> ## Add Event Hubs action
-In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts)
-is a step in your workflow that follows a trigger or another action.
-For this example, the logic app starts with an Event Hubs trigger
-that checks for new events in your Event Hub.
+In Azure Logic Apps, an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) follows the trigger or another action and performs some operation in your workflow. The following steps describe the general way to add an action, for example, **Send event**. For this example, the workflow starts with an Event Hubs trigger that checks for new events in your event hub.
-1. In the Azure portal or Visual Studio,
-open your logic app in Logic Apps Designer.
-This example uses the Azure portal.
+1. In the Logic Apps Designer, open your logic app workflow, if not already open.
-1. Under the trigger or action, choose **New step**.
+1. Under the trigger or another action, add a new step.
- To add an action between existing steps,
- move your mouse over the connecting arrow.
- Choose the plus sign (**+**) that appears,
- and then select **Add an action**.
+ To add a step between existing steps, move your mouse over the arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
-1. In the search box, enter "event hubs" as your filter.
-From the actions list, select this action:
-**Send event - Event Hubs**
+1. In the operation search box, enter `event hubs`. From the actions list, select the action named **Send event**.
![Select "Send event" action](./media/connectors-create-api-azure-event-hubs/find-event-hubs-action.png)
-1. If you're prompted for connection details,
-[create your Event Hubs connection now](#create-connection).
+1. If you're prompted to create a connection to your event hub, [provide the requested connection information](#create-connection).
-1. In the action, provide information about the events that you want to send.
-For more properties, open the **Add new parameter** list. Selecting a parameter
-adds that property to the action card.
-
- ![Select Event Hub name and provide event content](./media/connectors-create-api-azure-event-hubs/event-hubs-send-event-action.png)
+1. In the action, provide information about the events that you want to send.
| Property | Required | Description | |-|-|-|
- | **Event Hub name** | Yes | The Event Hub where you want to send the event |
+ | **Event Hub name** | Yes | The event hub where you want to send the event |
| **Content** | No | The content for the event you want to send | | **Properties** | No | The app properties and values to send | | **Partition key** | No | The [partition](../event-hubs/event-hubs-features.md#partitions) ID for where to send the event | ||||
- For example, you can send the output from your Event Hubs trigger to another Event Hub:
-
- ![Send event example](./media/connectors-create-api-azure-event-hubs/event-hubs-send-event-action-example.png)
-
-1. When you're done, on the designer toolbar, choose **Save**.
-
-<a name="create-connection"></a>
-
-## Connect to your Event Hub
-
+ For more properties, open the **Add new parameter** list. Selecting a parameter adds that property to the action, for example:
-1. When you're prompted for connection information,
-provide these details:
+ ![Select event hub name and provide event content](./media/connectors-create-api-azure-event-hubs/event-hubs-send-event-action.png)
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Connection Name** | Yes | <*connection-name*> | The name to create for your connection |
- | **Event Hubs Namespace** | Yes | <*event-hubs-namespace*> | Select the Event Hubs namespace you want to use. |
- |||||
+ For example, you can send the output from your Event Hubs trigger to another event hub:
- For example:
-
- ![Create Event Hub connection](./media/connectors-create-api-azure-event-hubs/create-event-hubs-connection-1.png)
-
- To manually enter the connection string,
- select **Manually enter connection information**.
- Learn [how to find your connection string](#permissions-connection-string).
-
-2. Select the Event Hubs policy to use,
-if not already selected. Choose **Create**.
-
- ![Create Event Hub connection, part 2](./media/connectors-create-api-azure-event-hubs/create-event-hubs-connection-2.png)
+ ![Send event example](./media/connectors-create-api-azure-event-hubs/event-hubs-send-event-action-example.png)
-3. After you create your connection,
-continue with [Add Event Hubs trigger](#add-trigger)
-or [Add Event Hubs action](#add-action).
+1. When you're done, on the designer toolbar, select **Save**.
## Connector reference
-For technical details, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/eventhubs/).
+For all the operations and other technical information, such as properties, limits, and so on, review the [Event Hubs connector's reference page](/connectors/eventhubs/).
> [!NOTE]
-> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
-> this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
+> For logic apps hosted in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md),
+> the connector's ISE version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead.
## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
digital-twins Concepts Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-query-language.md
When writing queries for Azure Digital Twins, keep the following considerations
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="EscapedSingleQuote":::
+* **Consider possible latency**: After making a change to the data in your graph, there may be a latency of up to 10 seconds before the changes are reflected in queries. The [GetDigitalTwin API](how-to-manage-twin.md#get-data-for-a-digital-twin) does not experience this delay, so if you need an instant response, use the API call instead of querying to see your change reflected immediately.
+ ## Next steps Learn how to write queries and see client code examples in [How-to: Query the twin graph](how-to-query-graph.md).
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/reference-service-limits.md
When a limit is reached, the service throttles additional requests. This will re
To manage this, here are some recommendations for working with limits. * **Use retry logic.** The [Azure Digital Twins SDKs](how-to-use-apis-sdks.md) implement retry logic for failed requests, so if you are working with a provided SDK, this is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying. * **Use thresholds and notifications to warn about approaching limits.** Some of the service limits for Azure Digital Twins have corresponding [metrics](troubleshoot-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Troubleshooting: Set up alerts](troubleshoot-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
+* **Deploy at scale across multiple instances.** Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances.
## Next steps
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, Megaport, Orange, Orixcom | | **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | 10G, 100G | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport | | **Frankfurt** | [Interxion FRA11](https://www.interxion.com/Locations/frankfurt/) | 1 | Germany West Central | 10G, 100G | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GEANT, InterCloud, Interxion, Megaport, Orange, Telia Carrier, T-Systems |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Equinix |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | 10G, 100G | Deutsche Telekom AG, Equinix |
| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | 10G, 100G | Equinix, Megaport, Swisscom | | **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | 10G | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon | | **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | 10G | China Mobile International, China Telecom Global, iAdvantage, Megaport, PCCW Global Limited, SingTel |
firewall Deploy Ps Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/deploy-ps-policy.md
+
+ Title: 'Deploy and configure Azure Firewall policy using Azure PowerShell'
+description: In this article, you learn how to deploy and configure Azure Firewall policy using the Azure PowerShell.
+++ Last updated : 05/03/2021++
+#Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
++
+# Deploy and configure Azure Firewall policy using Azure PowerShell
+
+Controlling outbound network access is an important part of an overall network security plan. For example, you may want to limit access to web sites. Or, you may want to limit the outbound IP addresses and ports that can be accessed.
+
+One way you can control outbound network access from an Azure subnet is with Azure Firewall and Firewall Policy. With Azure Firewall, you can configure:
+
+* Application rules that define fully qualified domain names (FQDNs) that can be accessed from a subnet.
+* Network rules that define source address, protocol, destination port, and destination address.
+
+Network traffic is subjected to the configured firewall rules when you route your network traffic to the firewall as the subnet default gateway.
+
+For this article, you create a simplified single VNet with three subnets for easy deployment. For production deployments, a [hub and spoke model](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) is recommended, where the firewall is in its own VNet. The workload servers are in peered VNets in the same region with one or more subnets.
+
+* **AzureFirewallSubnet** - the firewall is in this subnet.
+* **Workload-SN** - the workload server is in this subnet. This subnet's network traffic goes through the firewall.
+* **AzureBastionSubnet** - the subnet used for Azure Bastion, which is used to connect to the workload server. For more information about Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md)
+
+![Tutorial network infrastructure](media/deploy-ps/tutorial-network.png)
+
+In this article, you learn how to:
++
+* Set up a test network environment
+* Deploy a firewall
+* Create a default route
+* Create a firewall policy
+* Configure an application rule to allow access to www.google.com
+* Configure a network rule to allow access to external DNS servers
+* Test the firewall
+
+If you prefer, you can complete this procedure using the [Azure portal](tutorial-firewall-deploy-portal-policy.md).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+This procedure requires that you run PowerShell locally. You must have the Azure PowerShell module installed. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). After you verify the PowerShell version, run `Connect-AzAccount` to create a connection with Azure.
+
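+For reference, the version check and the sign-in described above look like the following commands:
+
+```azurepowershell
+# Check which version of the Az module is installed, then sign in to Azure.
+Get-Module -ListAvailable Az
+Connect-AzAccount
+```
+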
+## Set up the network
+
+First, create a resource group to contain the resources needed to deploy the firewall. Then create a VNet, subnets, and test servers.
+
+### Create a resource group
+
+The resource group contains all the resources for the deployment.
+
+```azurepowershell
+New-AzResourceGroup -Name Test-FW-RG -Location "East US"
+```
+
+### Create a virtual network and Azure Bastion host
+
+This virtual network has three subnets:
+
+> [!NOTE]
+> The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size).
+
+```azurepowershell
+$Bastionsub = New-AzVirtualNetworkSubnetConfig -Name AzureBastionSubnet -AddressPrefix 10.0.0.0/27
+$FWsub = New-AzVirtualNetworkSubnetConfig -Name AzureFirewallSubnet -AddressPrefix 10.0.1.0/26
+$Worksub = New-AzVirtualNetworkSubnetConfig -Name Workload-SN -AddressPrefix 10.0.2.0/24
+```
+Now, create the virtual network:
+
+```azurepowershell
+$testVnet = New-AzVirtualNetwork -Name Test-FW-VN -ResourceGroupName Test-FW-RG `
+-Location "East US" -AddressPrefix 10.0.0.0/16 -Subnet $Bastionsub, $FWsub, $Worksub
+```
+### Create public IP address for Azure Bastion host
+
+```azurepowershell
+$publicip = New-AzPublicIpAddress -ResourceGroupName Test-FW-RG -Location "East US" `
+ -Name Bastion-pip -AllocationMethod static -Sku standard
+```
+
+### Create Azure Bastion host
+
+```azurepowershell
+New-AzBastion -ResourceGroupName Test-FW-RG -Name Bastion-01 -PublicIpAddress $publicip -VirtualNetwork $testVnet
+```
+### Create a virtual machine
+
+Now create the workload virtual machine, and place it in the appropriate subnet.
+When prompted, type a user name and password for the virtual machine.
++
+
+```azurepowershell
+#Create the NIC
+$wsn = Get-AzVirtualNetworkSubnetConfig -Name Workload-SN -VirtualNetwork $testvnet
+$NIC01 = New-AzNetworkInterface -Name Srv-Work -ResourceGroupName Test-FW-RG -Location "East US" -Subnet $wsn
+
+#Define the virtual machine
+$VirtualMachine = New-AzVMConfig -VMName Srv-Work -VMSize "Standard_DS2"
+$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName Srv-Work -Credential (Get-Credential) -ProvisionVMAgent -EnableAutoUpdate
+$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $NIC01.Id
+$VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' -Skus '2019-Datacenter' -Version latest
+
+#Create the virtual machine
+New-AzVM -ResourceGroupName Test-FW-RG -Location "East US" -VM $VirtualMachine -Verbose
+```
+
+## Create a Firewall Policy
+
+```azurepowershell
+$fwpol = New-AzFirewallPolicy -Name fw-pol -ResourceGroupName Test-FW-RG -Location eastus
+```
+## Configure a firewall policy application rule
+
+The application rule allows outbound access to `www.google.com`.
+
+```azurepowershell
+$RCGroup = New-AzFirewallPolicyRuleCollectionGroup -Name AppRCGroup -Priority 100 -FirewallPolicyObject $fwpol
+$apprule1 = New-AzFirewallPolicyApplicationRule -Name Allow-google -SourceAddress "10.0.2.0/24" -Protocol "http:80","https:443" -TargetFqdn www.google.com
+$appcoll1 = New-AzFirewallPolicyFilterRuleCollection -Name App-coll01 -Priority 100 -Rule $appRule1 -ActionType "Allow"
+Set-AzFirewallPolicyRuleCollectionGroup -Name $RCGroup.Name -Priority 100 -RuleCollection $appcoll1 -FirewallPolicyObject $fwPol
+```
+
+Azure Firewall includes a built-in rule collection for infrastructure FQDNs that are allowed by default. These FQDNs are specific for the platform and can't be used for other purposes. For more information, see [Infrastructure FQDNs](infrastructure-fqdns.md).
+
+## Configure a firewall policy network rule
+
+The network rule allows outbound access to two IP addresses at port 53 (DNS).
+
+```azurepowershell
+$RCGroup = New-AzFirewallPolicyRuleCollectionGroup -Name NetRCGroup -Priority 200 -FirewallPolicyObject $fwpol
+$netrule1 = New-AzFirewallPolicyNetworkRule -name Allow-DNS -protocol UDP -sourceaddress 10.0.2.0/24 -destinationaddress 209.244.0.3,209.244.0.4 -destinationport 53
+$netcoll1 = New-AzFirewallPolicyFilterRuleCollection -Name Net-coll01 -Priority 200 -Rule $netrule1 -ActionType "Allow"
+Set-AzFirewallPolicyRuleCollectionGroup -Name $RCGroup.Name -Priority 200 -RuleCollection $netcoll1 -FirewallPolicyObject $fwPol
+```
+
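+Optionally, confirm that both rule collection groups are now attached to the policy. This is just a sanity check (assuming the names used above), not a required step:
+
+```azurepowershell
+# List the rule collection groups attached to the fw-pol policy.
+$fwpol = Get-AzFirewallPolicy -Name fw-pol -ResourceGroupName Test-FW-RG
+$fwpol.RuleCollectionGroups
+```
+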
+## Deploy the firewall
+
+Now deploy the firewall into the virtual network.
+
+```azurepowershell
+# Get a Public IP for the firewall
+$FWpip = New-AzPublicIpAddress -Name "fw-pip" -ResourceGroupName Test-FW-RG `
+ -Location "East US" -AllocationMethod Static -Sku Standard
+# Create the firewall
+$Azfw = New-AzFirewall -Name Test-FW01 -ResourceGroupName Test-FW-RG -Location "East US" -VirtualNetwork $testVnet -PublicIpAddress $FWpip -FirewallPolicyId $fwpol.Id
++
+#Save the firewall private IP address for future use
+
+$AzfwPrivateIP = $Azfw.IpConfigurations.privateipaddress
+$AzfwPrivateIP
+```
+
+Note the private IP address. You'll use it later when you create the default route.
+
+## Create a default route
+
+Create a route table, with BGP route propagation disabled:
+
+```azurepowershell
+$routeTableDG = New-AzRouteTable `
+ -Name Firewall-rt-table `
+ -ResourceGroupName Test-FW-RG `
+ -location "East US" `
+ -DisableBgpRoutePropagation
+
+#Create a route
+ Add-AzRouteConfig `
+ -Name "DG-Route" `
+ -RouteTable $routeTableDG `
+ -AddressPrefix 0.0.0.0/0 `
+ -NextHopType "VirtualAppliance" `
+ -NextHopIpAddress $AzfwPrivateIP `
+ | Set-AzRouteTable
+
+#Associate the route table to the subnet
+
+Set-AzVirtualNetworkSubnetConfig `
+ -VirtualNetwork $testVnet `
+ -Name Workload-SN `
+ -AddressPrefix 10.0.2.0/24 `
+ -RouteTable $routeTableDG | Set-AzVirtualNetwork
+```
+++
+## Change the primary and secondary DNS address for the **Srv-Work** network interface
+
+For testing purposes in this procedure, configure the server's primary and secondary DNS addresses. This isn't a general Azure Firewall requirement.
+
+```azurepowershell
+$NIC01.DnsSettings.DnsServers.Add("209.244.0.3")
+$NIC01.DnsSettings.DnsServers.Add("209.244.0.4")
+$NIC01 | Set-AzNetworkInterface
+```
+
+## Test the firewall
+
+Now, test the firewall to confirm that it works as expected.
+
+1. Connect to **Srv-Work** virtual machine using Bastion, and sign in.
+
+ :::image type="content" source="media/deploy-ps/bastion.png" alt-text="Connect using Bastion.":::
+
+1. On **Srv-Work**, open a PowerShell window and run the following commands:
+
+ ```
+ nslookup www.google.com
+ nslookup www.microsoft.com
+ ```
+
+ Both commands should return answers, showing that your DNS queries are getting through the firewall.
+
+1. Run the following commands:
+
+ ```
+ Invoke-WebRequest -Uri https://www.google.com
+ Invoke-WebRequest -Uri https://www.google.com
+
+ Invoke-WebRequest -Uri https://www.microsoft.com
+ Invoke-WebRequest -Uri https://www.microsoft.com
+ ```
+
+ The `www.google.com` requests should succeed, and the `www.microsoft.com` requests should fail. This demonstrates that your firewall rules are operating as expected.
+
+So now you've verified that the firewall policy rules are working:
+
+* You can resolve DNS names using the configured external DNS server.
+* You can browse to the one allowed FQDN, but not to any others.
+
+## Clean up resources
+
+You can keep your firewall resources for further testing, or if no longer needed, delete the **Test-FW-RG** resource group to delete all firewall-related resources:
+
+```azurepowershell
+Remove-AzResourceGroup -Name Test-FW-RG
+```
+
+## Next steps
+
+* [Tutorial: Monitor Azure Firewall logs](./firewall-diagnostics.md)
firewall Tutorial Firewall Deploy Portal Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/tutorial-firewall-deploy-portal-policy.md
Previously updated : 04/28/2021 Last updated : 05/03/2021 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
In this tutorial, you learn how to:
> * Configure a NAT rule to allow a remote desktop to the test server > * Test the firewall
+If you prefer, you can complete this procedure using [Azure PowerShell](deploy-ps-policy.md).
## Prerequisites
governance Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/azure-management.md
required in continuous succession over the lifespan of a resource. This resource
with the initial deployment, through continued operation, and finally when retired. :::image type="complex" source="../monitoring/media/management-overview/management-capabilities.png" alt-text="Diagram of the disciplines of Management in Azure." border="false":::
- Diagram that shows the Migrate, Secure, Protect, Monitor, Configure, and Govern elements of the wheel of services that support Management and Governance in Azure. Secure has Security management and Threat protection as sub items. Protect has Backup and Disaster recovery as sub items. Monitor has App, Infra and network monitoring, and Log Analytics and Diagnostics as sub items. Configure has Configuration, Update management, Automation, and Scripting as sub items. And Govern has Policy management and Cost management as sub items.
+ Diagram that shows the Migrate, Secure, Protect, Monitor, Configure, and Govern elements of the wheel of services that support Management and Governance in Azure. Secure has Security management and Threat protection as sub items. Protect has Backup and Disaster recovery as sub items. Monitor has App, infrastructure and network monitoring, and Log Analytics and Diagnostics as sub items. Configure has Configuration, Update Management, Automation, and Scripting as sub items. And Govern has Policy management and Cost management as sub items.
:::image-end::: No single Azure service completely fills the requirements of a particular management area. Instead,
platforms.
To learn more about Azure Governance, see these articles: - See the [Azure Governance hub](./index.yml).-- See [Governance in the Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/govern/)
+- See [Governance in the Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/govern/)
governance Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/concepts/lifecycle.md
Like many resources within Azure, a blueprint in Azure Blueprints has a typical
lifecycle. They're created, deployed, and finally deleted when no longer needed or relevant. Azure Blueprints supports standard lifecycle operations. It then builds upon them to provide additional levels of status that support common continuous integration and continuous deployment pipelines for
-organizations that manage their Infrastructure as Code ΓÇô a key element in DevOps.
+organizations that manage their Infrastructure as Code - a key element in DevOps.
To fully understand a blueprint and the stages, we'll cover a standard lifecycle:
subscription. During blueprint unassignment, the following occurs:
- Find out how to make use of [blueprint resource locking](./resource-locking.md). - Learn how to [update existing assignments](../how-to/update-existing-assignments.md). - Resolve issues during the assignment of a blueprint with
- [general troubleshooting](../troubleshoot/general.md).
+ [general troubleshooting](../troubleshoot/general.md).
governance Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/concepts/parameters.md
you define as required vs what can be changed during assignment.
1. The **Edit Artifact** page displays value options appropriate to the artifact selected. Each parameter on the artifact has a title, a value box, and a checkbox. Set the box to unchecked to
- make it a **static parameter**. In the example below, only _Location_ is a **static parameter**
- as it's unchecked and _Resource Group Name_ is checked.
+ make it a **static parameter**. In the following example, only _Location_ is a **static
+ parameter** as it's unchecked and _Resource Group Name_ is checked.
:::image type="content" source="../media/parameters/static-parameter.png" alt-text="Screenshot of static parameters on a blueprint artifact." border="false":::
different name for every assignment of the blueprint. For a list of blueprint fu
1. On the **Assign blueprint** page, find the **Artifact parameters** section. Each artifact with at least one **dynamic parameter** displays the artifact and the configuration options. Provide
- required values to the parameters before assigning the blueprint. In the example below, _Name_ is
- a **dynamic parameter** that must be defined to complete blueprint assignment.
+ required values to the parameters before assigning the blueprint. In the following example,
+ _Name_ is a **dynamic parameter** that must be defined to complete blueprint assignment.
:::image type="content" source="../media/parameters/dynamic-parameter.png" alt-text="Screenshot of setting dynamic parameters during blueprint assignment." border="false":::
a dynamic parameter that isn't provided during assignment, the assignment will f
- Find out how to make use of [blueprint resource locking](./resource-locking.md). - Learn how to [update existing assignments](../how-to/update-existing-assignments.md). - Resolve issues during the assignment of a blueprint with
- [general troubleshooting](../troubleshoot/general.md).
+ [general troubleshooting](../troubleshoot/general.md).
governance Create Blueprint Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/create-blueprint-azurecli.md
assignment on the resource group.
--description 'Contains the resource template deployment and a role assignment.' ```
-1. Add role assignment at subscription. In the example below, the principal identities granted the
- specified role are configured to a parameter that is set during blueprint assignment. This
+1. Add role assignment at subscription. In the following example, the principal identities granted
+ the specified role are configured to a parameter that is set during blueprint assignment. This
example uses the _Contributor_ built-in role with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`.
In this quickstart, you've created, assigned, and removed a blueprint with Azure
about Azure Blueprints, continue to the blueprint lifecycle article. > [!div class="nextstepaction"]
-> [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
+> [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
governance Create Blueprint Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/create-blueprint-portal.md
Manager template and role assignment on the new resource group.
} ```
- 1. Clear the **storageAccountType** check box and note that the drop-down list contains only
+ 1. Clear the **storageAccountType** check box and note that the dropdown list contains only
values included in the ARM template under **allowedValues**. Select the box to set it back to a dynamic parameter.
assignment to the new resource group. You can fix both by following these steps:
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, right-click the one that you previously created and select **Edit
- blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously created
+ and select **Edit blueprint**.
1. In **Blueprint description**, provide some information about the blueprint and the artifacts that compose it. In this case, enter something like: **This blueprint sets tag policy and role
Publishing makes the blueprint available to be assigned to a subscription.
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, right-click the one you previously created and select **Publish
- blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one you previously created and
+ select **Publish blueprint**.
1. In the pane that opens, provide a **Version** (letters, numbers, and hyphens with a maximum length of 20 characters), such as **v1**. Optionally, enter text in **Change notes**, such as
is saved to a subscription, it can only be assigned to that subscription.
1. Select **Blueprint definitions** from the page on the left.
-1. In the list of blueprints, right-click the one that you previously created (or select the
- ellipsis) and select **Assign blueprint**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously created
+ (or select the ellipsis) and select **Assign blueprint**.
-1. On the **Assign blueprint** page, in the **Subscription** drop-down list, select the
+1. On the **Assign blueprint** page, in the **Subscription** dropdown list, select the
subscriptions that you want to deploy this blueprint to. If there are supported Enterprise offerings available from
is saved to a subscription, it can only be assigned to that subscription.
1. Provide a **Display name** for the new subscription.
- 1. Select the available **Offer** from the drop-down list.
+ 1. Select the available **Offer** from the dropdown list.
1. Use the ellipsis to select the [management group](../management-groups/overview.md) that the subscription will be a child of.
is saved to a subscription, it can only be assigned to that subscription.
blueprint. To learn more, see [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
-1. Leave the **Blueprint definition version** drop-down selection of **Published** versions on the
- **v1** entry. (The default is the most recently published version.)
+1. Leave the **Blueprint definition version** dropdown list selection of **Published** versions on
+ the **v1** entry. (The default is the most recently published version.)
1. For **Lock Assignment**, leave the default of **Don't Lock**. For more information, see [Blueprints resource locking](./concepts/resource-locking.md).
is saved to a subscription, it can only be assigned to that subscription.
Value** to **ContosoIT**. 1. For **ResourceGroup**, provide a **Name** of **StorageAccount** and a **Location** of **East US
- 2** from the drop-down list.
+ 2** from the dropdown list.
> [!NOTE] > For each artifact that you added under the resource group during blueprint definition, that
Now that the blueprint has been assigned to a subscription, verify the progress
1. Select **Assigned blueprints** from the page on the left.
-1. In the list of blueprints, right-click the one that you previously assigned and select **View
- assignment details**.
+1. In the list of blueprints, select and hold (or right-click) the one that you previously assigned
+ and select **View assignment details**.
:::image type="content" source="./media/create-blueprint-portal/view-assignment-details.png" alt-text="Screenshot of the blueprint assignment context menu with the 'View assignment details' option selected." border="false":::
governance Create Blueprint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/create-blueprint-powershell.md
assignment on the resource group.
1. Add role assignment at subscription. The **ArtifactFile** defines the _kind_ of artifact, the properties align to the role definition identifier, and the principal identities are passed as an
- array of values. In the example below, the principal identities granted the specified role are
- configured to a parameter that is set during blueprint assignment. This example uses the
+ array of values. In the following example, the principal identities granted the specified role
+ are configured to a parameter that is set during blueprint assignment. This example uses the
_Contributor_ built-in role with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`. - JSON file - \artifacts\roleContributor.json
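For orientation, a minimal sketch of what a role assignment artifact file like this might contain — the parameter name `principalIds` is an assumed example, not necessarily the one used in the sample:

```JSON
{
  "kind": "roleAssignment",
  "properties": {
    "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
    "principalIds": "[parameters('principalIds')]"
  }
}
```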
governance Create Blueprint Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/create-blueprint-rest-api.md
values:
1. Add role assignment at subscription. The **Request Body** defines the _kind_ of artifact, the properties align to the role definition identifier, and the principal identities are passed as an
- array of values. In the example below, the principal identities granted the specified role are
- configured to a parameter that is set during blueprint assignment. This example uses the
+ array of values. In the following example, the principal identities granted the specified role
+ are configured to a parameter that is set during blueprint assignment. This example uses the
_Contributor_ built-in role with a GUID of `b24988ac-6180-42a0-ab88-20f7382dd24c`. - REST API URI
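As a hedged illustration, the artifact is typically created with a `PUT` against a URI of roughly the form `.../providers/Microsoft.Blueprint/blueprints/MyBlueprint/artifacts/roleContributor?api-version=2018-11-01-preview` (the artifact name and API version here are assumptions), with a request body that mirrors the JSON artifact file shown for the PowerShell flow above:

```JSON
{
  "kind": "roleAssignment",
  "properties": {
    "displayName": "Contributor role assignment",
    "roleDefinitionId": "/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
    "principalIds": "[parameters('principalIds')]"
  }
}
```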
In this quickstart, you've created, assigned, and removed a blueprint with REST
about Azure Blueprints, continue to the blueprint lifecycle article. > [!div class="nextstepaction"]
-> [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
+> [Learn about the blueprint lifecycle](./concepts/lifecycle.md)
governance Update Existing Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/how-to/update-existing-assignments.md
an existing assignment, including:
1. Select **Assigned blueprints** from the page on the left. 1. In the list of blueprints, select the blueprint assignment. Then use the **Update assignment**
- button OR right-click the blueprint assignment and select **Update assignment**.
+ button OR select and hold (or right-click) the blueprint assignment and select **Update
+ assignment**.
:::image type="content" source="../media/update-existing-assignments/update-assignment.png" alt-text="Screenshot of the Blueprint assignment page with the 'Update assignment' button highlighted." border="false":::
an existing assignment, including:
:::image type="content" source="../media/update-existing-assignments/updated-assignment.png" alt-text="Screenshot of an updated blueprint assignment showing the lock mode changed." border="false":::
-1. Explore details about other **Assignment operations** using the drop-down. The table of **Managed
- resources** updates by selected assignment operation.
+1. Explore details about other **Assignment operations** using the dropdown list. The table of
+ **Managed resources** updates by selected assignment operation.
:::image type="content" source="../media/update-existing-assignments/assignment-operations.png" alt-text="Screenshot of an updated blueprint assignment showing the assignment operations and their status." border="false":::
but any change that would result in an error through Resource Manager will also
failure of the assignment. There's no limit on how many times an assignment can be updated. If an error occurs, determine the
-error and make another update to the assignment. Example error scenarios:
+error and make another update to the assignment. Example error scenarios:
- A bad parameter - An already existing object
error and make another update to the assignment. Example error scenarios:
- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md). - Resolve issues during the assignment of a blueprint with
- [general troubleshooting](../troubleshoot/general.md).
+ [general troubleshooting](../troubleshoot/general.md).
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/overview.md
including through a continuous integration and continuous delivery (CI/CD) pipel
each is assigned to a subscription in a single operation that can be audited and tracked. Nearly everything that you want to include for deployment in Azure Blueprints can be accomplished
-with an ARM template. However, an ARM template is a document that doesn't exist natively in Azure –
+with an ARM template. However, an ARM template is a document that doesn't exist natively in Azure -
each is stored either locally or in source control. The template gets used for deployments of one or more Azure resources, but once those resources deploy there's no active connection or relationship to the template.
as artifacts:
|Resource | Hierarchy options| Description | ||||
-|Resource Groups | Subscription | Create a new resource group for use by other artifacts within the blueprint. These placeholder resource groups enable you to organize resources exactly the way you want them structured and provides a scope limiter for included policy and role assignment artifacts and ARM templates. |
+|Resource Groups | Subscription | Create a new resource group for use by other artifacts within the blueprint. These placeholder resource groups enable you to organize resources exactly the way you want them structured and provides a scope limiter for included policy and role assignment artifacts and ARM templates. |
|ARM template | Subscription, Resource Group | Templates, including nested and linked templates, are used to compose complex environments. Example environments: a SharePoint farm, Azure Automation State Configuration, or a Log Analytics workspace. | |Policy Assignment | Subscription, Resource Group | Allows assignment of a policy or initiative to the subscription the blueprint is assigned to. The policy or initiative must be within the scope of the blueprint definition location. If the policy or initiative has parameters, these parameters are assigned at creation of the blueprint or during blueprint assignment. | |Role Assignment | Subscription, Resource Group | Add an existing user or group to a built-in role to make sure the right people always have the right access to your resources. Role assignments can be defined for the entire subscription or nested to a specific resource group included in the blueprint. |
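To make the hierarchy options concrete, here is a minimal, hypothetical blueprint definition body that declares a resource group placeholder at the subscription scope; because the placeholder's name and location are left unset, they surface as dynamic parameters at assignment time, and other artifacts can then target either the subscription or that placeholder (the property names follow the Microsoft.Blueprint resource shape, but the values are illustrative):

```JSON
{
  "properties": {
    "targetScope": "subscription",
    "description": "Example blueprint with a placeholder resource group",
    "resourceGroups": {
      "StorageRG": {
        "metadata": { "displayName": "Placeholder resource group for storage artifacts" }
      }
    }
  }
}
```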
on Channel 9.
- [Create a blueprint - Portal](./create-blueprint-portal.md). - [Create a blueprint - PowerShell](./create-blueprint-powershell.md).-- [Create a blueprint - REST API](./create-blueprint-rest-api.md).
+- [Create a blueprint - REST API](./create-blueprint-rest-api.md).
governance Blueprint Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/reference/blueprint-functions.md
as parameter _resourceName_ to the template artifact.
- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md). - Learn how to [update existing assignments](../how-to/update-existing-assignments.md). - Resolve issues during the assignment of a blueprint with
- [general troubleshooting](../troubleshoot/general.md).
+ [general troubleshooting](../troubleshoot/general.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
The following table provides a list of the blueprint parameters:
|Hub resource group|Resource group|Resource group location|Locked - Uses hub location| |Azure Firewall template|Resource Manager template|Azure Firewall private IP address|| |Azure Log Analytics and Diagnostics template|Resource Manager template|Log Analytics workspace location|Location where Log Analytics workspace is created; run `Get-AzLocation | Where-Object Providers -like 'Microsoft.OperationalInsights' | Select DisplayName` in Azure PowerShell to see available regions|
-|Azure Log Analytics and Diagnostics template|Resource Manager template|Azure Automation account ID (optional)|Automation account resource ID; used to create a linked service between Log Analytics and an Automation account|
+|Azure Log Analytics and Diagnostics template|Resource Manager template|Azure Automation account ID (optional) |Automation account resource ID; used to create a linked service between Log Analytics and an Automation account|
|Azure Network Security Group template|Resource Manager template|Enable NSG flow logs|Enter 'true' or 'false' to enable or disable NSG flow logs| |Azure Virtual Network hub template|Resource Manager template|Virtual network address prefix|Virtual network address prefix for hub virtual network| |Azure Virtual Network hub template|Resource Manager template|Firewall subnet address prefix|Firewall subnet address prefix for hub virtual network|
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark.md
# Azure Security Benchmark blueprint sample
-The Azure Security Benchmark blueprint sample provides governance guard-rails using
+The Azure Security Benchmark blueprint sample provides governance guardrails using
[Azure Policy](../../policy/overview.md) that help you assess specific [Azure Security Benchmark v1](../../../security/benchmarks/overview.md) controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture where they intend
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-foundation/index.md
estate. This sample will deploy and enforce resources, policies, and templates t
organization to confidently get started with Azure. :::image type="complex" source="../../media/caf-blueprints/caf-foundation-architecture.png" alt-text="C A F Foundation, image describes what gets installed as part of C A F guidance for creating a foundation to get started with Azure." border="false":::
- Describes an Azure architecture which is achieved by deploying the C A F Foundation blueprint. It's applicable to a subscription with resource groups which consists of a storage account for storing logs, Log analytics configured to store in the storage account. It also depicts Azure Key Vault configured with Azure Security Center standard setup. All these core infrastructures are accessed using Azure Active Directory and enforced using Azure Policy.
+ Describes an Azure architecture which is achieved by deploying the C A F Foundation blueprint. It's applicable to a subscription with resource groups which consists of a storage account for storing logs, Log Analytics configured to store in the storage account. It also depicts Azure Key Vault configured with Azure Security Center standard setup. All these core infrastructures are accessed using Azure Active Directory and enforced using Azure Policy.
:::image-end::: This implementation incorporates several Azure services used to provide a secure, fully monitored,
enterprise-ready foundation. This environment is composed of:
version) provides threat protection for your migrated workloads - The blueprint also defines and deploys [Azure Policy](../../../policy/overview.md) definitions: - Policy definitions:
- - Tagging (CostCenter) applied to resources groups
+ - Tagging (CostCenter) applied to resource groups
- Append resources in resource group with the CostCenter Tag - Allowed Azure Region for Resources and Resource Groups - Allowed Storage Account SKUs (choose while deploying)
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/deploy.md
sample as a starter.
this sample**. 1. Enter the _Basics_ of the blueprint sample:
- - **Blueprint name** Provide a name for your copy of the CAF migration landing zone blueprint
+ - **Blueprint name** Provide a name for your copy of the CAF Migration landing zone blueprint
sample. - **Definition location** Use the ellipsis and select the management group to save your copy of the sample to.
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/caf-migrate-landing-zone/index.md
sample will deploy and enforce resources, policies, and templates that will allo
confidently get started with Azure. :::image type="complex" source="../../media/caf-blueprints/caf-migration-landing-zone-architecture.png" alt-text="C A F Migration landing zone, image describes what gets installed as part of C A F guidance for initial landing zone." border="false":::
- Describes an Azure architecture which is achieved by deploying the C A F migration blueprint. It's applicable to a subscription with resource groups which consists of an Azure virtual network, storage account for storing logs, Log analytics configured to store in the storage account. It also depicts Azure Key Vault configured and Azure Migrate initial setup created. All these core infrastructures are accessed using Azure Active directory.
+ Describes an Azure architecture which is achieved by deploying the C A F migration blueprint. It's applicable to a subscription with resource groups which consists of an Azure virtual network, storage account for storing logs, Log Analytics configured to store in the storage account. It also depicts Azure Key Vault configured and Azure Migrate initial setup created. All these core infrastructures are accessed using Azure Active Directory.
:::image-end::: This environment is composed of several Azure services used to provide a secure, fully monitored,
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm/control-mapping.md
on SQL servers.
## CM-7 (5) Least Functionality | Authorized Software / Whitelisting Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allow list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive application controls for defining safe applications should be enabled on your machines
been configured.
## CM-11 User-Installed Software Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allow list is recommended
-but has not yet been configured.
+definition that helps you monitor virtual machines where an application allowlist is recommended but
+has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines
indicators helps you ensure that system authenticators comply with your organiza
identification and authentication policy. - Show audit results from Linux VMs that do not have the passwd file permissions set to 0644-- Show audit results from Linux VMs that have accounts without passwords
+- Show audit results from Linux VMs that have accounts without passwords
## IA-5 (1) Authenticator Management | Password-Based Authentication
vulnerabilities in your deployed resources.
## SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
properly encrypted can help you meet your organization's requirements or protect
unauthorized disclosure and modification. - API App should only be accessible over HTTPS-- Show audit results from Windows web servers that are not using secure communication protocols
+- Show audit results from Windows web servers that are not using secure communication protocols
- Function App should only be accessible over HTTPS - Only secure connections to your Azure Cache for Redis should be enabled - Web Application should only be accessible over HTTPS
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm/index.md
# Overview of the Canada Federal PBMM blueprint sample Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM) blueprint sample provides a
-set of governance guard-rails using [Azure Policy](../../../policy/overview.md) that help towards
+set of governance guardrails using [Azure Policy](../../../policy/overview.md) that help toward
[Canada Federal PBMM](https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/cloud-services/government-canada-security-control-profile-cloud-based-it-services.html) attestation.
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-1-0.md
# CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
-The CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample provides governance
-guard-rails using [Azure Policy](../../policy/overview.md) that help you assess specific CIS
-Microsoft Azure Foundations Benchmark recommendations. This blueprint helps customers deploy a core
-set of policies for any Azure-deployed architecture that must implement CIS Microsoft Azure
-Foundations Benchmark v1.1.0 recommendations.
+The CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample provides governance guardrails
+using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
+Foundations Benchmark recommendations. This blueprint helps customers deploy a core set of policies
+for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations Benchmark
+v1.1.0 recommendations.
## Recommendation mapping
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-3-0.md
# CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
-The CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample provides governance
-guard-rails using [Azure Policy](../../policy/overview.md) that help you assess specific CIS
-Microsoft Azure Foundations Benchmark v1.3.0 recommendations. This blueprint helps customers deploy
-a core set of policies for any Azure-deployed architecture that must implement CIS Microsoft Azure
-Foundations Benchmark v1.3.0 recommendations.
+The CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample provides governance guardrails
+using [Azure Policy](../../policy/overview.md) that help you assess specific CIS Microsoft Azure
+Foundations Benchmark v1.3.0 recommendations. This blueprint helps customers deploy a core set of
+policies for any Azure-deployed architecture that must implement CIS Microsoft Azure Foundations
+Benchmark v1.3.0 recommendations.
## Recommendation mapping
The following table provides a list of the blueprint artifact parameters:
|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Search services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in App Services should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/securityRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Network/networkSecurityGroups/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Administrative operations (Microsoft.Sql/servers/firewallRules/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Only approved VM extensions should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Azure Defender for container registries should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Managed identity should be used in your API App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your API app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
-|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/write)|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/delete) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
+|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: An activity log alert should exist for specific Policy operations (Microsoft.Authorization/policyAssignments/write) |For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
|CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Authentication should be enabled on your Function app|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Diagnostic logs in Data Lake Analytics should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) | |CIS Microsoft Azure Foundations Benchmark v1.3.0|Policy Assignment|Effect for policy: Storage accounts should allow access from trusted Microsoft services|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md) |
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cmmc-l3.md
# CMMC Level 3 blueprint sample
-The CMMC Level 3 blueprint sample provides governance guard-rails using
+The CMMC Level 3 blueprint sample provides governance guardrails using
[Azure Policy](../../policy/overview.md) that help you assess specific [Cybersecurity Maturity Model Certification (CMMC) framework](https://www.acq.osd.mil/cmmc/index.html) controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
The following table provides a list of the blueprint artifact parameters:
|Artifact name|Artifact type|Parameter name|Description| |-|-|-|-|
-|CMMC Level 3|Policy Assignment|Include Arc-connected servers when evaluating guest configuration policies|By selecting 'true,' you agree to be charged monthly per Arc connected machine; for more information, visit https://aka.ms/policy-pricing|
+|CMMC Level 3|Policy Assignment|Include Arc-connected servers when evaluating guest configuration policies|By selecting 'true', you agree to be charged monthly per Arc connected machine; for more information, visit https://aka.ms/policy-pricing|
|CMMC Level 3|Policy Assignment|List of users that must be excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded in the Administrators local group; Ex: Administrator; myUser1; myUser2| |CMMC Level 3|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2| |CMMC Level 3|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VMs agents should report|
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md
assessment on virtual machines, virtual machine scale sets, SQL Database servers
Instance servers. These policy definitions also audit configuration of diagnostic logs to provide insight into operations that are performed within Azure resources. These insights provide real-time information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you leverage
+remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
Azure Sentinel and Azure Security Center as well. - \[Preview\]: Vulnerability Assessment should be enabled on Virtual Machines
settings are enabled or not.
## CM-7 (2) Least Functionality | Prevent Program Execution Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control can run in an enforcement mode that prohibits non-approved application from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allow list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines ## CM-7 (5) Least Functionality | Authorized Software / Whitelisting Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allow list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive application controls for defining safe applications should be enabled on your machines
been configured.
## CM-11 User-Installed Software Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allow list is recommended
-but has not yet been configured.
+definition that helps you monitor virtual machines where an application allowlist is recommended but
+has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines
ensure necessary contingency controls are in place.
- Audit virtual machines without disaster recovery configured
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
+## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
This blueprint assigns Azure Policy definitions that audit the organization's system backup information to the alternate storage site electronically. For physical shipment of storage metadata,
vulnerabilities in your deployed resources.
## SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
This blueprint assigns policy definitions that help you ensure applications are
version of HTTP, Java, PHP, Python, and TLS. This blueprint also assigns a policy definition that ensures that Kubernetes Services is upgraded to its non-vulnerable version. -- Ensure that 'HTTP Version' is the latest, if used to run the Api app
+- Ensure that 'HTTP Version' is the latest, if used to run the API app
- Ensure that 'HTTP Version' is the latest, if used to run the Function app - Ensure that 'HTTP Version' is the latest, if used to run the Web app-- Ensure that 'Java version' is the latest, if used as a part of the Api app
+- Ensure that 'Java version' is the latest, if used as a part of the API app
- Ensure that 'Java version' is the latest, if used as a part of the Function app - Ensure that 'Java version' is the latest, if used as a part of the Web app-- Ensure that 'PHP version' is the latest, if used as a part of the Api app
+- Ensure that 'PHP version' is the latest, if used as a part of the API app
- Ensure that 'PHP version' is the latest, if used as a part of the WEB app-- Ensure that 'Python version' is the latest, if used as a part of the Api app
+- Ensure that 'Python version' is the latest, if used as a part of the API app
- Ensure that 'Python version' is the latest, if used as a part of the Function app - Ensure that 'Python version' is the latest, if used as a part of the Web app - Latest TLS version should be used in your API App
you can take appropriate action.
## SI-4 (12) Information System Monitoring | Automated Alerts This blueprint provides policy definitions that help you ensure data security notifications are
-properly enabled. In addition, this blueprint ensures that the standard pricing tier is enabled
-for Azure Security Center. Note that the standard pricing tier enables threat detection for networks
+properly enabled. In addition, this blueprint ensures that the Standard pricing tier is enabled
+for Azure Security Center. Note that the Standard pricing tier enables threat detection for networks
and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/deploy.md
The following table provides a list of the blueprint artifact parameters:
|-|-|-|-| |Allowed locations|Policy Assignment|Allowed Locations|This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements.| |Allowed Locations for resource groups|Policy Assignment |Allowed Locations|This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
|Deploy Log Analytics agent for Linux virtual machine scale sets|Policy assignment|Log Analytics workspace for Linux virtual machine scale sets|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |Deploy Log Analytics agent for Linux virtual machine scale sets|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/index.md
# Overview of the DoD Impact Level 4 blueprint sample
-The Department of Defense Impact Level 4 (DoD IL4) blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 4 controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that must implement DoD Impact Level 4 controls. For latest information on which Azure Clouds and Services meet DoD Impact Level 4 authorization, see [Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
+The Department of Defense Impact Level 4 (DoD IL4) blueprint sample provides governance guardrails
+using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 4
+controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
+architecture that must implement DoD Impact Level 4 controls. For the latest information on which Azure
+Clouds and Services meet DoD Impact Level 4 authorization, see
+[Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
> [!NOTE] > This blueprint sample is available in Azure Government.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md
assessment on virtual machines, virtual machine scale sets, SQL Database servers
Instance servers. These policy definitions also audit configuration of diagnostic logs to provide insight into operations that are performed within Azure resources. These insights provide real-time information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you leverage
+remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
Azure Sentinel and Azure Security Center as well. - Audit diagnostic setting
settings are enabled or not.
## CM-7 (2) Least Functionality | Prevent Program Execution Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control can run in an enforcement mode that prohibits non-approved application from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allow list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines ## CM-7 (5) Least Functionality | Authorized Software / Whitelisting Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application wallow list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive application controls for defining safe applications should be enabled on your machines
been configured.
## CM-11 User-Installed Software Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allow list solution that can block or prevent specific software from running on your
+application allowlist solution that can block or prevent specific software from running on your
virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allow list is recommended
-but has not yet been configured.
+definition that helps you monitor virtual machines where an application allowlist is recommended but
+has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines
ensure necessary contingency controls are in place.
- Audit virtual machines without disaster recovery configured
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
+## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
This blueprint assigns Azure Policy definitions that audit the organization's system backup information to the alternate storage site electronically. For physical shipment of storage metadata,
vulnerabilities in your deployed resources.
## SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
This blueprint assigns policy definitions that help you ensure applications are
version of HTTP, Java, PHP, Python, and TLS. This blueprint also assigns a policy definition that ensures that Kubernetes Services is upgraded to its non-vulnerable version. -- Ensure that 'HTTP Version' is the latest, if used to run the Api app
+- Ensure that 'HTTP Version' is the latest, if used to run the API app
- Ensure that 'HTTP Version' is the latest, if used to run the Function app - Ensure that 'HTTP Version' is the latest, if used to run the Web app-- Ensure that 'Java version' is the latest, if used as a part of the Api app
+- Ensure that 'Java version' is the latest, if used as a part of the API app
- Ensure that 'Java version' is the latest, if used as a part of the Function app - Ensure that 'Java version' is the latest, if used as a part of the Web app-- Ensure that 'PHP version' is the latest, if used as a part of the Api app
+- Ensure that 'PHP version' is the latest, if used as a part of the API app
- Ensure that 'PHP version' is the latest, if used as a part of the WEB app-- Ensure that 'Python version' is the latest, if used as a part of the Api app
+- Ensure that 'Python version' is the latest, if used as a part of the API app
- Ensure that 'Python version' is the latest, if used as a part of the Function app - Ensure that 'Python version' is the latest, if used as a part of the Web app - Latest TLS version should be used in your API App
you can take appropriate action.
## SI-4 (12) Information System Monitoring | Automated Alerts This blueprint provides policy definitions that help you ensure data security notifications are
-properly enabled. In addition, this blueprint ensures that the standard pricing tier is enabled for
-Azure Security Center. Note that the standard pricing tier enables threat detection for networks and
+properly enabled. In addition, this blueprint ensures that the Standard pricing tier is enabled for
+Azure Security Center. Note that the Standard pricing tier enables threat detection for networks and
virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure Security Center.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/deploy.md
The following table provides a list of the blueprint artifact parameters:
|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that '.NET Framework' version is the latest, if used as a part of the Function App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL managed instances|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the Api app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
+|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
|DoD Impact Level 5|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Only secure connections to your Redis Cache should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
The following table provides a list of the blueprint artifact parameters:
|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Api app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
+|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities should be remediated by a Vulnerability Assessment solution|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MySQL|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that '.NET Framework' version is the latest, if used as a part of the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: System updates should be installed on your machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Api app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
+|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced data security settings for SQL server should contain an email address to receive security alerts|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Api app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
+|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
|DoD Impact Level 5|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects| |DoD Impact Level 5|Policy Assignment|Effect for policy: Access through Internet facing endpoint should be restricted|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/index.md
# Overview of the DoD Impact Level 5 blueprint sample
-The Department of Defense Impact Level 5 (DoD IL5) blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 5 controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that must implement DoD Impact Level 5 controls. For latest information on which Azure Clouds and Services meet DoD Impact Level 5 authorization, see [Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
+The Department of Defense Impact Level 5 (DoD IL5) blueprint sample provides governance guardrails
+using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 5
+controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
+architecture that must implement DoD Impact Level 5 controls. For the latest information on which
+Azure clouds and services meet DoD Impact Level 5 authorization, see
+[Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
> [!NOTE] > This blueprint sample is available in Azure Government.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md
assessment on virtual machines, virtual machine scale sets, SQL Database servers
Instance servers. These policy definitions also audit configuration of diagnostic logs to provide insight into operations that are performed within Azure resources. These insights provide real-time information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you leverage
+remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
Azure Sentinel and Azure Security Center as well. - \[Preview\]: Vulnerability Assessment should be enabled on Virtual Machines
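As an illustrative sketch of the diagnostic-log side of this control (not part of the blueprint itself), resource logs can be routed to a Log Analytics workspace with the Azure CLI so that Security Center and Azure Sentinel can analyze them. The resource ID, workspace ID, and log category below are placeholders.

```azurecli
# Route a resource's logs to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name send-logs-to-workspace \
  --resource "<resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "<log-category>", "enabled": true}]'
```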
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can run in an enforcement mode that prohibits non-approved applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowed list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive Application Controls should be enabled on virtual machines
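If you want to audit this outside of the blueprint, one way, shown here as a hedged sketch, is to look up the built-in definition and assign it at subscription scope with the Azure CLI. The display-name filter and the assignment name are assumptions; confirm the definition name returned in your environment before assigning it.

```azurecli
# Find the built-in Azure Policy definition for adaptive application controls
az policy definition list \
  --query "[?contains(displayName, 'Adaptive application controls')].{name:name, displayName:displayName}" \
  --output table

# Assign the definition returned above at subscription scope (names are placeholders)
az policy assignment create \
  --name audit-adaptive-app-controls \
  --policy "<definition-name-from-previous-command>" \
  --scope "/subscriptions/<subscription-id>"
```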
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowed list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive Application Controls should be enabled on virtual machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowed list is recommended
+definition that helps you monitor virtual machines where an application allowlist is recommended
but has not yet been configured. - Adaptive Application Controls should be enabled on virtual machines
ensure necessary contingency controls are in place.
- Audit virtual machines without disaster recovery configured
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
+## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
This blueprint assigns Azure Policy definitions that audit the organization's system backup information to the alternate storage site electronically. For physical shipment of storage metadata,
vulnerabilities in your deployed resources.
## SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
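If the audit flags a virtual network without the Standard tier, one possible remediation sketch with the Azure CLI is shown below. The resource group, plan, and virtual network names are placeholders, and DDoS Protection Standard carries an additional cost.

```azurecli
# Create a DDoS Protection Standard plan
az network ddos-protection create --resource-group my-rg --name my-ddos-plan

# Associate the plan with an existing virtual network and enable DDoS protection on it
az network vnet update --resource-group my-rg --name my-vnet \
  --ddos-protection-plan my-ddos-plan --ddos-protection true
```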
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/deploy.md
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Allowed locations for resources and resource groups|List of Azure locations that your organization can specify when deploying resources. This provided value is also used by the 'Allowed locations' policy within the policy initiative.| |\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL managed instances|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).| |\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL servers|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
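The SQL auditing retention parameter above maps to server-level auditing settings. A minimal Azure CLI sketch that applies the same 180-day default is shown below; the resource names are placeholders, and the exact parameters may vary by CLI version.

```azurecli
# Enable server-level SQL auditing to a storage account with the blueprint's default 180-day retention
az sql server audit-policy update \
  --resource-group my-rg \
  --name my-sql-server \
  --state Enabled \
  --blob-storage-target-state Enabled \
  --storage-account my-audit-storage \
  --retention-days 180
```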
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/index.md
# Overview of the FedRAMP High blueprint sample
-The FedRAMP High blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help you assess specific FedRAMP High controls. This blueprint helps customers deploy a core
-set of policies for any Azure-deployed architecture that must implement FedRAMP High controls.
+The FedRAMP High blueprint sample provides governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help you assess specific FedRAMP High controls.
+This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
+that must implement FedRAMP High controls.
## Control mapping
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can run in an enforcement mode that prohibits non-approved applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowed list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive Application Controls should be enabled on virtual machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowed list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive Application Controls should be enabled on virtual machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowed list is recommended
+definition that helps you monitor virtual machines where an application allowlist is recommended
but has not yet been configured. - Adaptive Application Controls should be enabled on virtual machines
vulnerabilities in your deployed resources.
## SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/deploy.md
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md)|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md) |
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
## Next steps
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/index.md
# Overview of the FedRAMP Moderate blueprint sample
-The FedRAMP Moderate blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help you assess specific FedRAMP Moderate controls. This blueprint helps customers deploy a
-core set of policies for any Azure-deployed architecture that must implement FedRAMP Moderate
-controls.
+The FedRAMP Moderate blueprint sample provides governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help you assess specific FedRAMP Moderate controls.
+This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
+that must implement FedRAMP Moderate controls.
## Control mapping
information, see [Azure Policy](../../../policy/overview.md).
## Next steps
-You've reviewed the overview and of the FedRAMP Moderate blueprint sample. Next, visit the
+You've reviewed the overview of the FedRAMP Moderate blueprint sample. Next, visit the
following articles to learn about the control mapping and how to deploy this sample: > [!div class="nextstepaction"]
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/hipaa-hitrust-9-2.md
# HIPAA HITRUST 9.2 blueprint sample
-The HIPAA HITRUST 9.2 blueprint sample provides governance guard-rails using
-[Azure Policy](../../policy/overview.md) that help you assess specific HIPAA HITRUST 9.2
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement HIPAA HITRUST 9.2 controls.
+The HIPAA HITRUST 9.2 blueprint sample provides governance guardrails using
+[Azure Policy](../../policy/overview.md) that help you assess specific HIPAA HITRUST 9.2 controls.
+This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
+that must implement HIPAA HITRUST 9.2 controls.
## Control mapping
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
quality and ready to deploy today to assist you in meeting your various complian
| [Azure Security Benchmark](./azure-security-benchmark.md) | Provides guardrails for compliance to [Azure Security Benchmark](../../../security/benchmarks/overview.md). | | [Azure Security Benchmark Foundation](./azure-security-benchmark-foundation/index.md) | Deploys and configures Azure Security Benchmark Foundation. | | [Canada Federal PBMM](./canada-federal-pbmm/index.md) | Provides guardrails for compliance to Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM). |
-| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md)| Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
-| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md)| Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
-| [CMMC Level 3](./cmmc-l3.md)| Provides guardrails for compliance with CMMC Level 3. |
+| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
+| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
+| [CMMC Level 3](./cmmc-l3.md) | Provides guardrails for compliance with CMMC Level 3. |
| [DoD Impact Level 4](./dod-impact-level-4/index.md) | Provides a set of policies to help comply with DoD Impact Level 4. | | [DoD Impact Level 5](./dod-impact-level-5/index.md) | Provides a set of policies to help comply with DoD Impact Level 5. | | [FedRAMP Moderate](./fedramp-m/index.md) | Provides a set of policies to help comply with FedRAMP Moderate. |
quality and ready to deploy today to assist you in meeting your various complian
| [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md) | Provides a set of policies to help comply with HIPAA HITRUST. | | [IRS 1075](./irs-1075/index.md) | Provides guardrails for compliance with IRS 1075.| | [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. |
-| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guard-rails that help towards ISO 27001 attestation. |
+| [ISO 27001 Shared Services](./iso27001-shared/index.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward ISO 27001 attestation. |
| [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. | | [Media](./medi) | Provides a set of policies to help comply with Media MPAA. | | [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
quality and ready to deploy today to assist you in meeting your various complian
| [NIST SP 800-171 R2](./nist-sp-800-171-r2.md) | Provides guardrails for compliance with NIST SP 800-171 R2. | | [PCI-DSS v3.2.1](./pci-dss-3.2.1/index.md) | Provides a set of policies to aide in PCI-DSS v3.2.1 compliance. | | [SWIFT CSP-CSCF v2020](./swift-2020/index.md) | Aides in SWIFT CSP-CSCF v2020 compliance. |
-| [UK OFFICIAL and UK NHS Governance](./ukofficial/index.md) | Provides a set of compliant infrastructure patterns and policy guard-rails that help towards UK OFFICIAL and UK NHS attestation. |
+| [UK OFFICIAL and UK NHS Governance](./ukofficial/index.md) | Provides a set of compliant infrastructure patterns and policy guardrails that help toward UK OFFICIAL and UK NHS attestation. |
| [CAF Foundation](./caf-foundation/index.md) | Provides a set of controls to help you manage your cloud estate in alignment with the [Microsoft Cloud Adoption Framework for Azure (CAF)](/azure/architecture/cloud-adoption/governance/journeys/index). | | [CAF Migrate landing zone](./caf-migrate-landing-zone/index.md) | Provides a set of controls to help you set up for migrating your first workload and manage your cloud estate in alignment with the [Microsoft Cloud Adoption Framework for Azure (CAF)](/azure/architecture/cloud-adoption/migrate/index). | ## Samples strategy :::image type="complex" source="../media/blueprint-samples-strategy.png" alt-text="Diagram of where the Blueprint samples fit in for architectural complexity vs compliance requirements." border="false":::
- Describes a coordinate system where architectural complexity is on the X axis and compliance requirements are on the Y axis. As architectural complexity and compliance requirements increase, adopt standard Blueprint samples from the portal designated in region E. For customers getting started with Azure use Cloud Adoption Framework (C A F) based Foundation and Landing Zone blueprints designated by region A and B. The remaining space is attributed to custom blueprints created by customers are partners for regions C, D, and F.
+ Describes a coordinate system where architectural complexity is on the X axis and compliance requirements are on the Y axis. As architectural complexity and compliance requirements increase, adopt standard Blueprint samples from the portal designated in region E. For customers getting started with Azure, use Cloud Adoption Framework (C A F) based Foundation and Landing Zone blueprints designated by regions A and B. The remaining space is attributed to custom blueprints created by customers or partners for regions C, D, and F.
:::image-end::: The CAF foundation and the CAF Migrate landing zone blueprints assume that the customer is preparing
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/control-mapping.md
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can run in an enforcement mode that prohibits non-approved applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowed list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive Application Controls should be enabled on virtual machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowed list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive Application Controls should be enabled on virtual machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowed list is recommended
+definition that helps you monitor virtual machines where an application allowlist is recommended
but has not yet been configured. - Adaptive Application Controls should be enabled on virtual machines
vulnerabilities in your deployed resources.
## 9.3.16.4 SC-5 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/deploy.md
it away from alignment with NIST SP 800-53 controls.
1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a **Version** for your copy of the blueprint sample. This property is useful for if you make a
- modification later. Provide **Change notes** such as "First version published from the NIST SP
+ modification later. Provide **Change notes** such as "First version published from the NIST SP
   800-53 R4 blueprint sample." Then select **Publish** at the bottom of the page.

## Assign the sample copy
The following table provides a list of the blueprint artifact parameters:
|Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md)|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md) |
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
## Next steps
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075/index.md
# Overview of the IRS 1075 blueprint sample
-The IRS 1075 blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help you assess specific IRS 1075 controls. This blueprint helps customers deploy a
-core set of policies for any Azure-deployed architecture that must implement IRS 1075
-controls.
+The IRS 1075 blueprint sample provides governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help you assess specific IRS 1075 controls. This
+blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that
+must implement IRS 1075 controls.
## Control mapping
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in IRS 1075. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
+The control mapping section provides details on policies included within this blueprint and how
+these policies address various controls in IRS 1075. When assigned to an architecture,
+resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
information, see [Azure Policy](../../../policy/overview.md). ## Next steps
-You've reviewed the overview and of the IRS 1075 blueprint sample. Next, visit the
+You've reviewed the overview of the IRS 1075 blueprint sample. Next, visit the
following articles to learn about the control mapping and how to deploy this sample: > [!div class="nextstepaction"]
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/deploy.md
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
|\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Allowed locations for resources and resource groups|List of Azure locations that your organization can specify when deploying resources. This provided value is also used by the 'Allowed locations' policy within the policy initiative.| |\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL managed instances|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).| |\[Preview\]: Audit Australian Government ISM PROTECTED controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL servers|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
# Overview of the Australian Government ISM PROTECTED blueprint sample
-ISM Governance blueprint sample provides a set of governance guard-rails using
-[Azure Policy](../../../policy/overview.md) which help towards ISM PROTECTED attestation (Feb 2020
+The ISM Governance blueprint sample provides a set of governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help toward ISM PROTECTED attestation (Feb 2020
version). This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture requiring accreditation or compliance with the ISM framework.
governance Iso 27001 2013 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso-27001-2013.md
# ISO 27001 blueprint sample
-The ISO 27001 blueprint sample provides governance guard-rails using
+The ISO 27001 blueprint sample provides governance guardrails using
[Azure Policy](../../policy/overview.md) that help you assess specific ISO 27001 controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that must implement ISO 27001 controls.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
contained within the information system.
The blueprint helps you ensure information transfer with Azure services is secure by assigning two [Azure Policy](../../../policy/overview.md) definitions to audit insecure connections to storage
-accounts and Redis Cache.
+accounts and Azure Cache for Redis.
- Only secure connections to your Azure Cache for Redis should be enabled - Secure transfer to storage accounts should be enabled
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/index.md
Services](../iso27001-shared/index.md) blueprint sample.
## Architecture The ISO 27001 App Service Environment/SQL Database workload blueprint sample deploys a platform as
-a service based web environment. The environment can be used to host multiple web applications, web
+a service-based web environment. The environment can be used to host multiple web applications, web
APIs, and SQL Database instances that follow the ISO 27001 standards. This blueprint sample depends on the [ISO 27001 Shared Services](../iso27001-shared/index.md) blueprint sample.
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
This blueprint helps you enforce your policy on the use of cryptographic controls
and audit use of weak cryptographic settings. Understanding where your Azure resources may have non-optimal cryptographic configurations can help you take corrective actions to ensure resources are configured in accordance with your information security policy. Specifically, the policies
-assigned by this blueprint require encryption for blob storage accounts and data lake storage
+assigned by this blueprint require encryption for blob storage accounts and Data Lake storage
accounts; require transparent data encryption on SQL databases; audit missing encryption on storage accounts, SQL databases, virtual machine disks, and automation account variables; audit insecure connections to storage accounts, Function Apps, Web App, API Apps, and Redis Cache; audit weak
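For example, transparent data encryption, which these policies audit on SQL databases, can be confirmed or enabled per database with the Azure CLI. This is only an illustrative sketch with placeholder names, not part of the blueprint artifacts.

```azurecli
# Check the transparent data encryption (TDE) state of a database
az sql db tde show --resource-group my-rg --server my-sql-server --database my-db

# Enable TDE if the audit reports it as disabled
az sql db tde set --resource-group my-rg --server my-sql-server --database my-db --status Enabled
```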
contained within the information system.
The blueprint helps you ensure information transfer with Azure services is secure by assigning two [Azure Policy](../../../policy/overview.md) definitions to audit insecure connections to storage
-accounts and Redis Cache.
+accounts and Azure Cache for Redis.
- Only secure connections to your Azure Cache for Redis should be enabled - Secure transfer to storage accounts should be enabled
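As a hedged remediation sketch for these two audits (placeholder names; the Redis change uses a generic `--set` property update, which may differ by CLI version):

```azurecli
# Require HTTPS-only (secure transfer) on a storage account
az storage account update --resource-group my-rg --name mystorageacct --https-only true

# Disable the non-SSL port on an Azure Cache for Redis instance so only secure connections are allowed
az redis update --resource-group my-rg --name my-redis --set enableNonSslPort=false
```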
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/index.md
# Overview of the ISO 27001 Shared Services blueprint sample
The ISO 27001 Shared Services blueprint sample provides a set of compliant infrastructure patterns
-and policy guard-rails that help towards ISO 27001 attestation. This blueprint helps customers
-deploy cloud-based architectures that offer solutions to scenarios that have accreditation or
-compliance requirements.
+and policy guardrails that help toward ISO 27001 attestation. This blueprint helps customers deploy
+cloud-based architectures that offer solutions to scenarios that have accreditation or compliance
+requirements.
The [ISO 27001 App Service Environment/SQL Database workload](../iso27001-ase-sql-workload/index.md) blueprint sample extends this sample.
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/index.md
# Overview of the Media blueprint sample
-Media blueprint sample provides a
-set of governance guard-rails using [Azure Policy](../../../policy/overview.md) that help towards
+The Media blueprint sample provides a set of governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help toward
[Media](https://www.hhs.gov/hipaa/for-professionals/security/laws-regulations/index.html) attestation.
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/new-zealand-ism.md
# New Zealand ISM Restricted blueprint sample
-The New Zealand ISM Restricted blueprint sample provides governance guard-rails using
+The New Zealand ISM Restricted blueprint sample provides governance guardrails using
[Azure Policy](../../policy/overview.md) that help you assess specific [New Zealand Information Security Manual](https://www.nzism.gcsb.govt.nz/) controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that must
The following table provides a list of the blueprint artifact parameters:
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Only secure connections to your Azure Cache for Redis should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines missing any of specified members in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines missing any of specified members in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../policy/concepts/guest-configuration.md)| |New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: [Preview]: Log Analytics Agent should be enabled for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../policy/concepts/guest-configuration.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Storage accounts should restrict network access|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Deploy - Configure Dependency agent to be enabled on Windows virtual machine scale sets|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../policy/concepts/guest-configuration.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have extra accounts in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have extra accounts in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|WAF mode requirement for Azure Front Door Service|The Prevention or Detection mode must be enabled on the Azure Front Door service|
The following table provides a list of the blueprint artifact parameters:
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: A vulnerability assessment solution should be enabled on your virtual machines|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Web Application Firewall (WAF) should be enabled for Application Gateway|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows web servers that are not using secure communication protocols|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows web servers that are not using secure communication protocols|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Minimum TLS version for Windows web servers|Windows web servers with lower TLS versions will be assessed as non-compliant| |New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Linux OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../policy/concepts/guest-configuration.md)| |New Zealand ISM Restricted|Policy Assignment|Optional: List of custom VM images that have supported Windows OS to add to scope additional to the images in the gallery for policy: Log Analytics agent should be enabled in virtual machine scale sets for listed virtual machine images|For more information on Guest Configuration, visit [https://aka.ms/gcpol](../../policy/concepts/guest-configuration.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have the specified members in the Administrators group|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines that have the specified members in the Administrators group|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Azure subscriptions should have a log profile for Activity Log|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
The following table provides a list of the blueprint artifact parameters:
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Service Fabric clusters should only use Azure Active Directory for client authentication|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Windows machines on which Windows Defender Exploit Guard is not enabled|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Compliance state to report for Windows machines on which Windows Defender Exploit Guard is not available|Windows Defender Exploit Guard is only available starting with Windows 10/Windows Server with update 1709. Setting this value to 'Non-Compliant' shows machines with older versions on which Windows Defender Exploit Guard is not available (such as Windows Server 2012 R2) as non-compliant. Setting this value to 'Compliant' shows these machines as compliant.| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
The following table provides a list of the blueprint artifact parameters:
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that allow remote connections from accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that allow remote connections from accounts without passwords|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that allow remote connections from accounts without passwords|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Enforce password history for Windows VM local accounts|Specifies limits on password reuse - how many times a new password must be created for a user account before the password can be repeated|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Windows machines should meet requirements for 'Security Settings - Account Policies'|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Maximum password age for Windows VM local accounts|Specifies the maximum number of days that may elapse before a user account password must be changed; the format of the value is two integers separated by a comma, denoting an inclusive range| |New Zealand ISM Restricted|Policy Assignment|Minimum password age for Windows VM local accounts|Specifies the minimum number of days that must elapse before a user account password can be changed| |New Zealand ISM Restricted|Policy Assignment|Minimum password length for Windows VM local accounts|Specifies the minimum number of characters that a user account password may contain| |New Zealand ISM Restricted|Policy Assignment|Password must meet complexity requirements for Windows VM local accounts|Specifies whether a user account password must be complex; if required, a complex password must not contain part of the user's account name or full name; be at least 6 characters long; contain a mix of uppercase, lowercase, number, and non-alphabetic characters| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Internet-facing virtual machines should be protected with network security groups|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Audit Linux machines that have accounts without passwords|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
-|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that have accounts without passwords|By selecting 'true,' you agree to be charged monthly per Arc connected machine|
+|New Zealand ISM Restricted|Policy Assignment|Include Arc-connected servers when evaluating policy: Audit Linux machines that have accounts without passwords|By selecting 'true', you agree to be charged monthly per Arc connected machine|
|New Zealand ISM Restricted|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)| |New Zealand ISM Restricted|Policy Assignment|Effect for policy: [Preview]: All Internet traffic should be routed via your deployed Azure Firewall|For more information about effects, visit [https://aka.ms/policyeffects](../../policy/concepts/effects.md)|
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-171-r2.md
# NIST SP 800-171 R2 blueprint sample
-The NIST SP 800-171 R2 blueprint sample provides governance guard-rails using
+The NIST SP 800-171 R2 blueprint sample provides governance guardrails using
[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-171 R2 requirements or controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that must implement NIST SP 800-171 R2 requirements or controls.
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-53-r4.md
# NIST SP 800-53 R4 blueprint sample
-The NIST SP 800-53 R4 blueprint sample provides governance guard-rails using
-[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-53 R4
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement NIST SP 800-53 R4 controls.
+The NIST SP 800-53 R4 blueprint sample provides governance guardrails using
+[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-53 R4 controls.
+This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
+that must implement NIST SP 800-53 R4 controls.
## Control mapping
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../policy/concepts/effects.md)|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../policy/concepts/effects.md) |
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
## Next steps
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
who has access to Azure resources. Understanding where custom Azure RBAC rules a
help you verify need and proper implementation, as custom Azure RBAC rules are error prone. This blueprint also assigns [Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory authentication for SQL Servers. Using Azure Active Directory authentication
-simplifies permission management and centralizes identity management of database users and other
-Microsoft
-services.
+simplifies permission management and centralizes identity management of database users and other
+Microsoft services.
- External accounts with owner permissions should be removed from your subscription - External accounts with write permissions should be removed from your subscription
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md
help you verify need and proper implementation, as custom Azure RBAC rules are e
- Audit VMs that do not use managed disks - Service Fabric clusters should only use Azure Active Directory for client authentication
-## 2.9A Account Management | Account Monitoring / Atypical Usage
+## 2.9A Account Management | Account Monitoring / Atypical Usage
Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines, reducing exposure to attacks while providing easy access to connect to VMs when needed. All JIT
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can run in an enforcement mode that prohibits non-approved applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowed list is recommended but has not yet been configured.
+virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive application controls for defining safe applications should be enabled on your machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control helps you create approved application lists for your virtual machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowed list is recommended but has not yet
+helps you monitor virtual machines where an application allowlist is recommended but has not yet
been configured. - Adaptive application controls for defining safe applications should be enabled on your machines
Adaptive application control in Azure Security Center is an intelligent, automat
application filtering solution that can block or prevent specific software from running on your virtual machines. Application control can help you enforce and monitor compliance with software restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowed list is recommended
+definition that helps you monitor virtual machines where an application allowlist is recommended
but has not yet been configured. - Adaptive application controls for defining safe applications should be enabled on your machines
vulnerabilities in your deployed resources.
## 1.3 Denial of Service Protection
-Azure's distributed denial of service (DDoS) standard tier provides additional features and
+Azure's distributed denial of service (DDoS) Standard tier provides additional features and
mitigation capabilities over the basic service tier. These additional features include Azure Monitor integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS standard tier is
+[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
enabled. Understanding the capability difference between the service tiers can help you select the best solution to address denial of service protections for your Azure environment.
can support just-in-time access but have not yet been configured.
## 2.1, 2.4, 2.4A, 2.5A, and 2.6 Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-This blueprint helps you protect the confidential and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanism implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements or protecting information
+This blueprint helps you protect the confidentiality and integrity of transmitted information by
+assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
+cryptographic mechanisms implemented for communications protocols. Ensuring communications are
+properly encrypted can help you meet your organization's requirements for protecting information
from unauthorized disclosure and modification. - API App should only be accessible over HTTPS
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/deploy.md
The following table provides a list of the blueprint artifact parameters:
|\[Preview\]: Audit SWIFT CSP-CSCF v2020 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor resource logs categories](../../../../azure-monitor/essentials/resource-logs-categories.md#supported-log-categories-per-resource-type).| |\[Preview\]: Audit SWIFT CSP-CSCF v2020 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Connected workspace IDs|A semicolon-separated list of the workspace IDs that the Log Analytics agent should be connected to| |\[Preview\]: Audit SWIFT CSP-CSCF v2020 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Audit SWIFT CSP-CSCF v2020 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Domain Name (FQDN)|The fully qualified domain name (FQDN) that the Windows VMs should be joined to|
+|\[Preview\]: Audit SWIFT CSP-CSCF v2020 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Domain Name (FQDN) |The fully qualified domain name (FQDN) that the Windows VMs should be joined to|
|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]| |Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.| |Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md)|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention)|Retention days (optional, 180 days if unspecified)|
+|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md) |
+|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.| |Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist)|The resource group that the storage account will be created in. This resource group must already exist.|
+|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
## Next steps
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/index.md
# Overview of the SWIFT CSP-CSCF v2020 blueprint sample
-The SWIFT CSP-CSCF v2020 blueprint sample provides governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help you assess specific SWIFT CSP controls. This blueprint helps customers deploy a
-core set of policies for any Azure-deployed architecture that must implement SWIFT CSP
-controls.
+The SWIFT CSP-CSCF v2020 blueprint sample provides governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help you assess specific SWIFT CSP controls. This
+blueprint helps customers deploy a core set of policies for any Azure-deployed architecture that
+must implement SWIFT CSP controls.
## Control mapping
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in the latest SWIFT CSP-CSCF. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
+The control mapping section provides details on policies included within this blueprint and how
+these policies address various controls in the latest SWIFT CSP-CSCF. When assigned to an
+architecture, resources are evaluated by Azure Policy for non-compliance with assigned policies. For
+more information, see [Azure Policy](../../../policy/overview.md).
## Next steps
-You've reviewed the overview and of the SWIFT CSP-CSCF v2020 blueprint sample. Next, visit the
+You've reviewed the overview of the SWIFT CSP-CSCF v2020 blueprint sample. Next, visit the
following articles to learn about the control mapping and how to deploy this sample: > [!div class="nextstepaction"]
Additional articles about blueprints and how to use them:
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/control-mapping.md
audit requirements** built-in policy initiative.
The blueprint helps you ensure information transfer with Azure services is secure by assigning [Azure Policy](../../../policy/overview.md) definitions that audit insecure connections to storage
-accounts and Redis Cache.
+accounts and Azure Cache for Redis.
- Only secure connections to your Redis Cache should be enabled - Secure transfer to storage accounts should be enabled
This blueprint helps you enforce your policy on the use of cryptograph controls
and audit use of weak cryptographic settings. Understanding where your Azure resources may have non-optimal cryptographic configurations can help you take corrective actions to ensure resources are configured in accordance with your information security policy. Specifically, the policies
-assigned by this blueprint require encryption for data lake storage accounts; require transparent
+assigned by this blueprint require encryption for Data Lake Storage accounts; require transparent
data encryption on SQL databases; audit missing encryption on storage accounts, SQL databases, virtual machine disks, and automation account variables; audit insecure connections to storage
-accounts and Redis Cache; audit weak virtual machine password encryption; and audit unencrypted
-Service Fabric communication.
+accounts and Azure Cache for Redis; audit weak virtual machine password encryption; and audit
+unencrypted Service Fabric communication.
- Disk encryption should be applied on virtual machines - Automation account variables should be encrypted
application controls on virtual machines.
This blueprint helps you ensure system events are logged by assigning [Azure Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-An assigned policy also audits if virtual machines aren't sending logs to a specified log analytics
+An assigned policy also audits if virtual machines aren't sending logs to a specified Log Analytics
workspace. - Advanced data security should be enabled on your SQL servers
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/deploy.md
The following table provides a list of the blueprint artifact parameters:
Artifact name|Artifact type|Parameter name|Description| |-|-|-|-|
-|Blueprint initiative for UK OFFICIAL or UK NHS|Policy assignment |Resource types to audit diagnostic logs (Policy: Blueprint initiative for UK OFFICIAL or UK NHS) |List of resource types to audit if diagnostic log setting is note enabled. For acceptable values, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../../../../azure-monitor/essentials/resource-logs-schema.md). |
+|Blueprint initiative for UK OFFICIAL or UK NHS|Policy assignment |Resource types to audit diagnostic logs (Policy: Blueprint initiative for UK OFFICIAL or UK NHS) |List of resource types to audit if diagnostic log setting is not enabled. For acceptable values, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../../../../azure-monitor/essentials/resource-logs-schema.md). |
|\[Preview\]: Deploy Log Analytics Agent for Linux VMs |Policy assignment |Optional: List of VM images that have supported Linux OS to add to scope (Policy: \[Preview\]: Deploy Log Analytics Agent for Linux VMs) |(Optional) Default value is _none_. For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). | |\[Preview\]: Deploy Log Analytics Agent for Windows VMs |Policy assignment |Optional: List of VM images that have supported Windows OS to add to scope (Policy: \[Preview\]: Deploy Log Analytics Agent for Windows VMs) |(Optional) Default value is _none_. For more information, see [Create a Log Analytics workspace in the Azure portal](../../../../azure-monitor/logs/quick-create-workspace.md). |
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial/index.md
# Overview of the UK OFFICIAL and UK NHS blueprint samples
-The UK OFFICIAL and UK NHS blueprint samples provides a set of governance guard-rails using [Azure Policy](../../../policy/overview.md)
-that help towards UK OFFICIAL and UK NHS attestation. These blueprint samples help customers deploy a
-core set of policies for any Azure-deployed architecture requiring accreditation or compliance with
-the UK OFFICIAL and UK NHS frameworks.
+The UK OFFICIAL and UK NHS blueprint samples provide a set of governance guardrails using
+[Azure Policy](../../../policy/overview.md) that help toward UK OFFICIAL and UK NHS attestation.
+These blueprint samples help customers deploy a core set of policies for any Azure-deployed
+architecture requiring accreditation or compliance with the UK OFFICIAL and UK NHS frameworks.
## Control mapping
governance General https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/troubleshoot/general.md
channels for more support:
- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/azuresupport) – the official Microsoft Azure
+- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure
account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. - If you need more help, you can file an Azure support incident. Go to the
- [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+ [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
governance Create From Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/tutorials/create-from-sample.md
resources directly.
_Assignment-two-rgs-with-role-assignments_ blueprint assignment and then select it. From this page, we can see the assignment succeeded and the list of created resources along with
- their blueprint lock state. If the assignment is updated, the **Assignment operation** drop-down
- shows details about the deployment of each definition version. Each listed resource that was
+ their blueprint lock state. If the assignment is updated, the **Assignment operation** dropdown
+ list shows details about the deployment of each definition version. Each listed resource that was
created can be selected and opens that resource's property page. 1. Select the **ProductionRG** resource group.
resources directly.
1. Select the **Deny assignments** tab.
- The blueprint assignment created a [deny assignment](../../../role-based-access-control/deny-assignments.md)
- on the deployed resource group to enforce the _Read Only_ blueprint lock mode. The deny
- assignment prevents someone with appropriate rights on the _Role assignments_ tab from taking
- specific actions. The deny assignment affects _All principals_.
+ The blueprint assignment created a
+ [deny assignment](../../../role-based-access-control/deny-assignments.md) on the deployed
+ resource group to enforce the _Read Only_ blueprint lock mode. The deny assignment prevents
+ someone with appropriate rights on the _Role assignments_ tab from taking specific actions. The
+ deny assignment affects _All principals_.
1. Select the deny assignment, then select the **Denied Permissions** page on the left.
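To verify the same deny assignment from the command line instead of the portal, a minimal PowerShell sketch along these lines lists it at the resource group scope. It assumes the Az.Resources module and a placeholder subscription ID; the _ProductionRG_ name comes from the steps above.

```powershell
# Sketch: list the deny assignments that the blueprint lock created on ProductionRG.
# Assumes the Az.Resources module; replace <subscription-id> with your own subscription.
$scope = "/subscriptions/<subscription-id>/resourceGroups/ProductionRG"
Get-AzDenyAssignment -Scope $scope |
    Select-Object DenyAssignmentName, Description
```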
In this tutorial, you've learned how to create a new blueprint from a sample def
more about Azure Blueprints, continue to the blueprint lifecycle article. > [!div class="nextstepaction"]
-> [Learn about the blueprint lifecycle](../concepts/lifecycle.md)
+> [Learn about the blueprint lifecycle](../concepts/lifecycle.md)
governance Protect New Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/tutorials/protect-new-resources.md
the blueprint definition unique.
|-|-|-|-|-| |RGtoLock resource group|Resource group|Name|TestingBPLocks|Defines the name of the new resource group to apply blueprint locks to.| |RGtoLock resource group|Resource group|Location|West US 2|Defines the location of the new resource group to apply blueprint locks to.|
- |StorageAccount|Resource Manager template|storageAccountType (StorageAccount)|Standard_GRS|The storage SKU. The default value is _Standard_LRS_.|
+ |StorageAccount|Resource Manager template|storageAccountType (StorageAccount) |Standard_GRS|The storage SKU. The default value is _Standard_LRS_.|
1. After you've entered all parameters, select **Assign** at the bottom of the page.
assignment details page.
From this page, we can see that the assignment succeeded and that the resources were deployed with the new blueprint lock state. If the assignment is updated, the **Assignment operation**
- drop-down shows details about the deployment of each definition version. You can select the
+ dropdown list shows details about the deployment of each definition version. You can select the
resource group to open the property page. 1. Select the **TestingBPLocks** resource group.
In this tutorial, you've learned how to protect new resources deployed with Azur
learn more about Azure Blueprints, continue to the blueprint lifecycle article. > [!div class="nextstepaction"]
-> [Learn about the blueprint lifecycle](../concepts/lifecycle.md)
+> [Learn about the blueprint lifecycle](../concepts/lifecycle.md)
governance Create Management Group Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/create-management-group-azure-cli.md
management group can hold subscriptions or other management groups.
To learn more about management groups and how to manage your resource hierarchy, continue to: > [!div class="nextstepaction"]
-> [Manage your resources with management groups](./manage.md)
+> [Manage your resources with management groups](./manage.md)
governance Protect Resource Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/how-to/protect-resource-hierarchy.md
with policy assignments or Azure role assignments more suited to a new subscript
### Set default management group in portal
-To configure this setting in Azure portal, follow these steps:
+To configure this setting in the Azure portal, follow these steps:
1. Use the search bar to search for and select 'Management groups'.
To configure this setting in Azure portal, follow these steps:
### Set default management group with REST API To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use the
-following REST API URI and body format. Replace `{rootMgID}` with the ID of your root management
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use
+the following REST API URI and body format. Replace `{rootMgID}` with the ID of your root management
group and `{defaultGroupID}` with the ID of the management group to become the default management group:
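As a rough PowerShell sketch of the call described here - with the endpoint path, `api-version`, and `defaultManagementGroup` property taken as assumptions from the linked Hierarchy Settings reference - you could submit the request with `Invoke-AzRestMethod`:

```powershell
# Sketch only: sets the default management group through the Hierarchy Settings API.
# The endpoint path, api-version, and property name are assumptions based on the
# linked REST reference; verify them before running.
$rootMgID = "<root-management-group-id>"
$defaultGroupID = "<default-management-group-id>"
$body = @{
    properties = @{
        defaultManagementGroup = "/providers/Microsoft.Management/managementGroups/$defaultGroupID"
    }
} | ConvertTo-Json -Depth 3

Invoke-AzRestMethod -Method PUT `
    -Path "/providers/Microsoft.Management/managementGroups/$rootMgID/settings/default?api-version=2020-05-01" `
    -Payload $body
```

`Invoke-AzRestMethod` signs the request with your current Azure PowerShell context, so no separate token handling is needed.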
child management groups.
### Set require authorization in portal
-To configure this setting in Azure portal, follow these steps:
+To configure this setting in the Azure portal, follow these steps:
1. Use the search bar to search for and select 'Management groups'.
To configure this setting in Azure portal, follow these steps:
### Set require authorization with REST API To configure this setting with REST API, the
-[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use the
-following REST API URI and body format. This value is a _boolean_, so provide either **true** or
+[Hierarchy Settings](/rest/api/managementgroups/hierarchysettings) endpoint is called. To do so, use
+the following REST API URI and body format. This value is a _boolean_, so provide either **true** or
**false** for the value. A value of **true** enables this method of protecting your management group hierarchy:
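A comparable hedged sketch, assuming the `requireAuthorizationForGroupCreation` property lives on the same Hierarchy Settings endpoint, would be:

```powershell
# Sketch only: the property name and endpoint are assumptions; check the REST reference.
$rootMgID = "<root-management-group-id>"
$body = @{ properties = @{ requireAuthorizationForGroupCreation = $true } } | ConvertTo-Json
Invoke-AzRestMethod -Method PUT `
    -Path "/providers/Microsoft.Management/managementGroups/$rootMgID/settings/default?api-version=2020-05-01" `
    -Payload $body
```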
To turn the setting back off, use the same endpoint and set
## PowerShell sample
-PowerShell does not have an 'Az' command to set the default management group or set require authorization, but as a workaround you can leverage the REST API with the PowerShell sample below:
+PowerShell doesn't have an 'Az' command to set the default management group or set require
+authorization, but as a workaround you can use the REST API with the PowerShell sample below:
```powershell $root_management_group_id = "Enter the ID of root management group"
governance Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/manage.md
To delete a management group, the following requirements must be met:
1. You need write permissions on the management group ("Owner", "Contributor", or "Management Group Contributor"). To see what permissions you have, select the management group and then select
- **IAM**. To learn more on Azure roles, see
+ **IAM**. To learn more on Azure roles, see
[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). ### Delete in the portal
template.
}, "resources": [ {
- "scope": "/",
+ "scope": "/",
"type": "Microsoft.Management/managementGroups/subscriptions", "apiVersion": "2020-05-01", "name": "[concat(parameters('targetMgId'), '/', parameters('subscriptionId'))]",
template.
1. In the menu that opens, select if you want a new or use an existing management group. - Selecting new will create a new management group.
- - Selecting an existing will present you with a drop-down of all the management groups you can
- move to this management group.
+ - Selecting an existing will present you with a dropdown list of all the management groups you
+ can move to this management group.
:::image type="content" source="./media/add_context_MG.png" alt-text="Screenshot of the 'Add management group' options for creating a new management group." border="false":::
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/management-groups/overview.md
Title: Organize your resources with management groups - Azure Governance
-description: Learn about the management groups, how their permissions work, and how to use them.
+description: Learn about the management groups, how their permissions work, and how to use them.
Last updated 04/28/2021
you can assign your own account as owner of the root management group.
the hierarchy. - No one is given default access to the root management group. Azure AD Global Administrators are the only users that can elevate themselves to gain access. Once they have access to the root
- management group, the global administrators can assign any Azure role to other users to manage
+ management group, the global administrators can assign any Azure role to other users to manage
it. - In SDK, the root management group, or 'Tenant Root', operates as a management group.
The following chart shows the list of roles and the supported actions on managem
|Resource Policy Contributor | | | | | | X | | |User Access Administrator | | | | | X | X | |
-\*: MG Contributor and MG Reader only allow users to do those actions on the management group scope.
+\*: MG Contributor and MG Reader only allow users to do those actions on the management group scope.
\*\*: Role Assignments on the Root management group aren't required to move a subscription or management group to and from it. See [Manage your resources with management groups](manage.md) for details on moving items within the hierarchy.
since both are custom-defined fields when creating a management group.
... { "Name": "MG Test Custom Role",
- "Id": "id",
+ "Id": "id",
"IsCustom": true, "Description": "This role provides members understand custom roles.", "Actions": [
There are limitations that exist when using custom roles on management groups.
> [!IMPORTANT] > Adding a management group to `AssignableScopes` is currently in preview. This preview version is
-> provided without a service level agreement, and it's not recommended for production workloads.
+> provided without a service-level agreement, and it's not recommended for production workloads.
> Certain features might not be supported or might have constrained capabilities. For more > information, see > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
governance Assign Policy Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-azurecli.md
Title: "Quickstart: New policy assignment with Azure CLI" description: In this quickstart, you use Azure CLI to create an Azure Policy assignment to identify non-compliant resources. Last updated 03/31/2021-+ # Quickstart: Create a policy assignment to identify non-compliant resources with Azure CLI
The preceding command uses the following information:
- **Name** - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used. - **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_.-- **Policy** – The policy definition ID, based on which you're using to create the assignment. In
+- **Policy** - The policy definition ID, based on which you're using to create the assignment. In
this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. To get the policy definition ID, run this command: `az policy definition list --query "[?displayName=='Audit VMs that do not use managed disks']"`
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-bicep.md
To learn more about assigning policies to validate that new resources are compli
tutorial for: > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-dotnet.md
The preceding commands use the following information:
_audit-vm-manageddisks_. - **displayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_.-- **policyDefID** – The policy definition path, based on which you're using to create the
+- **policyDefID** - The policy definition path, based on which you're using to create the
assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. - **description** - A deeper explanation of what the policy does or why it's assigned to this scope.
governance Assign Policy Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-javascript.md
The preceding commands use the following information:
_audit-vm-manageddisks_. - **displayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_.-- **policyDefID** – The policy definition path, based on which you're using to create the
+- **policyDefID** - The policy definition path, based on which you're using to create the
assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. - **description** - A deeper explanation of what the policy does or why it's assigned to this scope.
governance Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-portal.md
To learn more about assigning policies to validate that new resources are compli
tutorial for: > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-powershell.md
The preceding commands use the following information:
- **Name** - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used. - **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_.-- **Definition** – The policy definition, based on which you're using to create the assignment. In
+- **Definition** - The policy definition, based on which you're using to create the assignment. In
this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. - **Scope** - A scope determines what resources or grouping of resources the policy assignment gets enforced on. It could range from a subscription to resource groups. Be sure to replace
To learn more about assigning policies to validate that new resources are compli
tutorial for: > [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Creating and managing policies](./tutorials/create-and-manage.md)
governance Assign Policy Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-python.md
The preceding commands use the following information:
Assignment details: - **display_name** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_.-- **policy_definition_id** – The policy definition path, based on which you're using to create the
+- **policy_definition_id** - The policy definition path, based on which you're using to create the
assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. In this example, the policy definition is a built-in and the path doesn't include management group or subscription information.
governance Assign Policy Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/assign-policy-rest-api.md
Title: "Quickstart: New policy assignment with REST API" description: In this quickstart, you use REST API to create an Azure Policy assignment to identify non-compliant resources. Last updated 05/01/2021-+ # Quickstart: Create a policy assignment to identify non-compliant resources with REST API
Request Body:
- **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_. - **Description** - A deeper explanation of what the policy does or why it's assigned to this scope.-- **policyDefinitionId** – The policy definition ID, based on which you're using to create the
+- **policyDefinitionId** - The policy definition ID, based on which you're using to create the
assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed disks_. - **nonComplianceMessages** - Set the message seen when a resource is denied due to non-compliance
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/definition-structure.md
We recommend that you set **mode** to `all` in most cases. All policy definition
the portal use the `all` mode. If you use PowerShell or Azure CLI, you can specify the **mode** parameter manually. If the policy definition doesn't include a **mode** value, it defaults to `all` in Azure PowerShell and to `null` in Azure CLI. A `null` mode is the same as using `indexed` to
-support backwards compatibility.
+support backward compatibility.
`indexed` should be used when creating policies that enforce tags or locations. While not required, it prevents resources that don't support tags and locations from showing up as non-compliant in the
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
### Common metadata properties - `version` (string): Tracks details about the version of the contents of a policy definition.-- `category` (string): Determines under which category in Azure portal the policy definition is
+- `category` (string): Determines under which category in the Azure portal the policy definition is
displayed. - `preview` (boolean): True or false flag for if the policy definition is _preview_. - `deprecated` (boolean): True or false flag for if the policy definition has been marked as
_common_ properties used by Azure Policy and in built-ins. Each `metadata` prope
## Parameters Parameters help simplify your policy management by reducing the number of policy definitions. Think
-of parameters like the fields on a form – `name`, `address`, `city`, `state`. These parameters
+of parameters like the fields on a form - `name`, `address`, `city`, `state`. These parameters
always stay the same, however their values change based on the individual filling out the form. Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
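To make that reuse concrete, here is a small hedged sketch that assigns one parameterized definition twice with different values; the 'Allowed locations' definition and `listOfAllowedLocations` parameter name are illustrative assumptions, not taken from this article.

```powershell
# Sketch: reuse one parameterized definition with different values per assignment.
# 'Allowed locations' and 'listOfAllowedLocations' are illustrative names.
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }

New-AzPolicyAssignment -Name 'allowed-locations-dev' -Scope '/subscriptions/<dev-sub-id>' `
    -PolicyDefinition $definition -PolicyParameterObject @{ listOfAllowedLocations = @('eastus') }

New-AzPolicyAssignment -Name 'allowed-locations-prod' -Scope '/subscriptions/<prod-sub-id>' `
    -PolicyDefinition $definition -PolicyParameterObject @{ listOfAllowedLocations = @('eastus', 'westus2') }
```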
properties](#parameter-properties).
### strongType
-Within the `metadata` property, you can use **strongType** to provide a multi-select list of options
+Within the `metadata` property, you can use **strongType** to provide a multiselect list of options
within the Azure portal. **strongType** can be a supported _resource type_ or an allowed value. To
-determine if a _resource type_ is valid for **strongType**, use
+determine whether a _resource type_ is valid for **strongType**, use
[Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider). The format for a _resource type_ **strongType** is `<Resource Provider>/<Resource Type>`. For example, `Microsoft.Network/virtualNetworks/subnets`.
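Since the article already points to `Get-AzResourceProvider` for this check, a short sketch of that lookup (using `Microsoft.Network` as the example namespace) is:

```powershell
# List resource types under a namespace to confirm a valid strongType value,
# for example Microsoft.Network/virtualNetworks/subnets.
(Get-AzResourceProvider -ProviderNamespace 'Microsoft.Network').ResourceTypes |
    Select-Object -ExpandProperty ResourceTypeName
```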
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/effects.md
meet the designed governance controls of Azure Policy. With a
[Resource Provider mode](./definition-structure.md#resource-provider-modes), the Resource Provider manages the evaluation and outcome and reports the results back to Azure Policy. -- **Disabled** is checked first to determine if the policy rule should be evaluated.
+- **Disabled** is checked first to determine whether the policy rule should be evaluated.
- **Append** and **Modify** are then evaluated. Since either could alter the request, a change made may prevent an audit or deny effect from triggering. These effects are only available with a Resource Manager mode.
manages the evaluation and outcome and reports the results back to Azure Policy.
- **Audit** is evaluated last. After the Resource Provider returns a success code on a Resource Manager mode request,
-**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine if additional compliance
+**AuditIfNotExists** and **DeployIfNotExists** evaluate to determine whether additional compliance
logging or action is required. Additionally, `PATCH` requests that only modify `tags` related fields restricts policy evaluation to
related resources to match.
### AuditIfNotExists example
-Example: Evaluates Virtual Machines to determine if the Antimalware extension exists then audits
-when missing.
+Example: Evaluates Virtual Machines to determine whether the Antimalware extension exists then
+audits when missing.
```json {
related resources to match and the template deployment to execute.
### DeployIfNotExists example
-Example: Evaluates SQL Server databases to determine if transparentDataEncryption is enabled. If
-not, then a deployment to enable is executed.
+Example: Evaluates SQL Server databases to determine whether transparentDataEncryption is enabled.
+If not, then a deployment to enable is executed.
```json "if": {
Gatekeeper v3 admission control rule.
passed via **values** from Azure Policy. - **constraint** (required) - The CRD implementation of the Constraint template. Uses parameters passed via **values** as
- `{{ .Values.<valuename> }}`. In the example below, these values are `{{ .Values.cpuLimit }}` and
- `{{ .Values.memoryLimit }}`.
+ `{{ .Values.<valuename> }}`. In the following example, these values are `{{ .Values.cpuLimit }}`
+ and `{{ .Values.memoryLimit }}`.
- **values** (optional) - Defines any parameters and values to pass to the Constraint. Each value must exist in the Constraint template CRD.
needed for remediation and the **operations** used to add, update, or remove tag
The **operations** property array makes it possible to alter several tags in different ways from a single policy definition. Each operation is made up of **operation**, **field**, and **value** properties. Operation determines what the remediation task does to the tags, field determines which
-tag is altered, and value defines the new setting for that tag. The example below makes the
+tag is altered, and value defines the new setting for that tag. The following example makes the
following tag changes: - Sets the `environment` tag to "Test", even if it already exists with a different value.
is applied only when evaluating requests with API version greater or equals to '
## Layering policy definitions
-A resource may be impacted by several assignments. These assignments may be at the same scope or at
+A resource may be affected by several assignments. These assignments may be at the same scope or at
different scopes. Each of these assignments is also likely to have a different effect defined. The condition and effect for each policy is independently evaluated. For example:
governance Evaluate Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/evaluate-impact.md
_DeployIfNotExists_.
Once you've validated your new policy definition is reporting correctly on existing resources, it's time to look at the impact of the policy when resources get created or updated. If the policy definition supports effect parameterization, use [Audit](./effects.md#audit). This configuration
-allows you to monitor the creation and updating of resources to see if the new policy definition
-triggers an entry in Azure Activity log for a resource that is non-compliant without impacting
-existing work or requests.
+allows you to monitor the creation and updating of resources to see whether the new policy
+definition triggers an entry in Azure Activity log for a resource that is non-compliant without
+impacting existing work or requests.
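For reference, a parameterized effect usually looks something like the following sketch; the storage account condition is only a placeholder for whatever rule is being tested. Flipping the assignment's `effect` parameter between _Audit_ and _Deny_ later is what makes this staged rollout possible.

```json
{
  "parameters": {
    "effect": {
      "type": "String",
      "allowedValues": [ "Audit", "Deny", "Disabled" ],
      "defaultValue": "Audit"
    }
  },
  "policyRule": {
    // Placeholder condition; substitute the rule under test
    "if": {
      "field": "type",
      "equals": "Microsoft.Storage/storageAccounts"
    },
    "then": {
      "effect": "[parameters('effect')]"
    }
  }
}
```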
It's recommended to both update and create new resources that match your policy definition to see that the _Audit_ effect is correctly being triggered when expected. Be on the lookout for resource
-requests that shouldn't be impacted by the new policy definition that trigger the _Audit_ effect.
-These impacted resources are another example of _false positives_ and must be fixed in the policy
+requests that shouldn't be affected by the new policy definition that trigger the _Audit_ effect.
+These affected resources are another example of _false positives_ and must be fixed in the policy
definition before full implementation. In the event the policy definition is changed at this stage of testing, it's recommended to begin
capabilities.
- Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
+ [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/event-overview.md
Azure Policy events enable applications to react to state changes. This integrat
the need for complicated code or expensive and inefficient polling services. Instead, events are pushed through [Azure Event Grid](../../../event-grid/index.yml) to subscribers such as [Azure Functions](../../../azure-functions/index.yml),
-[Azure Logic Apps](../../../logic-apps/index.yml), or even to your own custom http listener.
+[Azure Logic Apps](../../../logic-apps/index.yml), or even to your own custom HTTP listener.
Critically, you only pay for what you use. Azure Policy events are sent to the Azure Event Grid, which provides reliable delivery services to
for a full tutorial.
## Available Azure Policy events
-Event grid uses [event subscriptions](../../../event-grid/concepts.md#event-subscriptions) to route
+Event Grid uses [event subscriptions](../../../event-grid/concepts.md#event-subscriptions) to route
event messages to subscribers. Azure Policy event subscriptions can include three types of events: | Event type | Description |
event messages to subscribers. Azure Policy event subscriptions can include thre
Azure Policy events contain all the information you need to respond to changes in your data. You can identify an Azure Policy event when the `eventType` property starts with "Microsoft.PolicyInsights".
-Additional information about the usage of Event Grid event properties is documented in
+Additional information about the usage of Event Grid event properties is documented in
[Event Grid event schema](../../../event-grid/event-schema.md). | Property | Type | Description |
Learn more about Event Grid and give Azure Policy state change events a try:
- [Route policy state change events to Event Grid with Azure CLI](../tutorials/route-state-change-events.md) - [Azure Policy schema details for Event Grid](../../../event-grid/event-schema-policy.md)-- [About Event Grid](../../../event-grid/overview.md)
+- [About Event Grid](../../../event-grid/overview.md)
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
built-in content, Guest Configuration handles loading these tools automatically.
|Operating system|Validation tool|Notes| |-|-|-| |Windows|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v2| Side-loaded to a folder only used by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to system path.|
-|Linux|[Chef InSpec](https://www.chef.io/inspec/)| Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. |
+|Linux|[Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in default location and added to system path. Dependencies for the InSpec package including Ruby and Python are installed as well. |
### Validation frequency
compatible. The following table shows a list of supported operating systems on A
|Microsoft|Windows Client|Windows 10| |OpenLogic|CentOS|7.3 -8| |Red Hat|Red Hat Enterprise Linux|7.4 - 8|
-|Suse|SLES|12 SP3-SP5, 15|
+|SUSE|SLES|12 SP3-SP5, 15|
Custom virtual machine images are supported by Guest Configuration policy definitions as long as they're one of the operating systems in the table above.
Group](../../../virtual-network/manage-network-security-group.md#create-a-securi
used to reference the Guest Configuration service rather than manually maintaining the [list of IP ranges](https://www.microsoft.com/en-us/download/details.aspx?id=56519) for Azure datacenters.
-### Communicate over private link in Azure
+### Communicate over Private Link in Azure
Virtual machines can use [private link](../../../private-link/private-link-overview.md) for communication to the Guest Configuration service. Apply tag with the name `EnablePrivateNetworkGC`
are met on the machine. The requirements are described in section
### What is a Guest Assignment?
-When an Azure Policy is assigned, if it's in the category "Guest Configuration"
-there's metadata included to describe a Guest Assignment.
-You can think of a Guest Assignment as a link between a machine and an Azure Policy scenario.
-For example, the snippet below associates the Azure Windows Baseline configuration
-with minimum version `1.0.0` to any machines in scope of the policy. By default,
-the Guest Assignment will only perform an audit of the machine.
+When an Azure Policy is assigned, if it's in the category "Guest Configuration" there's metadata
+included to describe a Guest Assignment. You can think of a Guest Assignment as a link between a
+machine and an Azure Policy scenario. For example, the following snippet associates the Azure
+Windows Baseline configuration with minimum version `1.0.0` to any machines in scope of the policy.
+By default, the Guest Assignment will only perform an audit of the machine.
```json "metadata": {
the Guest Assignment will only perform an audit of the machine.
//additional metadata properties exist ```
-Guest Assignments are created automatically per machine by the Guest Configuration
-service. The resource type is `Microsoft.GuestConfiguration/guestConfigurationAssignments`.
-Azure Policy uses the **complianceStatus** property of the Guest Assignment resource
-to report compliance status. For more information, see [getting compliance
-data](../how-to/get-compliance-data.md).
+Guest Assignments are created automatically per machine by the Guest Configuration service. The
+resource type is `Microsoft.GuestConfiguration/guestConfigurationAssignments`. Azure Policy uses the
+**complianceStatus** property of the Guest Assignment resource to report compliance status. For more
+information, see [getting compliance data](../how-to/get-compliance-data.md).
#### Auditing operating system settings following industry baselines
governance Initiative Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/initiative-definition-structure.md
there are some _common_ properties used by Azure Policy and in built-ins.
- `version` (string): Tracks details about the version of the contents of a policy initiative definition.-- `category` (string): Determines under which category in Azure portal the policy definition is
+- `category` (string): Determines under which category in the Azure portal the policy definition is
displayed. > [!NOTE]
there are some _common_ properties used by Azure Policy and in built-ins.
## Parameters Parameters help simplify your policy management by reducing the number of policy definitions. Think
-of parameters like the fields on a form – `name`, `address`, `city`, `state`. These parameters
+of parameters like the fields on a form - `name`, `address`, `city`, `state`. These parameters
always stay the same, however their values change based on the individual filling out the form. Parameters work the same way when building policy initiatives. By including parameters in a policy initiative definition, you can reuse that parameter in the included policies.
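As a hedged sketch, with the parameter name and values purely illustrative, an initiative-level parameter that the included policy definitions can reuse might be declared like this:

```json
"parameters": {
  "allowedLocations": {
    "type": "Array",
    "metadata": {
      "displayName": "Allowed locations",
      "description": "The list of locations that resources can be deployed to.",
      "strongType": "location"
    },
    // Example default; assignments can override it
    "defaultValue": [ "eastus2" ]
  }
}
```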
properties](#parameter-properties).
### strongType
-Within the `metadata` property, you can use **strongType** to provide a multi-select list of options
-within the Azure portal. **strongType** can be a supported _resource type_ or an allowed
-value. To determine if a _resource type_ is valid for **strongType**, use
+Within the `metadata` property, you can use **strongType** to provide a multiselect list of options
+within the Azure portal. **strongType** can be a supported _resource type_ or an allowed value. To
+determine whether a _resource type_ is valid for **strongType**, use
[Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider). Some resource types not returned by **Get-AzResourceProvider** are supported. Those resource types
governance Policy As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-as-code.md
Examples of these file formats are available in the
The recommended general workflow of Azure Policy as Code looks like this diagram: :::image type="complex" source="../media/policy-as-code/policy-as-code-workflow.png" alt-text="Diagram showing Azure Policy as Code workflow boxes from Create to Test to Deploy." border="false":::
- The diagram showing the Azure Policy as Code workflow boxes. Create covers creation of the policy and initiative definitions. Test covers assignment with enforcement mode disabled. A gateway check for the compliance status is followed by granting the assignments M S I permissions and remediating resources. Deploy covers updating the assignment with enforcement mode enabled.
+ The diagram showing the Azure Policy as Code workflow boxes. Create covers creation of the policy and initiative definitions. Test covers assignment with enforcement mode disabled. A gateway check for the compliance status is followed by granting the assignments M S I permissions and remediating resources. Deploy covers updating the assignment with enforcement mode enabled.
:::image-end::: ### Create and update policy definitions
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/policy-for-kubernetes.md
existing cluster.
on GitHub. > [!NOTE]
- > Because of the relationship between Azure Policy Add-on and the resource group id, Azure
+ > Because of the relationship between Azure Policy Add-on and the resource group ID, Azure
> Policy supports only one AKS Engine cluster for each resource group. To validate that the add-on installation was successful and that the _azure-policy_ and _gatekeeper_
following steps:
1. In the left pane of the Azure Policy page, select **Definitions**.
-1. From the Category drop-down list box, use **Select all** to clear the filter and then select
+1. From the Category dropdown list box, use **Select all** to clear the filter and then select
**Kubernetes**. 1. Select the policy definition, then select the **Assign** button.
governance Recommended Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/recommended-policies.md
it's recommended to transition these policy assignments from one per resource to
- Review examples at [Azure Policy samples](../samples/index.md). - Review [Understanding policy effects](./effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Author Policies For Arrays https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/author-policies-for-arrays.md
from each array member. Example:
``` This condition is true if the values of all `property` properties in `objectArray` are equal to
-`"value"`. For more examples, see [additional \[\*\] alias
+`"value"`. For more examples, see [Additional \[\*\] alias
examples](#additional--alias-examples). When using the `field()` function to reference an array alias, the returned value is an array of all
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/determine-non-compliance.md
To view the compliance details, follow these steps:
1. On the **Overview** or **Compliance** page, select a policy in a **compliance state** that is _Non-compliant_.
-1. Under the **Resource compliance** tab of the **Policy compliance** page, right-click or select
- the ellipsis of a resource in a **compliance state** that is _Non-compliant_. Then select **View
- compliance details**.
+1. Under the **Resource compliance** tab of the **Policy compliance** page, select and hold (or
+ right-click) or select the ellipsis of a resource in a **compliance state** that is
+ _Non-compliant_. Then select **View compliance details**.
:::image type="content" source="../media/determine-non-compliance/view-compliance-details.png" alt-text="Screenshot of the 'View compliance details' link on the Resource compliance tab." border="false":::
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/get-compliance-data.md
on:
schedule: - cron: '0 8 * * *' # runs every morning 8am jobs:
- assess-policy-compliance:
+ assess-policy-compliance:
runs-on: ubuntu-latest steps: - name: Login to Azure
While the compliance scan is running, checking the `$job` object outputs results
```azurepowershell-interactive $job
-Id Name PSJobTypeName State HasMoreData Location Command
- - -- -- -- -
-2 Long Running O… AzureLongRunni… Running True localhost Start-AzPolicyCompliance…
+Id Name PSJobTypeName State HasMoreData Location Command
+-- - - -- -- -- -
+2 Long Running O... AzureLongRunni... Running True localhost Start-AzPolicyCompliance...
``` When the compliance scan completes, the **State** property changes to _Completed_.
evaluation for the resulting compliance state:
> existence condition to be FALSE to be non-compliant. When TRUE, the IF condition triggers > evaluation of the existence condition for the related resources.
-For example, assume that you have a resource group – ContsoRG, with some storage accounts
+For example, assume that you have a resource group - ContosoRG, with some storage accounts
(highlighted in red) that are exposed to public networks. :::image type="complex" source="../media/getting-compliance-data/resource-group01.png" alt-text="Diagram of storage accounts exposed to public networks in the Contoso R G resource group." border="false":::
- Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red.
+ Diagram showing images for five storage accounts in the Contoso R G resource group. Storage accounts one and three are blue, while storage accounts two, four, and five are red.
:::image-end::: In this example, you need to be wary of security risks. Now that you've created a policy assignment,
Besides **Compliant** and **Non-compliant**, policies and resources have four ot
- **Not registered**: The Azure Policy Resource Provider hasn't been registered or the account logged in doesn't have permission to read compliance data.
-Azure Policy uses the **type**, **name**, or **kind** fields in the definition to determine if a
-resource is a match. When the resource matches, it's considered applicable and has a status of
+Azure Policy uses the **type**, **name**, or **kind** fields in the definition to determine whether
+a resource is a match. When the resource matches, it's considered applicable and has a status of
either **Compliant**, **Non-compliant**, or **Exempt**. If either **type**, **name**, or **kind** is the only property in the definition, then all included and non-exempt resources are considered applicable and are evaluated.
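For example, a minimal `if` block that matches on **type** alone, making every storage account in scope applicable, could be sketched as follows (the resource type is just an example):

```json
"if": {
  // Any resource of this type is applicable and receives a compliance state
  "field": "type",
  "equals": "Microsoft.Storage/storageAccounts"
},
"then": {
  "effect": "audit"
}
```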
history.
:::image type="content" source="../media/getting-compliance-data/compliance-components.png" alt-text="Screenshot of Component Compliance tab and compliance details for a Resource Provider mode assignment." border="false":::
-Back on the resource compliance page, right-click on the row of the event you would like to gather
-more details on and select **Show activity logs**. The activity log page opens and is pre-filtered
-to the search showing details for the assignment and the events. The activity log provides
-additional context and information about those events.
+Back on the resource compliance page, select and hold (or right-click) on the row of the event you
+would like to gather more details on and select **Show activity logs**. The activity log page opens
+and is pre-filtered to the search showing details for the assignment and the events. The activity
+log provides additional context and information about those events.
:::image type="content" source="../media/getting-compliance-data/compliance-activitylog.png" alt-text="Screenshot of the Activity Log for Azure Policy activities and evaluations." border="false":::
and the definition information for each assignment. Each policy object in the hi
### Query for resources In the example above, **value.policyAssignments.policyDefinitions.results.queryResultsUri** provides
-a sample Uri for all non-compliant resources for a specific policy definition. Looking at the
+a sample URI for all non-compliant resources for a specific policy definition. Looking at the
**$filter** value, ComplianceState is equal (eq) to 'NonCompliant', PolicyAssignmentId is specified for the policy definition, and then the PolicyDefinitionId itself. The reason for including the PolicyAssignmentId in the filter is because the PolicyDefinitionId could exist in several policy or
definition.
### View events When a resource is created or updated, a policy evaluation result is generated. Results are called
-_policy events_. Use the following Uri to view recent policy events associated with the
+_policy events_. Use the following URI to view recent policy events associated with the
subscription. ```http
governance Guest Configuration Create Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-group-policy.md
Title: How to create Guest Configuration policy definitions from Group Policy baseline for Windows
-description: Learn how to convert Group Policy from the Windows Server 2019 Security Baseline into a policy definition.
+description: Learn how to convert Group Policy from the Windows Server 2019 Security Baseline into a policy definition.
Last updated 03/31/2021
Guest Configuration and Baseline Management modules.
```azurepowershell-interactive $NewGuestConfigurationPolicySplat = @{
- ContentUri = $Uri
- DisplayName = 'Server 2019 Configuration Baseline'
- Description 'Validation of using a completely custom baseline configuration for Windows VMs'
+ ContentUri = $Uri
+ DisplayName = 'Server 2019 Configuration Baseline'
+    Description = 'Validation of using a completely custom baseline configuration for Windows VMs'
Path = 'C:\git\policyfiles\policy'
- Platform = Windows
+    Platform = 'Windows'
} New-GuestConfigurationPolicy @NewGuestConfigurationPolicySplat ```
initiative with [Portal](../assign-policy-portal.md), [Azure CLI](../assign-poli
Assigning a policy definition with _DeployIfNotExists_ effect requires an additional level of access. To grant the least privilege, you can create a custom role definition that extends
-**Resource Policy Contributor**. The example below creates a role named **Resource Policy
+**Resource Policy Contributor**. The following example creates a role named **Resource Policy
Contributor DINE** with the additional permission _Microsoft.Authorization/roleAssignments/write_. ```azurepowershell-interactive
governance Guest Configuration Create Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create-linux.md
Title: How to create Guest Configuration policies for Linux description: Learn how to create an Azure Policy Guest Configuration policy for Linux. Last updated 03/31/2021-+ # How to create Guest Configuration policies for Linux
non-Azure machine.
> [!IMPORTANT] > Custom policy definitions with Guest Configuration in the Azure Government and
-> Azure China environments is a Preview feature.
+> Azure China 21Vianet environments is a Preview feature.
> > The Guest Configuration extension is required to perform audits in Azure virtual machines. To > deploy the extension at scale across all Linux machines, assign the following policy definition:
uncompressed.
Guest Configuration on Linux uses the `ChefInSpecResource` resource to provide the engine with the name of the [InSpec profile](https://www.inspec.io/docs/reference/profiles/). **Name** is the only
-required resource property. Create a YaML file and a Ruby script file, as detailed below.
+required resource property. Create a YAML file and a Ruby script file, as detailed below.
-First, create the YaML file used by InSpec. The file provides basic information about the
+First, create the YAML file used by InSpec. The file provides basic information about the
environment. An example is given below: ```yaml
AuditFilePathExists -out ./Config
``` Save this file with name `config.ps1` in the project folder. Run it in PowerShell by executing
-`./config.ps1` in the terminal. A new mof file will be created.
+`./config.ps1` in the terminal. A new MOF file is created.
The `Node AuditFilePathExists` command isn't technically required but it produces a file named
-`AuditFilePathExists.mof` rather than the default, `localhost.mof`. Having the .mof file name follow
+`AuditFilePathExists.mof` rather than the default, `localhost.mof`. Having the .MOF file name follow
the configuration makes it easy to organize many files when operating at scale. You should now have a project structure as below:
You should now have a project structure as below:
/ linux-path inspec.yml / controls
- linux-path.rb
+ linux-path.rb
``` The supporting files must be packaged together. The completed package is used by Guest Configuration
Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
storage account - **Force**: Overwrite existing package in the storage account with the same name
-The example below publishes the package to a storage container name 'guestconfiguration'.
+The following example publishes the package to a storage container named 'guestconfiguration'.
```azurepowershell-interactive Publish-GuestConfigurationPackage -Path ./AuditFilePathExists/AuditFilePathExists.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
package and creates a policy definition.
Parameters of the `New-GuestConfigurationPolicy` cmdlet: -- **ContentUri**: Public http(s) uri of Guest Configuration content package.
+- **ContentUri**: Public HTTP(S) URI of Guest Configuration content package.
- **DisplayName**: Policy display name. - **Description**: Policy description. - **Parameter**: Policy parameters provided in hashtable format.
describe file(attr_path) do
end ```
-Add the property **AttributesYmlContent** in your configuration with any string as the value.
-The Guest Configuration agent automatically creates the YAML file
-used by InSpec to store attributes. See the example below.
+Add the property **AttributesYmlContent** in your configuration with any string as the value. The
+Guest Configuration agent automatically creates the YAML file used by InSpec to store attributes.
+See the following example.
```powershell Configuration AuditFilePathExists
unique from previous versions. You can include a version number in the name such
specify that the package should be considered newer or older than other packages. Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet following each of
-the explanations below.
+the explanations that follow.
- **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version number greater than what is currently published.
governance Guest Configuration Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/guest-configuration-create.md
non-Azure machine.
> [!IMPORTANT] > Custom policy definitions with Guest Configuration in the Azure Government and
-> Azure China environments is a Preview feature.
+> Azure China 21Vianet environments is a Preview feature.
> > The Guest Configuration extension is required to perform audits in Azure virtual machines. To > deploy the extension at scale across all Windows machines, assign the following policy
For an overview of DSC concepts and terminology, see
When Guest Configuration audits a machine the sequence of events is different than in Windows PowerShell DSC.
-1. The agent first runs `Test-TargetResource` to determine if the configuration is in the correct
- state.
-1. The boolean value returned by the function determines if the Azure Resource Manager status for
+1. The agent first runs `Test-TargetResource` to determine whether the configuration is in the
+ correct state.
+1. The Boolean value returned by the function determines if the Azure Resource Manager status for
the Guest Assignment should be Compliant/Not-Compliant. 1. The provider runs `Get-TargetResource` to return the current state of each setting so details are available both about why a machine isn't compliant and to confirm that the current state is
return @{
The Reasons property must be added to the schema MOF for the resource as an embedded class. ```mof
-[ClassVersion("1.0.0.0")]
+[ClassVersion("1.0.0.0")]
class Reason { [Read] String Phrase;
AuditBitLocker
``` Run this script in a PowerShell terminal or save this file with name `config.ps1` in the project
-folder. Run it in PowerShell by executing `./config.ps1` in the terminal. A new mof file is created.
+folder. Run it in PowerShell by executing `./config.ps1` in the terminal. A new MOF file is created.
The `Node AuditBitlocker` command isn't technically required but it produces a file named
-`AuditBitlocker.mof` rather than the default, `localhost.mof`. Having the .mof file name follow the
+`AuditBitlocker.mof` rather than the default, `localhost.mof`. Having the .MOF file name follow the
configuration makes it easy to organize many files when operating at scale. Once the MOF is compiled, the supporting files must be packaged together. The completed package is
New-GuestConfigurationPackage -Name AuditBitlocker -Configuration ./AuditBitlock
The next step is to publish the file to Azure Blob Storage. There are no special requirements for the storage account, but it's a good idea to host the file in a region near your machines. If you
-don't have a storage account, use the following example. The commands below, including
+don't have a storage account, use the following example. The following commands, including
`Publish-GuestConfigurationPackage`, require the `Az.Storage` module. ```azurepowershell-interactive
Parameters of the `Publish-GuestConfigurationPackage` cmdlet:
storage account - **Force**: Overwrite existing package in the storage account with the same name
-The example below publishes the package to a storage container name 'guestconfiguration'.
+The following example publishes the package to a storage container named 'guestconfiguration'.
```azurepowershell-interactive Publish-GuestConfigurationPackage -Path ./AuditBitlocker.zip -ResourceGroupName myResourceGroupName -StorageAccountName myStorageAccountName
package and creates a policy definition.
Parameters of the `New-GuestConfigurationPolicy` cmdlet: -- **ContentUri**: Public http(s) uri of Guest Configuration content package.
+- **ContentUri**: Public HTTP(S) URI of Guest Configuration content package.
- **DisplayName**: Policy display name. - **Description**: Policy description. - **Parameter**: Policy parameters provided in hashtable format.
unique from previous versions. You can include a version number in the name such
specify that the package should be considered newer or older than other packages. Second, update the parameters used with the `New-GuestConfigurationPolicy` cmdlet following each of
-the explanations below.
+the explanations that follow.
- **Version**: When you run the `New-GuestConfigurationPolicy` cmdlet, you must specify a version number greater than what is currently published.
governance Programmatically Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/programmatically-create.md
This article walks you through programmatically creating and managing policies. Azure Policy definitions enforce different rules and effects over your resources. Enforcement makes sure that
-resources stay compliant with your corporate standards and service level agreements.
+resources stay compliant with your corporate standards and service-level agreements.
For information about compliance, see [getting compliance data](get-compliance-data.md).
Use the following procedure to create a policy definition.
Replace the preceding {subscriptionId} with the ID of your subscription or {managementGroupId} with the ID of your [management group](../../management-groups/overview.md).
 For more information about the structure of the query, see [Azure Policy Definitions – Create or Update](/rest/api/policy/policydefinitions/createorupdate)
+ For more information about the structure of the query, see
+ [Azure Policy Definitions - Create or Update](/rest/api/policy/policydefinitions/createorupdate)
and
- [Policy Definitions – Create or Update At Management Group](/rest/api/policy/policydefinitions/createorupdateatmanagementgroup)
+ [Policy Definitions - Create or Update At Management Group](/rest/api/policy/policydefinitions/createorupdateatmanagementgroup).
Use the following procedure to create a policy assignment and assign the policy definition at the resource group level.
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/how-to/remediate-resources.md
doesn't impact its operation with Azure Policy.
> [!IMPORTANT] > In the following scenarios, the assignment's managed identity must be > [manually granted access](#manually-configure-the-managed-identity) or the remediation deployment
-> will fail:
+> fails:
> > - If the assignment is created through SDK
-> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy assignment
+> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy
+> assignment
> - If the template accesses properties on resources outside the scope of the policy assignment ## Configure policy definition
The first step is to define the roles that **deployIfNotExists** and **modify**
definition to successfully deploy the content of your included template. Under the **details** property, add a **roleDefinitionIds** property. This property is an array of strings that match roles in your environment. For a full example, see the [deployIfNotExists
-example](../concepts/effects.md#deployifnotexists-example) or the [modify examples](../concepts/effects.md#modify-examples).
+example](../concepts/effects.md#deployifnotexists-example) or the
+[modify examples](../concepts/effects.md#modify-examples).
```json "details": {
To create a **remediation task**, follow these steps:
1. On the **New remediation task** page, filter the resources to remediate by using the **Scope** ellipses to pick child resources from where the policy is assigned (including down to the
- individual resource objects). Additionally, use the **Locations** drop-down to further filter the
- resources. Only resources listed in the table will be remediated.
+ individual resource objects). Additionally, use the **Locations** dropdown list to further filter
+ the resources. Only resources listed in the table will be remediated.
:::image type="content" source="../media/remediate-resources/select-resources.png" alt-text="Screenshot of the Remediate node and the grid of resources to remediate." border="false":::
To create a **remediation task**, follow these steps:
progress. The filtering used for the task is shown along with a list of the resources being remediated.
-1. From the **Remediation task** page, right-click on a resource to view either the remediation
+1. From the **Remediation task** page, select and hold (or right-click) on a resource to view either the remediation
task's deployment or the resource. At the end of the row, select on **Related events** to see details such as an error message.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/overview.md
that assignment. Subscopes can be excluded, if necessary. For more information,
[Scope in Azure Policy](./concepts/scope.md). Azure Policy uses a [JSON format](./concepts/definition-structure.md) to form the logic the
-evaluation uses to determine if a resource is compliant or not. Definitions include metadata and the
-policy rule. The defined rule can use functions, parameters, logical operators, conditions, and
-property [aliases](./concepts/definition-structure.md#aliases) to match exactly the scenario you
+evaluation uses to determine whether a resource is compliant or not. Definitions include metadata
+and the policy rule. The defined rule can use functions, parameters, logical operators, conditions,
+and property [aliases](./concepts/definition-structure.md#aliases) to match exactly the scenario you
want. The policy rule determines which resources in the scope of the assignment get evaluated. ### Understand evaluation outcomes
For more information about policy parameters, see
### Initiative definition
-An initiative definition is a collection of policy definitions that are tailored towards achieving
+An initiative definition is a collection of policy definitions that are tailored toward achieving
a singular overarching goal. Initiative definitions simplify managing and assigning policy definitions. They simplify by grouping a set of policies as one single item. For example, you could create an initiative titled **Enable Monitoring in Azure Security Center**, with a goal to monitor
all the available security recommendations in your Azure Security Center.
Under this initiative, you would have policy definitions such as: -- **Monitor unencrypted SQL Database in Security Center** – For monitoring unencrypted SQL databases
+- **Monitor unencrypted SQL Database in Security Center** - For monitoring unencrypted SQL databases
and servers.-- **Monitor OS vulnerabilities in Security Center** – For monitoring servers that don't satisfy the
+- **Monitor OS vulnerabilities in Security Center** - For monitoring servers that don't satisfy the
configured baseline.-- **Monitor missing Endpoint Protection in Security Center** – For monitoring servers without an
+- **Monitor missing Endpoint Protection in Security Center** - For monitoring servers without an
installed endpoint protection agent. Like policy parameters, initiative parameters help simplify initiative management by reducing
options:
- Use the parameters of the policy definitions within this initiative: In this example, _allowedLocations_ and _allowedSingleLocation_ become initiative parameters for **initiativeC**. - Provide values to the parameters of the policy definitions within this initiative definition. In
- this example, you can provide a list of locations to **policyA**'s parameter –
- **allowedLocations** and **policyB**'s parameter – **allowedSingleLocation**. You can also provide
+ this example, you can provide a list of locations to **policyA**'s parameter -
+ **allowedLocations** and **policyB**'s parameter - **allowedSingleLocation**. You can also provide
values when assigning this initiative. - Provide a list of _value_ options that can be used when assigning this initiative. When you assign this initiative, the inherited parameters from the policy definitions within the initiative, can
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/built-in-policies.md
This page is an index of Azure Policy built-in policy definitions.
-The name of each built-in links to the policy definition in Azure portal. Use the link in the
+The name of each built-in links to the policy definition in the Azure portal. Use the link in the
**Source** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). The built-ins are grouped by the **category** property in **metadata**. To jump to a specific **category**, use the menu on the right
governance Pattern Count Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-count-operator.md
allow inbound Remote Desktop Protocol (RDP) traffic.
### Explanation The core components of the **count** operator are _field_, _where_, and the condition. Each is
-highlighted in the snippet below.
+highlighted in the following snippet.
- _field_ tells count which [alias](../concepts/definition-structure.md#aliases) to evaluate members of. Here, we're looking at the **securityRules\[\*\]** alias _array_ of the network security
highlighted in the snippet below.
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance Pattern Effect Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-effect-details.md
definition while others require several properties.
## Sample 1: Simple effect
-This policy definition checks to see if the tag defined in parameter **tagName** exists on the
+This policy definition checks to see whether the tag defined in parameter **tagName** exists on the
evaluated resource. If the tag doesn't yet exist, the [modify](../concepts/effects.md#modify) effect is triggered to add the tag with the value in parameter **tagValue**.
the _add_ **operation** and the parameters are used to set the tag and its value
This policy definition audits each virtual machine for when an extension, defined in parameters **publisher** and **type**, doesn't exist. It uses [auditIfNotExists](../concepts/effects.md#auditifnotexists) to check a resource related to the
-virtual machine to see if an instance exists that matches the defined parameters. This example
+virtual machine to see whether an instance exists that matches the defined parameters. This example
checks the **extensions** type. :::code language="json" source="~/policy-templates/patterns/pattern-effect-details-2.json":::
checks the **extensions** type.
An **auditIfNotExists** effect requires the **policyRule.then.details** block to define both a **type** and the **existenceCondition** to look for. The **existenceCondition** uses policy language elements, such as [logical operators](../concepts/definition-structure.md#logical-operators), to
-determine if a matching related resource exists. In this example, the values checked against each
-[alias](../concepts/definition-structure.md#aliases) are defined in parameters.
+determine whether a matching related resource exists. In this example, the values checked against
+each [alias](../concepts/definition-structure.md#aliases) are defined in parameters.
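Assuming the parameters described above, the shape of such a **details** block can be sketched roughly as follows; this isn't the article's exact sample, and the aliases are shown only to illustrate how **existenceCondition** compares related-resource properties against parameter values.

```json
"then": {
  "effect": "auditIfNotExists",
  "details": {
    // Related resource type to look for on each evaluated virtual machine
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "existenceCondition": {
      "allOf": [
        {
          "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
          "equals": "[parameters('publisher')]"
        },
        {
          "field": "Microsoft.Compute/virtualMachines/extensions/type",
          "equals": "[parameters('type')]"
        }
      ]
    }
  }
}
```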
## Next steps - Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance Pattern Group With Initiative https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-group-with-initiative.md
parameters. The values are provided when the initiative is assigned.
#### Includes policy definitions Each included policy definition must provide the **policyDefinitionId** and a **parameters** array
-if the policy definition accepts parameters. In the snippet below, the included policy definition
-takes two parameters: **tagName** and **tagValue**. **tagName** is defined with a literal, but
-**tagValue** uses the parameter **costCenterValue** defined by the initiative. This passthrough of
-values improves reuse.
+if the policy definition accepts parameters. In the following snippet, the included policy
+definition takes two parameters: **tagName** and **tagValue**. **tagName** is defined with a
+literal, but **tagValue** uses the parameter **costCenterValue** defined by the initiative. This
+passthrough of values improves reuse.
:::code language="json" source="~/policy-templates/patterns/pattern-group-with-initiative.json" range="30-40":::
values improves reuse.
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance Pattern Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-logical-operators.md
and **anyOf**. They're optional and can be nested to create complex scenarios.
## Sample 1: One logical operator This policy definition evaluates [Azure Cosmos DB](../../../cosmos-db/introduction.md) accounts to
-see if automatic failovers and multiple write locations are configured. When they aren't, the
+see whether automatic failovers and multiple write locations are configured. When they aren't, the
[audit](../concepts/effects.md#audit) triggers and creates a log entry when the non-compliant resource is created or updated.
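A rough sketch of that kind of single **allOf** block, not the article's published sample and with alias names shown purely for illustration, looks like this:

```json
"if": {
  "allOf": [
    {
      "field": "type",
      "equals": "Microsoft.DocumentDB/databaseAccounts"
    },
    {
      // Automatic failover isn't configured
      "field": "Microsoft.DocumentDB/databaseAccounts/enableAutomaticFailover",
      "equals": "false"
    },
    {
      // Multiple write locations aren't configured
      "field": "Microsoft.DocumentDB/databaseAccounts/enableMultipleWriteLocations",
      "equals": "false"
    }
  ]
},
"then": {
  "effect": "audit"
}
```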
This policy definition evaluates resources for a naming pattern. If a resource d
This **policyRule.if** block also includes a single **allOf**, but each condition is wrapped with the **not** logical operator. The conditional inside the **not** logical operator evaluates first
-and then evaluates the **not** to determine if the entire clause is true or false. If both **not**
-logical operators evaluate to true, the policy effect triggers.
+and then evaluates the **not** to determine whether the entire clause is true or false. If both
+**not** logical operators evaluate to true, the policy effect triggers.
## Sample 3: Combining logical operators This policy definition evaluates [Spring on Azure](/azure/developer/java/spring-framework) accounts
-to see if either trace isn't enabled or if trace isn't in a successful state.
+to see whether trace isn't enabled or isn't in a successful state.
:::code language="json" source="~/policy-templates/patterns/pattern-logical-operators-3.json":::
conditions in the **anyOf** are true, the policy effect triggers.
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance Pattern Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-parameters.md
The parameter is then used in the **policyRule.then** block for the _effect_.
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance Pattern Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/pattern-tags.md
# Azure Policy pattern: tags
-[Tags](../../..//azure-resource-manager/management/tag-resources.md) are an important part of
+[Tags](../../../azure-resource-manager/management/tag-resources.md) are an important part of
managing, organizing, and governing your Azure resources. Azure Policy makes it possible to configure tags on your new and existing resources at scale with the [modify](../concepts/effects.md#modify) effect and
update existing resources.
- Review other [patterns and built-in definitions](./index.md). - Review the [Azure Policy definition structure](../concepts/definition-structure.md).-- Review [Understanding policy effects](../concepts/effects.md).
+- Review [Understanding policy effects](../concepts/effects.md).
governance General https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/troubleshoot/general.md
operate as intended.
To troubleshoot your policy definition, do the following: 1. First, wait the appropriate amount of time for an evaluation to finish and compliance results
- to become available in Azure portal or SDK.
+ to become available in the Azure portal or SDK.
1. To start a new evaluation scan with Azure PowerShell or the REST API, see [On-demand evaluation scan](../how-to/get-compliance-data.md#on-demand-evaluation-scan).
Ensure that the domains and ports mentioned in the following articles are open:
The add-on can't reach the Azure Policy service endpoint, and it returns one of the following errors: -- `azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://gov-prod-policy-data.trafficmanager.net/checkDataPolicyCompliance?api-version=2019-01-01-preview: StatusCode=404`
+- `azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://gov-prod-policy-data.trafficmanager.net/checkDataPolicyCompliance?api-version=2019-01-01-preview: StatusCode=404`
- `adal: Refresh request failed. Status Code = '404'. Response body: getting assigned identities for pod kube-system/azure-policy-8c785548f-r882p in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>` #### Cause
governance Create And Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/create-and-manage.md
# Tutorial: Create and manage policies to enforce compliance Understanding how to create and manage policies in Azure is important for staying compliant with
-your corporate standards and service level agreements. In this tutorial, you learn to use Azure
+your corporate standards and service-level agreements. In this tutorial, you learn to use Azure
Policy to do some of the more common tasks related to creating, assigning, and managing policies across your organization, such as:
create a virtual machine in the G series, the request is denied.
> for an initiative definition. - The name of the policy definition - _Require VM SKUs not in the G series_
  - The description of what the policy definition is intended to do – _This policy definition
+ - The description of what the policy definition is intended to do - _This policy definition
enforces that all virtual machines created in this scope have SKUs other than the G series to reduce cost._ - Choose from existing options (such as _Compute_), or create a new category for this policy definition. - Copy the following JSON code and then update it for your needs with: - The policy parameters.
  - The policy rules/conditions, in this case – VM SKU size equal to G series
  - The policy effect, in this case – **Deny**.
+ - The policy rules/conditions, in this case - VM SKU size equal to G series
+ - The policy effect, in this case - **Deny**.
Here's what the JSON should look like. Paste your revised code into the Azure portal.
overview](../overview.md).
1. Policy definitions added to the initiative that have parameters are displayed in a grid. The _value type_ can be 'Default value', 'Set value', or 'Use Initiative Parameter'. If 'Set value' is selected, the related value is entered under _Value(s)_. If the parameter on the policy
- definition has a list of allowed values, the entry box is a drop-down selector. If 'Use
- Initiative Parameter' is selected, a drop-down select is provided with the names of initiative
- parameters created on the **Initiative parameters** tab.
+ definition has a list of allowed values, the entry box is a dropdown list selector. If 'Use
+ Initiative Parameter' is selected, a dropdown list select is provided with the names of
+ initiative parameters created on the **Initiative parameters** tab.
:::image type="content" source="../media/create-and-manage/initiative-definition-3.png" alt-text="Screenshot of the options for allowed values for the allowed locations definition parameter on the policy parameters tab of the initiative definition page.":::
overview](../overview.md).
> creation of the initiative definition and has no impact on policy evaluation or the scope of > the initiative when assigned.
- Set the 'Allowed locations' _value type_ to 'Set value' and select 'East US 2' from the
- drop-down. For the two instances of the _Add or replace a tag on resources_ policy definitions,
- set the **Tag Name** parameters to 'Env' and 'CostCenter and the **Tag Value** parameters to
- 'Test' and 'Lab' as shown below. Leave the others as 'Default value'. Using the same definition
- twice in the initiative but with different parameters, this configuration adds or replace an
- 'Env' tag with the value 'Test' and a 'CostCenter' tag with the value of 'Lab' on resources in
- scope of the assignment.
+ Set the 'Allowed locations' _value type_ to 'Set value' and select 'East US 2' from the dropdown
+ list. For the two instances of the _Add or replace a tag on resources_ policy definitions, set
+ the **Tag Name** parameters to 'Env' and 'CostCenter' and the **Tag Value** parameters to 'Test'
+ and 'Lab' as shown below. Leave the others as 'Default value'. Using the same definition twice in
+ the initiative but with different parameters, this configuration adds or replaces an 'Env' tag
+ with the value 'Test' and a 'CostCenter' tag with the value of 'Lab' on resources in scope of the
+ assignment.
:::image type="content" source="../media/create-and-manage/initiative-definition-4.png" alt-text="Screenshot of the entered options for allowed values for the allowed locations definition parameter and values for both tag parameter sets on the policy parameters tab of the initiative definition page.":::
New-AzPolicySetDefinition -Name 'VMPolicySetDefinition' -Metadata '{"category":"
:::image type="content" source="../media/create-and-manage/assign-definition.png" alt-text="Screenshot of the 'Assign' button on the initiative definition page." border="false":::
- You can also right-click on the selected row or select the ellipsis at the end of the row for a
- contextual menu. Then select **Assign**.
+ You can also select and hold (or right-click) on the selected row or select the ellipsis at the
+ end of the row for a contextual menu. Then select **Assign**.
:::image type="content" source="../media/create-and-manage/select-right-click.png" alt-text="Screenshot of the context menu for an initiative to select the Assign functionality." border="false":::
New-AzPolicySetDefinition -Name 'VMPolicySetDefinition' -Metadata '{"category":"
being applied to them. - Initiative definition and Assignment name: Get Secure (pre-populated as name of initiative being assigned).
- - Description: This initiative assignment is tailored to enforce this group of policy
+ - Description: This initiative assignment is tailored to enforce this group of policy
definitions. - Policy enforcement: Leave as the default _Enabled_. - Assigned by: Automatically filled based on who is logged in. This field is optional, so custom
In this tutorial, you successfully accomplished the following tasks:
To learn more about the structures of policy definitions, look at this article: > [!div class="nextstepaction"]
-> [Azure Policy definition structure](../concepts/definition-structure.md)
+> [Azure Policy definition structure](../concepts/definition-structure.md)
governance Create Custom Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/create-custom-policy-definition.md
often enforce:
Whatever the business driver for creating a custom policy, the steps are the same for defining the new custom policy.
-Before creating a custom policy, check the [policy samples](../samples/index.md) to see if a policy
-that matches your needs already exists.
+Before creating a custom policy, check the [policy samples](../samples/index.md) to see whether a
+policy that matches your needs already exists.
The approach to creating a custom policy follows these steps:
governance Govern Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/govern-tags.md
your Azure resources into a taxonomy. When following
tags can be the basis for applying your business policies with Azure Policy or [tracking costs with Cost Management](../../../cost-management-billing/costs/cost-mgt-best-practices.md#tag-shared-resources). No matter how or why you use tags, it's important that you can quickly add, change, and remove those
-tags on your Azure resources. To see if your Azure resource supports tagging, see
+tags on your Azure resources. To see whether your Azure resource supports tagging, see
[Tag support](../../../azure-resource-manager/management/tag-support.md). Azure Policy's [Modify](../concepts/effects.md#modify) effect is designed to aid in the governance
In this tutorial, you learned about the following tasks:
To learn more about the structures of policy definitions, look at this article: > [!div class="nextstepaction"]
-> [Azure Policy definition structure](../concepts/definition-structure.md)
+> [Azure Policy definition structure](../concepts/definition-structure.md)
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/policy-as-code-github.md
on:
schedule: - cron: '0 8 * * *' # runs every morning 8am jobs:
- assess-policy-compliance:
+ assess-policy-compliance:
runs-on: ubuntu-latest steps: - name: Login to Azure
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/route-state-change-events.md
The preceding command uses the following information:
enforced on. It could range from a subscription to resource groups. Be sure to replace &lt;scope&gt; with the name of your resource group. The format for a resource group scope is `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>`.-- **Policy** – The policy definition ID, based on which you're using to create the assignment. In
+- **Policy** - The policy definition ID, based on which you're using to create the assignment. In
this case, it's the ID of policy definition _Require a tag on resource groups_. To get the policy definition ID, run this command: `az policy definition list --query "[?displayName=='Require a tag on resource groups']"`
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
In every query response, Azure Resource Graph adds two throttling headers:
- `x-ms-user-quota-resets-after` (hh:mm:ss): The time duration until a user's quota consumption is reset.
-When a security principal has access to more than 5000 subscriptions within the tenant or management
-group [query scope](./query-language.md#query-scope), the response is limited to the first 5000
-subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
+When a security principal has access to more than 5,000 subscriptions within the tenant or
+management group [query scope](./query-language.md#query-scope), the response is limited to the
+first 5,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`.
To illustrate how the headers work, let's look at a query response that has the header and values of `x-ms-user-quota-remaining: 10` and `x-ms-user-quota-resets-after: 00:00:03`.
async Task ExecuteQueries(IEnumerable<string> queries)
## Pagination
-Since Azure Resource Graph returns at most 1000 entries in a single query response, you may need to
+Since Azure Resource Graph returns at most 1,000 entries in a single query response, you may need to
[paginate](./work-with-data.md#paging-results) your queries to get the complete dataset you're looking for. However, some Azure Resource Graph clients handle pagination differently than others.
looking for. However, some Azure Resource Graph clients handle pagination differ
- Azure CLI / Azure PowerShell When using either Azure CLI or Azure PowerShell, queries to Azure Resource Graph are automatically
- paginated to fetch at most 5000 entries. The query results return a combined list of entries from
+ paginated to fetch at most 5,000 entries. The query results return a combined list of entries from
all paginated calls. In this case, depending on the number of entries in the query result, a single paginated query may consume more than one query quota. In the following examples, a single run of the query may consume up to five units of query quota:
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/query-language.md
properties from related resource types. Here is the list of tables available in
|MaintenanceResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.Maintenance`. | |PatchAssessmentResources|No |Includes resources _related_ to Azure Virtual Machines patch assessment. | |PatchInstallationResources|No |Includes resources _related_ to Azure Virtual Machines patch installation. |
-|PolicyResources |No |Includes resources _related_ to `Microsoft.PolicyInsights`. (**Preview**)|
+|PolicyResources |No |Includes resources _related_ to `Microsoft.PolicyInsights`. (**Preview**) |
|RecoveryServicesResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.DataProtection` and `Microsoft.RecoveryServices`. | |SecurityResources |Partial, join _to_ only. (preview) |Includes resources _related_ to `Microsoft.Security`. | |ServiceHealthResources |No |Includes resources _related_ to `Microsoft.ResourceHealth`. |
query scope is all resources, which includes
[Azure Lighthouse](../../../lighthouse/concepts/azure-delegated-resource-management.md) delegated resources, that the authenticated user can access. The new `managementGroupId` property takes the management group ID, which is different from the name of the management group. When
-`managementGroupId` is specified, resources from the first 5000 subscriptions in or under the
+`managementGroupId` is specified, resources from the first 5,000 subscriptions in or under the
specified management group hierarchy are included. `managementGroupId` can't be used at the same time as `subscriptions`.
query or the property name is interpreted incorrectly and doesn't provide the ex
- **bash** - `\`
- Example query that escapes the property _\$type_ in bash:
+ Example query that escapes the property _\$type_ in Bash:
```kusto where type=~'Microsoft.Insights/alertRules' | project name, properties.condition.\$type
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/work-with-data.md
When it's necessary to break a result set into smaller sets of records for proce
result set would exceed the maximum allowed value of _1000_ returned records, use paging. The [REST API](/rest/api/azureresourcegraph/resourcegraph(2019-04-01)/resources/resources) **QueryResponse** provides values to indicate whether a result set has been broken up:
-**resultTruncated** and **$skipToken**. **resultTruncated** is a boolean value that informs the
+**resultTruncated** and **$skipToken**. **resultTruncated** is a Boolean value that informs the
consumer whether there are more records not returned in the response. This condition can also be identified when the **count** property is less than the **totalRecords** property. **totalRecords** defines how many records match the query.
defines how many records that match the query.
column or when there are fewer resources available than a query is requesting. When **resultTruncated** is **true**, the **$skipToken** property isn't set.
-The following examples show how to **skip** the first 3000 records and return the **first** 1000
+The following examples show how to **skip** the first 3,000 records and return the **first** 1,000
records after those records skipped with Azure CLI and Azure PowerShell: ```azurecli-interactive
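# Sketch only (assumes the Resource Graph extension for Azure CLI and its --skip/--first
# parameters): skip the first 3,000 records, then return the next 1,000
az graph query -q "Resources | project name | order by name asc" --skip 3000 --first 1000
```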
governance First Query Azurecli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-azurecli.md
Title: "Quickstart: Your first Azure CLI query" description: In this quickstart, you follow the steps to enable the Resource Graph extension for Azure CLI and run your first query. Last updated 05/01/2021-+ # Quickstart: Run your first Resource Graph query using Azure CLI
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-dotnet.md
packages and run your first query. To learn more about the Resource Graph langua
query language details page. > [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
+> [Get more information about the query language](./concepts/query-language.md)
governance First Query Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/first-query-portal.md
# Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer
-The power of Azure Resource Graph is available directly in Azure portal through Azure Resource Graph
-Explorer. Resource Graph Explorer provides browsable information about the Azure Resource Manager
-resource types and properties that you can query. Resource Graph Explorer also provides a clean
-interface for working with multiple queries, evaluating the results, and even converting the results
-of some queries into a chart that can be pinned to an Azure dashboard.
+The power of Azure Resource Graph is available directly in the Azure portal through Azure Resource
+Graph Explorer. Resource Graph Explorer provides browsable information about the Azure Resource
+Manager resource types and properties that you can query. Resource Graph Explorer also provides a
+clean interface for working with multiple queries, evaluating the results, and even converting the
+results of some queries into a chart that can be pinned to an Azure dashboard.
At the end of this quickstart, you'll have used Azure portal and Resource Graph Explorer to run your first Resource Graph query and pinned the results to a dashboard.
your Azure portal workflow, try out these example dashboards.
1. Select and download the sample dashboard you want to evaluate.
-1. In Azure portal, select **Dashboard** from the left pane.
+1. In the Azure portal, select **Dashboard** from the left pane.
1. Select **Upload**, then locate and select the downloaded sample dashboard file. Then select **Open**.
can do so with the following steps:
1. Select **Dashboard** from the left pane.
-1. From the dashboard drop-down, select the sample Resource Graph dashboard you wish to delete.
+1. From the dashboard dropdown list, select the sample Resource Graph dashboard you wish to delete.
1. Select **Delete** from the dashboard menu at the top of the dashboard and select **Ok** to confirm.
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/how-to/get-resource-changes.md
The **resourceChanges** endpoint accepts the following parameters in the request
- **resourceId** \[required\]: The Azure resource to look for changes on. - **interval** \[required\]: A property with _start_ and _end_ dates for when to check for a change event using the **Zulu Time Zone (Z)**.-- **fetchPropertyChanges** (optional): A boolean property that sets if the response object includes
+- **fetchPropertyChanges** (optional): A Boolean property that specifies whether the response object includes
property changes. Example request body:
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/overview.md
For more information, see
## Running your first query Azure Resource Graph Explorer, part of Azure portal, enables running Resource Graph queries directly
-in Azure portal. Pin the results as dynamic charts to provide real-time dynamic information to your
-portal workflow. For more information, see
+in the Azure portal. Pin the results as dynamic charts to provide real-time dynamic information to
+your portal workflow. For more information, see
[First query with Azure Resource Graph Explorer](./first-query-portal.md). Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. The query is
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/advanced.md
Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachi
## <a name="mvexpand-cosmosdb"></a>List Cosmos DB with specific write locations
-The following query limits to Cosmos DB resources, uses `mv-expand` to expand the property bag for
-**properties.writeLocations**, then project specific fields and limit the results further to
+The following query limits the results to Azure Cosmos DB resources, uses `mv-expand` to expand the property bag
+for **properties.writeLocations**, then projects specific fields and limits the results further to
**properties.writeLocations.locationName** values matching either 'East US' or 'West US'. ```kusto
Resources
| join kind=leftouter( Resources | where type == 'microsoft.compute/virtualmachines/extensions'
- | extend
+ | extend
VMId = toupper(substring(id, 0, indexof(id, '/extensions'))), ExtensionName = name ) on $left.JoinID == $right.VMId
Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachi
## <a name="count-gcnoncompliant"></a>Count of non-compliant Guest Configuration assignments
-Displays a count of non-compliant machines per [Guest Configuration assignment reason](../../policy/how-to/determine-non-compliance.md#compliance-details-for-guest-configuration). Limits results to first 100 for performance.
+Displays a count of non-compliant machines per
+[Guest Configuration assignment reason](../../policy/how-to/determine-non-compliance.md#compliance-details-for-guest-configuration).
+Limits results to first 100 for performance.
```kusto GuestConfigurationResources
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/samples/starter.md
resources you're looking for.
We'll walk through the following starter queries: - [Count Azure resources](#count-resources)-- [Count key vault resources](#count-keyvaults)
+- [Count Key Vault resources](#count-keyvaults)
- [List resources sorted by name](#list-resources) - [Show all virtual machines ordered by name in descending order](#show-vms) - [Show first five virtual machines by name and their OS type](#show-sorted)
Search-AzGraph -Query "Resources | summarize count()"
-## <a name="count-keyvaults"></a>Count key vault resources
+## <a name="count-keyvaults"></a>Count Key Vault resources
This query uses `count` instead of `summarize` to count the number of records returned. Only key vaults are included in the count.
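A minimal sketch of such a query (the article's own starter sample may differ slightly):

```kusto
Resources
| where type =~ 'microsoft.keyvault/vaults'
| count
```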
advisorresources
solution = tostring(properties.shortDescription.solution), currency = tostring(properties.extendedProperties.savingsCurrency) | summarize
- dcount(resources),
+ dcount(resources),
bin(sum(savings), 0.01) by solution, currency | project solution, dcount_resources, sum_savings, currency
governance Shared Query Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/shared-query-azure-cli.md
created a shared query. To learn more about the Resource Graph language, continu
language details page. > [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
+> [Get more information about the query language](./concepts/query-language.md)
governance General https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/troubleshoot/general.md
There are several methods of dealing with throttled requests:
#### Issue
-Customers with access to more than 1000 subscriptions, including cross-tenant subscriptions with
+Customers with access to more than 1,000 subscriptions, including cross-tenant subscriptions with
[Azure Lighthouse](../../../lighthouse/overview.md), can't fetch data across all subscriptions in a single call to Azure Resource Graph. #### Cause
-Azure CLI and PowerShell forward only the first 1000 subscriptions to Azure Resource Graph. The REST
-API for Azure Resource Graph accepts a maximum number of subscriptions to perform the query on.
+Azure CLI and PowerShell forward only the first 1,000 subscriptions to Azure Resource Graph. The
+REST API for Azure Resource Graph accepts a maximum number of subscriptions to perform the query on.
#### Resolution
-Batch requests for the query with a subset of subscriptions to stay under the 1000 subscription
+Batch requests for the query with a subset of subscriptions to stay under the 1,000 subscription
limit. The solution is using the **Subscription** parameter in PowerShell. ```azurepowershell-interactive
channels for more support:
- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/azuresupport) ΓÇô the official Microsoft Azure
+- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure
account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. - If you need more help, you can file an Azure support incident. Go to the
- [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+ [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
governance Create Share Query https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/tutorials/create-share-query.md
Title: "Tutorial: Manage queries in Azure portal"
+ Title: "Tutorial: Manage queries in the Azure portal"
description: In this tutorial, you create a Resource Graph Query and share the new query with others in the Azure portal. Last updated 05/01/2021
follow these steps:
Select **Run query** to see the query results in the bottom pane. For more information about this query, see
- [Samples ΓÇô Count virtual machines by OS type](../samples/starter.md#count-os).
+ [Samples - Count virtual machines by OS type](../samples/starter.md#count-os).
1. Select **Save** or **Save as**, enter **Count VMs by OS** as the name, leave the type as **Private query**, and then select **Save** at the bottom of the **Save query** pane. The tab
use it. To create a new Shared query, follow these steps:
Select **Run query** to see the query results in the bottom pane. For more information about this query, see
- [Samples ΓÇô Count virtual machines by OS type](../samples/starter.md#count-os).
+ [Samples - Count virtual machines by OS type](../samples/starter.md#count-os).
1. Select **Save** or **Save as**.
Explorer**.
The Resource Graph query is listed alongside other resources that are part of a resource group. Selecting the Resource Graph query opens the page for that query. The ellipsis and shortcut menu
-options (triggered by right-clicking) work the same as on the Resource Graph query page.
+options, triggered by select and hold (or right-click), work the same as on the Resource Graph query
+page.
### Query Resource Graph
longer want them.
## Next steps
-In this tutorial, you've created Private and Shared queries. To learn more about the Resource graph
+In this tutorial, you've created Private and Shared queries. To learn more about the Resource Graph
language, continue to the query language details page. > [!div class="nextstepaction"]
hdinsight Hdinsight Migrate Granular Access Cluster Configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-migrate-granular-access-cluster-configurations.md
Title: Granular role-based access Azure HDInsight cluster configurations description: Learn about the changes required as part of the migration to granular role-based access for HDInsight cluster configurations.-- Last updated 04/20/2020
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/overview-of-search.md
Previously updated : 4/21/2021 Last updated : 5/3/2021 # Overview of FHIR search
-The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects to searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification.
+The FHIR specification defines the fundamentals of search for FHIR resources. This article will guide you through some key aspects of searching resources in FHIR. For complete details about searching FHIR resources, refer to [Search](https://www.hl7.org/fhir/search.html) in the HL7 FHIR Specification. Throughout this article, we will give examples of search syntax. Each search will be against your FHIR server, which typically has a URL of https://`<FHIRSERVERNAME>`.azurewebsites.net. In the examples, we will use the placeholder {{FHIR_URL}} for this URL.
FHIR searches can be against a specific resource type, a specified [compartment](https://www.hl7.org/fhir/compartmentdefinition.html), or all resources. The simplest way to execute a search in FHIR is to use a `GET` request. For example, if you want to pull all patients in the database, you could use the following request:
-`GET {{FHIR URL}}/Patient`
+```rest
+GET {{FHIR_URL}}/Patient
+```
You can also search using `POST`, which is useful if the query string is too long. To search using `POST`, the search parameters can be submitted as a form body. This allows for longer, more complex series of query parameters that might be difficult to see and understand in a query string.
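As an illustrative sketch (not the article's own sample), a `POST`-based search submits the parameters as a form body:

```rest
POST {{FHIR_URL}}/Patient/_search
Content-Type: application/x-www-form-urlencoded

name=Jane
```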
-If the search request is successful, youΓÇÖll receive a FHIR bundle response with the type `searchset`. If the search fails, youΓÇÖll find these details in the `OperationOutcome` to help you understand why the search failed.
+If the search request is successful, you'll receive a FHIR bundle response with the type `searchset`. If the search fails, you'll find the error details in the `OperationOutcome` to help you understand why the search failed.
-In the following sections, weΓÇÖll cover the various aspects involved in searching. Once youΓÇÖve reviewed these details, refer to our **Samples page** that has examples of searches that you can make in the Azure API for FHIR.
+In the following sections, we'll cover the various aspects involved in searching. Once you've reviewed these details, refer to our [samples page](search-samples.md) that has examples of searches that you can make in the Azure API for FHIR.
## Search parameters
-When you do a search, consider searching based on various attributes of the resource. These attributes are called search parameters. Each resource has a set of defined search parameters. The search parameter must be defined and indexed in the database for you to successfully search against it.
+When you do a search, you'll search based on various attributes of the resource. These attributes are called search parameters. Each resource has a set of defined search parameters. The search parameter must be defined and indexed in the database for you to successfully search against it.
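For example, a sketch of a search that uses the standard `name` parameter on `Patient`:

```rest
GET {{FHIR_URL}}/Patient?name=Jane
```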
-Each search parameter has a defined data type. The Azure API for FHIR supports all [data types](https://www.hl7.org/fhir/search.html#ptypes) except the type **special**:
+Each search parameter has a defined [data type](https://www.hl7.org/fhir/search.html#ptypes). The support for the various data types is outlined below:
-| **Search parameter type** | **Supported - PaaS** | **Supported - OSS (SQL)** | **Supported - OSS (Cosmos DB)** |
-| - | -- | - | - |
+| **Search parameter type** | **Supported - PaaS** | **Supported - OSS (SQL)** | **Supported - OSS (Cosmos DB)** | **Comment**|
+| - | -- | - | - ||
| number | Yes | Yes | Yes | | date | Yes | Yes | Yes | | string | Yes | Yes | Yes | | token | Yes | Yes | Yes | | reference | Yes | Yes | Yes |
-| composite | Yes | Yes | Yes |
+| composite | Partial | Partial | Partial | The list of supported composite types is described later in this article |
| quantity | Yes | Yes | Yes | | uri | Yes | Yes | Yes | | special | No | No | No |
Each search parameter has a defined data type. The Azure API for FHIR supports a
There are [common search parameters](https://www.hl7.org/fhir/search.html#all) that apply to all resources. These are listed below, along with their support within the Azure API for FHIR:
-| **Common search parameter** | **Supported - PaaS** | **Supported - OSS (SQL)** | **Supported - OSS (Cosmos DB)** | **Comment** |
-| -- | -- | - | - | |
-| _id | Yes | Yes | Yes | |
-| _lastUpdated | Yes | Yes | Yes | |
-| _tag | Yes | Yes | Yes | |
-| _type | Yes | Yes | Yes | |
-| _security | Yes | Yes | Yes | |
-| _profile | Yes | Yes | Yes | **Note**: If you created your R4 database before February 20, 2021, youΓÇÖll need to run a reindexing job to enable **_profile**. |
-| _text | No | No | No | |
-| _content | No | No | No | |
-| _has | Partial | Partial | Yes | |
-| _query | No | No | No | |
-| _filter | No | No | No | |
-| _list | No | No | No | |
-
-### Resource specific parameters
-
-With the Azure API for FHIR, we support almost all resource specific search parameters defined by the FHIR specification. The only search parameters we donΓÇÖt support are available in the links below:
+| **Common search parameter** | **Supported - PaaS** | **Supported - OSS (SQL)** | **Supported - OSS (Cosmos DB)** | **Comment** |
+| -- | -- | - | - | |
+| _id | Yes | Yes | Yes | |
+| _lastUpdated | Yes | Yes | Yes | |
+| _tag | Yes | Yes | Yes | |
+| _type | Yes | Yes | Yes | |
+| _security | Yes | Yes | Yes | |
+| _profile | Yes | Yes | Yes | If you created your R4 database before February 20, 2021, you'll need to run a [reindex job](how-to-run-a-reindex.md) to enable **_profile**.|
+| _has | Partial | Yes | Partial | Support for _has is in MVP in the Azure API for FHIR and the OSS version backed by Cosmos DB. More details are included under the chaining section below. |
+| _query | No | No | No | |
+| _filter | No | No | No | |
+| _list | No | No | No | |
+| _text | No | No | No | |
+| _content | No | No | No | |
+
+### Resource-specific parameters
+
+With the Azure API for FHIR, we support almost all [resource-specific search parameters](https://www.hl7.org/fhir/searchparameter-registry.html) defined by the FHIR specification. The only search parameters we don't support are listed in the links below:
* [STU3 Unsupported Search Parameters](https://github.com/microsoft/fhir-server/blob/main/src/Microsoft.Health.Fhir.Core/Data/Stu3/unsupported-search-parameters.json)
With the Azure API for FHIR, we support almost all resource specific search para
You can also see the current support for search parameters in the [FHIR Capability Statement](https://www.hl7.org/fhir/capabilitystatement.html) with the following request:
-`GET {{FHIR URL}}/metadata`
+```rest
+GET {{FHIR_URL}}/metadata
+```
To see the search parameters in the capability statement, navigate to `CapabilityStatement.rest.resource.searchParam` to see the search parameters for each resource and `CapabilityStatement.rest.searchParam` to find the search parameters for all resources. > [!NOTE]
-> The Azure API for FHIR does not automatically create or index any support search parameters that are not defined by the FHIR specification. However, we do provide support for you to to define your own search parameters.
+> The Azure API for FHIR does not automatically create or index any search parameters that are not defined by the FHIR specification. However, we do provide support for you to define your own [search parameters](how-to-do-custom-search.md).
### Composite search parameters
+Composite search allows you to search against value pairs. For example, if you were searching for a height observation where the person was 60 inches, you would want to make sure that a single component of the observation contained the code of height **and** the value of 60. You wouldn't want to get an observation where a weight of 60 and height of 48 was stored, even though the observation would have entries that qualified for value of 60 and code of height, just in different component sections.
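As an illustrative sketch (the LOINC code here is only an example), a composite parameter joins the code and the value with a `$` so that both must match within the same component:

```rest
GET {{FHIR_URL}}/Observation?component-code-value-quantity=http://loinc.org|8302-2$60
```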
With the Azure API for FHIR, we support the following search parameter type pairings:
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
| :type (reference) | Yes | Yes | Yes | | :not | Yes | Yes | Yes | | :below (uri) | Yes | Yes | Yes |
-| :above (uri) | No | NO | No |
-| :in (token) | No | NO | No |
-| :below (token) | No | NO | No |
-| :above (token) | No | NO | No |
-| :not-in (token) | No | NO | No |
+| :above (uri) | No | No | No |
+| :in (token) | No | No | No |
+| :below (token) | No | No | No |
+| :above (token) | No | No | No |
+| :not-in (token) | No | No | No |
For search parameters that have a specific order (numbers, dates, and quantities), you can use a [prefix](https://www.hl7.org/fhir/search.html#prefix) on the parameter to help with finding matches. The Azure API for FHIR supports all prefixes. ### Search result parameters---
-To help manage the returned resources, there are other search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
+To help manage the returned resources, there are search result parameters that you can use in your search. For details on how to use each of the search result parameters, refer to the [HL7](https://www.hl7.org/fhir/search.html#return) website.
| **Search result parameters** | **Supported - PaaS** | **Supported - OSS (SQL)** | **Supported - OSS (Cosmos DB)** | **Comments** | | - | -- | - | - | --| | _elements | Yes | Yes | Yes | Issue [1256](https://github.com/microsoft/fhir-server/issues/1256) | | _count | Yes | Yes | Yes | _count is limited to 1000 resources. If it's set higher than 1000, only 1000 will be returned and a warning will be returned in the bundle. |
-| _include | Yes | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB does not include :iterate support [(#1313)](https://github.com/microsoft/fhir-server/issues/1313). |
-| _revinclude | Yes | Yes | Yes | Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB does not include :iterate support [(#1313)](https://github.com/microsoft/fhir-server/issues/1313). Issue [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
+| _include | Yes | Yes | Yes | Included items are limited to 100. _include on PaaS and OSS on Cosmos DB do not include :iterate support [(#1313)](https://github.com/microsoft/fhir-server/issues/1313). |
+| _revinclude | Yes | Yes | Yes | Included items are limited to 100. _revinclude on PaaS and OSS on Cosmos DB do not include :iterate support [(#1313)](https://github.com/microsoft/fhir-server/issues/1313). Issue [#1319](https://github.com/microsoft/fhir-server/issues/1319) |
| _summary | Yes | Yes | Yes | | | _total | Partial | Partial | Partial | _total=none and _total=accurate |
-| _sort | Partial | Partial | Partial | sort=_lastUpdated is supported |
+| _sort | Partial | Partial | Partial | sort=_lastUpdated is supported. For Azure API for FHIR and OSS Cosmos DB databases created after April 20, 2021, sort is also supported on first name, last name, and clinical date. |
| _contained | No | No | No | | | _containedType | No | No | No | | | _score | No | No | No | | By default, the Azure API for FHIR is set to lenient handling. This means that the server will ignore any unknown or unsupported parameters. If you want to use strict handling, you can use the **Prefer** header and set `handling=strict`. - ## Chained & reverse chained searching A [chained search](https://www.hl7.org/fhir/search.html#chaining) allows you to search using a search parameter on a resource referenced by another resource. For example, if you want to find encounters where the patient's name is Jane, use: `GET {{FHIR_URL}}/Encounter?subject:Patient.name=Jane`
-Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chaining, refer to the [FHIR search examples](search-samples.md) page.
+Similarly, you can do a reverse chained search. This allows you to get resources where you specify criteria on other resources that refer to them. For more examples of chained and reverse chained search, refer to the [FHIR search examples](search-samples.md) page.
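For instance, a sketch modeled on the `_has` syntax from the HL7 specification (the LOINC code is only an example) returns patients that are referenced by an Observation with a specific code:

```rest
GET {{FHIR_URL}}/Patient?_has:Observation:patient:code=55284-4
```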
-**Note**: In the Azure API for FHIR and the open source backed by Cosmos DB, there's a limitation where each subquery required for the chained and reverse chained searches will only return 100 items. If there are more than 100 items found, youΓÇÖll receive the following error message:
-
-ΓÇ£Subqueries in a chained expression can't return more than 100 results, please use a more selective criteria.ΓÇ¥
-
-To get a successful query, youΓÇÖll need to be more specific in what you are looking for.
+> [!NOTE]
+> In the Azure API for FHIR and the open source version backed by Cosmos DB, there's a limitation where each subquery required for the chained and reverse chained searches will only return 100 items. If there are more than 100 items found, you'll receive the following error message: "Subqueries in a chained expression can't return more than 100 results, please use a more selective criteria." To get a successful query, you'll need to be more specific in what you are looking for.
## Pagination
-As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are additional matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results.
+As mentioned above, the results from a search will be a paged bundle. By default, the search will return 10 results per page, but this can be increased (or decreased) by specifying `_count`. Within the bundle, there will be a self link that contains the current result of the search. If there are additional matches, the bundle will contain a next link. You can continue to use the next link to get the subsequent pages of results. `_count` is limited to a maximum of 1,000 items.
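For example, a sketch of requesting larger pages with `_count`:

```rest
GET {{FHIR_URL}}/Patient?_count=100
```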
Currently, the Azure API for FHIR only supports the next link in bundles, and it doesn't support first, last, or previous links. ## Next steps
-Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search tools.
+Now that you've learned about the basics of search, see the search samples page for details about how to search using different search parameters, modifiers, and other FHIR search scenarios.
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/search-samples.md
Previously updated : 04/20/2021- Last updated : 05/03/2021+ # FHIR search examples
-Below are some examples of using FHIR search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a POST request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
+Below are some examples of using FHIR search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
## Search result parameters ### _include
-> [!NOTE]
-> **_include** and **_revinclude** is limited to 100 items.
- `_include` searches across resources for the ones that include the specified parameter of the resource. For example, you can search across `MedicationRequest` resources to find only the ones that include information about the prescriptions for a specific patient, which is the `reference` parameter `patient`: ```rest
Below are some examples of using FHIR search operations, including search parame
```
+> [!NOTE]
+> **_include** and **_revinclude** are limited to 100 items.
+ ### _revinclude `_revinclude` is an additional search on top of `_include`, searching across the resources that reference the search results from `_include`. For example, you can search `MedicationRequest` resources. For each resource returned, search for `DetectedIssue` resources that show the clinical issues with the `patient`:
industry Configure Rules Alerts In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/configure-rules-alerts-in-azure-farmbeats.md
Title: Configure rules and manage alerts description: Describes how to configure rules and manage alerts in FarmBeats-+ Last updated 11/04/2019-+ # Configure rules and manage alerts
industry Disaster Recovery For Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/disaster-recovery-for-farmbeats.md
Title: Disaster recovery for FarmBeats description: This article describes how data recovery protects from losing your data.-+ Last updated 04/13/2020-+ # Disaster recovery for FarmBeats
industry Generate Maps In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/generate-maps-in-azure-farmbeats.md
Title: Generate maps description: This article describes how to generate maps in Azure FarmBeats.-+ Last updated 11/04/2019-+ # Generate maps
industry Generate Soil Moisture Map In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/generate-soil-moisture-map-in-azure-farmbeats.md
Title: Generate Soil Moisture Heatmap description: Describes how to generate Soil Moisture Heatmap in Azure FarmBeats-+ Last updated 11/04/2019-+ # Generate Soil Moisture Heatmap
industry Get Drone Imagery In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/get-drone-imagery-in-azure-farmbeats.md
Title: Get drone imagery description: This article describes how to get drone imagery from partners.-+ Last updated 11/04/2019-+ # Get drone imagery from drone partners
industry Get Sensor Data From Sensor Partner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md
Title: Get sensor data from the partners description: This article describes how to get sensor data from partners.-+ Last updated 11/04/2019-+ # Get sensor data from sensor partners
industry Imagery Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/imagery-partner-integration-in-azure-farmbeats.md
Title: Imagery partner integration description: This article describes imagery partner integration.-+ Last updated 11/04/2019-+
industry Ingest Historical Telemetry Data In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md
Title: Ingest historical telemetry data description: This article describes how to ingest historical telemetry data.-+ Last updated 11/04/2019-+
industry Integration Patterns In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/integration-patterns-in-azure-farmbeats.md
Title: Azure FarmBeats Architecture description: Describes the architecture of Azure FarmBeats-+ Last updated 11/04/2019-+ # Integration patterns
industry Manage Farms In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/manage-farms-in-azure-farmbeats.md
Title: Manage Farms description: Describes how to manage farms-+ Last updated 11/04/2019-+
industry Manage Users In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/manage-users-in-azure-farmbeats.md
Title: Manage users in Azure FarmBeats description: This article describes how to manage users in Azure FarmBeats.-+ Last updated 12/02/2019-+
industry Overview Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/overview-azure-farmbeats.md
Title: What is Azure FarmBeats description: Provides an overview of Azure FarmBeats-+ Last updated 11/04/2019-+
industry References For Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/references-for-azure-farmbeats.md
Title: References for Azure FarmBeats description: Explore reference links to Azure FarmBeats articles, such as the FarmBeats REST API and FarmBeats Data hub Swagger.-+ Last updated 11/04/2019-+ # Reference information for FarmBeats
industry Sensor Partner Integration In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/sensor-partner-integration-in-azure-farmbeats.md
Title: Sensor partner integration description: This article describes sensor partner integration.-+ Last updated 11/04/2019-+ # Sensor partner integration
industry Troubleshoot Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/troubleshoot-azure-farmbeats.md
Title: Troubleshoot Azure FarmBeats description: This article describes how to troubleshoot Azure FarmBeats.-+ Last updated 11/04/2019-+ # Troubleshoot Azure FarmBeats
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
description: How to use the new data export to export your IoT data to Azure and
Previously updated : 03/24/2021 Last updated : 04/09/2021
This article describes how to use the new data export feature in Azure IoT Centr
For example, you can: -- Continuously export telemetry, property changes, device lifecycle, and device template lifecycle data in JSON format in near-real time.
+- Continuously export telemetry, property changes, device connectivity, device lifecycle, and device template lifecycle data in JSON format in near-real time.
- Filter the data streams to export data that matches custom conditions. - Enrich the data streams with custom values and property values from the device. - Send the data to destinations such as Azure Event Hubs, Azure Service Bus, Azure Blob Storage, and webhook endpoints.
Now that you have a destination to export your data to, set up data export in yo
| :- | :- | :-- | | Telemetry | Export telemetry messages from devices in near-real time. Each exported message contains the full contents of the original device message, normalized. | [Telemetry message format](#telemetry-format) | | Property changes | Export changes to device and cloud properties in near-real time. For read-only device properties, changes to the reported values are exported. For read-write properties, both reported and desired values are exported. | [Property change message format](#property-changes-format) |
+ | Device connectivity | Export device connected and disconnected events. | [Device connectivity message format](#device-connectivity-changes-format) |
| Device lifecycle | Export device registered and deleted events. | [Device lifecycle changes message format](#device-lifecycle-changes-format) | | Device template lifecycle | Export published device template changes including created, updated, and deleted. | [Device template lifecycle changes message format](#device-template-lifecycle-changes-format) |
Now that you have a destination to export your data to, set up data export in yo
|--|| |Telemetry|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain telemetry that meets the filter conditions</li><li>Filter stream to only contain telemetry from devices with properties matching the filter conditions</li><li>Filter stream to only contain telemetry that have *message properties* meeting the filter condition. *Message properties* (also known as *application properties*) are sent in a bag of key-value pairs on each telemetry message optionally sent by devices that use the device SDKs. To create a message property filter, enter the message property key you're looking for, and specify a condition. Only telemetry messages with properties that match the specified filter condition are exported. [Learn more about application properties from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md) </li></ul>| |Property changes|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain property changes that meet the filter conditions</li></ul>|
+ |Device connectivity|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain changes from devices with properties matching the filter conditions</li></ul>|
|Device lifecycle|<ul><li>Filter by device name, device ID, and device template</li><li>Filter stream to only contain changes from devices with properties matching the filter conditions</li></ul>| |Device template lifecycle|<ul><li>Filter by device template</li></ul>|
The following example shows an exported property change message received in Azur
} } ```
+## Device connectivity changes format
+Each message or record represents a connectivity event encountered by a single device. Information in the exported message includes:
+
+- `applicationId`: The ID of the IoT Central application.
+- `messageSource`: The source for the message - `deviceConnectivity`.
+- `messageType`: Either `connected` or `disconnected`.
+- `deviceId`: The ID of the device that was changed.
+- `schema`: The name and version of the payload schema.
+- `templateId`: The ID of the device template associated with the device.
+- `enqueuedTime`: The time at which this change occurred in IoT Central.
+- `enrichments`: Any enrichments set up on the export.
+
+For Event Hubs and Service Bus, IoT Central exports new message data to your event hub or Service Bus queue or topic in near real time. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, `iotcentral-message-source`, and `iotcentral-message-type` properties are included automatically.
+
+For Blob storage, messages are batched and exported once per minute.
+
+The following example shows an exported device connectivity message received in Azure Blob Storage.
+
+```json
+{
+ "applicationId": "1dffa667-9bee-4f16-b243-25ad4151475e",
+ "messageSource": "deviceConnectivity",
+ "messageType": "connected",
+ "deviceId": "1vzb5ghlsg1",
+ "schema": "default@v1",
+ "templateId": "urn:qugj6vbw5:___qbj_27r",
+ "enqueuedTime": "2021-04-05T22:26:55.455Z",
+ "enrichments": {
+ "userSpecifiedKey": "sampleValue"
+ }
+}
+
+```
## Device lifecycle changes format Each message or record represents one change to a single device. Information in the exported message includes:
iot-hub Iot Hub Public Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-public-network-access.md
Last updated 03/22/2021
# Managing public network access for your IoT hub
-To restrict access to only [private endpoint for your IoT hub in your VNet](virtual-network-support.md), disable public network access. To do so, use Azure portal or the `publicNetworkAccess` API.
+To restrict access to only the [private endpoint for your IoT hub in your VNet](virtual-network-support.md), disable public network access. To do so, use the Azure portal or the `publicNetworkAccess` API. You can also allow public access by using the portal or the `publicNetworkAccess` API.
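As a rough sketch of the non-portal route (using the generic Azure CLI resource commands rather than a dedicated IoT Hub command; the hub and resource group names are placeholders), you could set the property like this:

```azurecli
# Hypothetical example: disable public network access on an IoT hub by setting the
# publicNetworkAccess property through the generic resource update command.
az resource update \
  --resource-type "Microsoft.Devices/IotHubs" \
  --name "MyIoTHub" \
  --resource-group "MyResourceGroup" \
  --set properties.publicNetworkAccess=Disabled
```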
## Turn off public network access using Azure portal
-1. Visit [Azure portal](https://portal.azure.com)
-2. Navigate to your IoT hub.
+1. Visit the [Azure portal](https://portal.azure.com)
+2. Navigate to your IoT hub. Go to **Resource Groups**, choose the appropriate group, and select your IoT Hub.
3. Select **Networking** from the left-side menu. 4. Under "Allow public network access to", select **Disabled** 5. Select **Save**.
To restrict access to only [private endpoint for your IoT hub in your VNet](virt
To turn on public network access, select **All networks**, then **Save**.
-## Accessing the IoT Hub after disabling public network access
+### Accessing the IoT Hub after disabling public network access
After public network access is disabled, the IoT Hub is only accessible through [its VNet private endpoint using Azure private link](virtual-network-support.md). This restriction includes accessing through Azure portal, because API calls to the IoT Hub service are made directly using your browser with your credentials.
-## IoT Hub endpoint, IP address, and ports after disabling public network access
+### IoT Hub endpoint, IP address, and ports after disabling public network access
IoT Hub is a multi-tenant Platform-as-a-Service (PaaS), so different customers share the same pool of compute, networking, and storage hardware resources. IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices over wide-area networks and on-premises networks can all access it. Disabling public network access is enforced on a specific IoT hub resource, ensuring isolation. To keep the service active for other customer resources using the public path, its public endpoint remains resolvable, IP addresses discoverable, and ports remain open. This is not a cause for concern as Microsoft integrates multiple layers of security to ensure complete isolation between tenants. To learn more, see [Isolation in the Azure Public Cloud](../security/fundamentals/isolation-choices.md#tenant-level-isolation).
-## IP Filter
+### IP Filter
If public network access is disabled, all [IP Filter](iot-hub-ip-filtering.md) rules are ignored. This is because all IPs from the public internet are blocked. To use IP Filter, use the **Selected IP ranges** option.
-## Bug fix with built-in Event Hub compatible endpoint
+### Bug fix with built-in Event Hub compatible endpoint
There is a bug with IoT Hub where the [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md) continues to be accessible via public internet when public network access to the IoT Hub is disabled. To learn more and contact us about this bug, see [Disabling public network access for IoT Hub disables access to built-in Event Hub endpoint](https://azure.microsoft.com/updates/iot-hub-public-network-access-bug-fix).+
+## Turn on network access using Azure portal
+
+1. Visit the [Azure portal](https://portal.azure.com)
+2. Navigate to your IoT hub. Go to **Resource Groups**, choose the appropriate group, and select your hub.
+3. Select **Networking** from the left-side menu.
+4. Under "Allow public network access to", select **Selected IP Ranges**.
+5. In the **IP Filter** dialog that opens, select **Add your client IP address** and enter a name and an address range.
+6. Select **Save**. If the button is greyed out, make sure your client IP address is already added as an IP filter.
++
+### Turn on all network ranges
+
+1. Navigate to your IoT hub. Go to **Resource Groups**, choose the appropriate group, and select your hub.
+1. Select **Networking** from the left-side menu.
+1. Under "Allow public network access to", select **All networks**.
+1. Select **Save**.
+
+### Check IoT hub access using Cloud Shell
+
+You can check IoT hub access by using Azure Cloud Shell. Make sure that you've turned on all network ranges and then issue the following commands. Replace "SubscriptionName" with the name of your subscription and "MyIoTHub" with the name of your hub.
+
+```azurecli
+ az account set -s "SubscriptionName"
+ az iot hub device-identity list --hub-name "MyIoTHub"
+```
+
+```azurepowershell
+ Set-AzContext -Name "SubscriptionName"
+ Get-AzIoTHubDevice -IotHubName "MyIoTHub"
+```
+### Troubleshooting
+
+If you have trouble accessing your IoT hub, your network configuration could be the problem. For example, if you see the following error message when trying to access the IoT devices page, check the **Networking** page to see if public network access is disabled or restricted to selected IP ranges.
+
+```
+ Unable to retrieve devices. Please ensure that your network connection is online and network settings allow connections from your IP address.
+```
+
+To get access to the IoT hub, request permission from your IT administrator to add your IP address in the IP address range or to enable public network access to all networks. If that fails to resolve the issue, check your local network settings or contact your local network administrator to fix connectivity to IoT Hub. For example, sometimes a proxy in the local network can interfere with access to IoT Hub.
+
+If the preceding commands do not work or you cannot turn on all network ranges, contact Microsoft support.
key-vault Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/disaster-recovery-guidance.md
Azure Key Vault features multiple layers of redundancy to make sure that your ke
> [!NOTE] > This guide applies to vaults. Managed HSM pools use a different high availability and disaster recovery model. See [Managed HSM Disaster Recovery Guide](../managed-hsm/disaster-recovery-guide.md) for more information.
-The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography to maintain high durability of your keys and secrets. For details about specific region pairs, see [Azure paired regions](../../best-practices-availability-paired-regions.md). The exception to the paired regions model is Brazil South, which allows only the option to keep data resident within Brazil South. Brazil South uses zone redundant storage (ZRS) to replicate your data three times within the single location/region. For AKV Premium, only 2 of the 3 regions are used to replicate data from the HSM's.
+The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography to maintain high durability of your keys and secrets. For details about specific region pairs, see [Azure paired regions](../../best-practices-availability-paired-regions.md). The exception to the paired regions model is single-region geographies, for example Brazil South and Qatar. Such regions allow only the option to keep data resident within the same region. Both Brazil South and Qatar use zone redundant storage (ZRS) to replicate your data three times within the single location/region. For AKV Premium, only 2 of the 3 regions are used to replicate data from the HSMs.
If individual components within the key vault service fail, alternate components within the region step in to serve your request to make sure that there is no degradation of functionality. You don't need to take any action to start this process, it happens automatically and will be transparent to you.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/quick-create-template.md
Two resources are defined in the template:
More Azure Key Vault template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Keyvault&pageNumber=1&sort=Popular).
+## Parameters and definitions
+
+|**Parameter**|**Definition**|
+|--|--|
+|**Keysize**|Specifies operations that can be performed by using the key. If you do not specify this parameter, all operations can be performed. The acceptable values for this parameter are a comma-separated list of key operations as defined by the [JSON Web Key (JWK) specification](https://tools.ietf.org/html/draft-ietf-jose-json-web-key-41): ["sign", "verify", "encrypt", "decrypt", "wrapKey", "unwrapKey"]|
+|**CurveName**| Elliptic curve name for EC key type. See [JsonWebKeyCurveName](https://docs.microsoft.com/rest/api/keyvault/createkey/createkey#jsonwebkeycurvename)|
+|**Kty**|The type of key to create. For valid values, see [JsonWebKeyType](https://docs.microsoft.com/rest/api/keyvault/createkey/createkey#jsonwebkeytype)|
+|**Tags**| Application specific metadata in the form of key-value pairs.|
+|**nbf**| Specifies the time, as a DateTime object, before which the key cannot be used. The format is a Unix timestamp (the number of seconds after the Unix epoch of January 1, 1970 UTC).|
+|**exp**| Specifies the expiration time, as a DateTime object. The format is a Unix timestamp (the number of seconds after the Unix epoch of January 1, 1970 UTC).|
+||
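Because **nbf** and **exp** take Unix timestamps, you can compute the value up front. A small sketch using standard GNU `date`, as available in the Azure Cloud Shell bash environment:

```bash
# Print the Unix timestamp for a UTC date, for use as the nbf or exp parameter value
date -u -d "2023-01-01T00:00:00Z" +%s   # prints 1672531200
```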
+++ ## Deploy the template You can use [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), Azure PowerShell, Azure CLI, or REST API. To learn about deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
In this quickstart, you created a key vault and a key using an ARM template, and
- Read an [Overview of Azure Key Vault](../general/overview.md) - Learn more about [Azure Resource Manager](../../azure-resource-manager/management/overview.md)-- Review the [Key Vault security overview](../general/security-features.md)
+- Review the [Key Vault security overview](../general/security-features.md)
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
This quickstart will show you how to create an Ubuntu 18.04 Data Science Virtual
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F101-vm-ubuntu-DSVM-GPU-or-CPU%2Fazuredeploy.json)
+[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fdatascience%2Fvm-ubuntu-DSVM-GPU-or-CPU%2Fazuredeploy.json)
## Prerequisites
If your environment meets the prerequisites and you're familiar with using ARM t
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/101-vm-ubuntu-DSVM-GPU-or-CPU/). The following resources are defined in the template:
read -p "Enter the Azure location (e.g., centralus):" location &&
read -p "Enter the authentication type (must be 'password' or 'sshPublicKey') :" authenticationType && read -p "Enter the login name for the administrator account (may not be 'admin'):" adminUsername && read -p "Enter administrator account secure string (value of password or ssh public key):" adminPasswordOrKey &&
-templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-ubuntu-DSVM-GPU-or-CPU/azuredeploy.json" &&
+templateUri="https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/application-workloads/datascience/vm-ubuntu-DSVM-GPU-or-CPU/azuredeploy.json" &&
az group create --name $resourceGroupName --location "$location" && az deployment group create --resource-group $resourceGroupName --template-uri $templateUri --parameters adminUsername=$adminUsername authenticationType=$authenticationType adminPasswordOrKey=$adminPasswordOrKey && echo "Press [ENTER] to continue ..." &&
marketplace Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/anomaly-detection.md
description: Learn how automatic anomaly detection for metered billing helps ins
Previously updated : 2/18/2021 Last updated : 5/03/2021
To help ensure that your customers are billed correctly, use the **Anomaly detec
[![Illustrates the Partner Center unacknowledged anomalies list on the Usage page.](./media/anomaly-detection/unacknowledged-anomalies.png)](./media/anomaly-detection/unacknowledged-anomalies.png#lightbox) ***Figure 4: Partner Center unacknowledged anomalies list***
+ By default, flagged anomalies that have an estimated financial impact greater than 100 USD are shown in Partner Center. However, you can select **All** from the **Estimated financial impact of anomaly** list to see all flagged anomalies.
+
+ :::image type="content" source="./media/anomaly-detection/all-anomalies.png" alt-text="Screenshot of all metered usage anomalies for the selected offer.":::
+ 1. You'll also see an anomaly action log that shows the actions you took on overage usages. In the action log, you can see which overage usage events were marked as genuine or false. [![Illustrates the Anomaly action log on the Usage page.](./media/anomaly-detection/anomaly-action-log.png)](./media/anomaly-detection/anomaly-action-log.png#lightbox)
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
**IPv6** | Not supported. **Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure. **Simultaneous replication** | Up to 300 simultaneously replicating VMs per vCenter Server with 1 appliance. Up to 500 simultaneously replicating VMs per vCenter Server when an additional [scale-out appliance](./how-to-scale-out-for-migration.md) is deployed.
-**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04. Review the list of [required packages](../virtual-machines/extensions/agent-linux.md#requirements)) for these Linux operating systems.
+**Automatic installation of Azure VM agent (Windows and Linux Agent)** | Supported for Windows Server 2008 R2 onwards. <br/> Supported for RHEL6, RHEL7, CentOS7, Ubuntu 14.04, Ubuntu 16.04, Ubuntu18.04.
+
+> [!Note]
+> For Linux VMs, ensure that the following packages are installed for successful installation of Microsoft Azure Linux agent (waagent):
+>- Python 2.6+
+>- OpenSSL 1.0+
+>- OpenSSH 5.3+
+>- Filesystem utilities: sfdisk, fdisk, mkfs, parted
+>- Password tools: chpasswd, sudo
+>- Text processing tools: sed, grep
+>- Network tools: ip-route
+>- Enable rc.local service on the source VM
> [!TIP] > Using the Azure portal you'll be able to select up to 10 VMs at a time to configure replication. To replicate more VMs you can use the portal and add the VMs to be replicated in multiple batches of 10 VMs, or use the Azure Migrate PowerShell interface to configure replication. Ensure that you don't configure simultaneous replication on more than the maximum supported number of VMs for simultaneous replications.
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
Here are some concepts to be familiar with when using virtual networks with Post
* **Network security groups (NSG)** - Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. See [network security group overview](../../virtual-network/network-security-groups-overview.md) documentation for more information.
-* **Private DNS integration** -
+* **Private DNS zone integration** -
Azure private DNS zone integration allows you to resolve the private DNS within the current VNET or any in-region peered VNET where the private DNS Zone is linked. See [private DNS zone documentation](https://docs.microsoft.com/azure/dns/private-dns-overview) for more details. Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
-> [!NOTE]
-> If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+### Integration with custom DNS server
+
+If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. The forwarder IP address should be [168.63.129.16](https://docs.microsoft.com/azure/virtual-network/what-is-ip-address-168-63-129-16), and the custom DNS server should be inside the VNet. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
### Private DNS zone and VNET peering
Private DNS zone settings and VNET peering are independent of each other.
* By default, a new private DNS zone is auto-provisioned per server using the server name provided. However, if you want to setup your own private DNS zone to use with the flexible server, please see the [private DNS overview](https://docs.microsoft.com/azure/dns/private-dns-overview) documentation. * If you want to connect to the flexible server from a client that is provisioned in another VNET, you have to link the private DNS zone with the VNET. See [how to link the virtual network](https://docs.microsoft.com/azure/dns/private-dns-getstarted-portal#link-the-virtual-network) documentation.
+> [!NOTE]
+> Only private DNS zone names that end with `private.postgres.database.azure.com` can be linked.
### Unsupported virtual network scenarios * Public endpoint (or public IP or DNS) - A flexible server deployed to a virtual network cannot have a public endpoint
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-synapse-workspace.md
There are three ways to set up authentication for an Azure Synapse source:
1. Navigate to your **Synapse workspace** 1. Navigate to the **Data** section and to one of your serverless SQL databases 1. Click on the ellipses icon and start a New SQL script
-1. Add the Azure Purview account MSI (represented by the account name) as **db_owner** on the dedicated SQL database by running the command below in your SQL script:
+1. Add the Azure Purview account MSI (represented by the account name) as **db_datareader** on the dedicated SQL database by running the command below in your SQL script:
```sql CREATE USER [PurviewAccountName] FROM EXTERNAL PROVIDER GO
- EXEC sp_addrolemember 'db_owner', [PurviewAccountName]
+ EXEC sp_addrolemember 'db_datareader', [PurviewAccountName]
GO ``` #### Using Managed identity for Serverless SQL databases
There are three ways to set up authentication for an Azure Synapse source:
1. Navigate to your **Synapse workspace** 1. Navigate to the **Data** section and to one of your serverless SQL databases 1. Click on the ellipses icon and start a New SQL script
-1. Add the **Service Principal ID** as **db_owner** on the dedicated SQL database by running the command below in your SQL script:
+1. Add the **Service Principal ID** as **db_datareader** on the dedicated SQL database by running the command below in your SQL script:
```sql CREATE USER [ServicePrincipalID] FROM EXTERNAL PROVIDER GO
- EXEC sp_addrolemember 'db_owner', [ServicePrincipalID]
+ EXEC sp_addrolemember 'db_datareader', [ServicePrincipalID]
GO ```
To manage or delete a scan, do the following:
## Next steps - [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)-- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
remote-rendering Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/remote-rendering/reference/network-requirements.md
A stable, low-latency network connection to an Azure data center is critical for
The exact network requirements depend on your specific use case, such as the number and frequency of modifications to the remote scene graph as well as the complexity of the rendered view, but there are a number of guidelines to ensure that your experience is as good as possible: * Your internet connectivity needs to support at least **40 Mbps downstream** and **5 Mbps upstream** consistently for a single user session of Azure Remote Rendering, assuming there is no competing traffic on the network. We recommend higher rates for better experiences.
-* **Wi-Fi** is the recommended network type since it supports a low latency, high-bandwith, and stable connection. Some mobile networks introduce jitter that can lead to a poor experience.
+* **Wi-Fi** is the recommended network type since it supports a low latency, high-bandwidth, and stable connection. Some mobile networks introduce jitter that can lead to a poor experience.
* Using the **5-GHz Wi-Fi band** will usually produce better results than the 2.4-GHz Wi-Fi band, though both should work. * If there are other Wi-Fi networks nearby, avoid using Wi-Fi channels in use by these other networks. You can use network scanning tools like [WifiInfoView](https://www.nirsoft.net/utils/wifi_information_view.html) to verify whether the channels your Wi-Fi network uses are free of competing traffic. * Strictly **avoid using Wi-Fi repeaters** or LAN-over-powerline forwarding.
route-server About Dual Homed Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/about-dual-homed-network.md
+
+ Title: 'About dual-homed network with Azure Route Server (Preview)'
+description: Learn about how Azure Route Server (Preview) works in a dual-homed network.
++++ Last updated : 04/30/2021+++
+# About dual-homed network with Azure Route Server (Preview)
+
+Azure Route Server supports the typical hub-and-spoke network topology, in which both the Route Server and the network virtual appliance (NVA) are in the hub virtual network. Route Server also enables you to configure a different topology called a dual-homed network, in which a spoke virtual network is peered with two or more hub virtual networks. Virtual machines in the spoke virtual network can communicate through either hub virtual network to your on-premises network or the internet.
+
+## How to set it up
+
+As can be seen in the following diagram, you need to:
+
+* Deploy an NVA in each hub virtual network and the route server in the spoke virtual network.
+* Enable VNet peering between the hub and spoke virtual networks.
+* Configure BGP peering between the Route Server and each NVA deployed.
++
+## How does it work?
+
+In the control plane, the NVA and the Route Server will exchange routes as if they're deployed in the same virtual network. The NVA will learn about spoke virtual network addresses from the Route Server. The Route Server will learn routes from each of the NVAs. The Route Server will then program all the virtual machines in the spoke virtual network with the routes it learned.
+
+In the data plane, virtual machines in the spoke virtual network will see the security NVA or the VPN NVA in the hub as the next hop. Internet-bound traffic and hybrid cross-premises traffic will now route through the NVAs in the hub virtual network. You can configure the two hubs to be either active/active or active/passive. If the active hub fails, traffic to and from the virtual machines will fail over to the other hub. These failures include, but aren't limited to, NVA failures and service connectivity failures. This setup ensures your network is configured for high availability.
+
+## Integration with ExpressRoute
+
+You can build a dual-homed network that involves two or more ExpressRoute connections. Along with the steps described above, you'll need to:
+
+* Create another Route Server in each hub virtual network that has an ExpressRoute gateway.
+* Configure BGP peering between the NVA and the Route Server in the hub virtual network.
+* [Enable route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange) between the ExpressRoute gateway and the Route Server in the hub virtual network.
+* Make sure "Use Remote Gateway or Remote Route Server" is **disabled** in the spoke virtual network VNet peering configuration.
++
+### How does it work?
+
+In the control plane, the NVA in the hub virtual network will learn about on-premises routes from the ExpressRoute gateway through [route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange) with the Route Server in the hub. In return, the NVA will send the spoke virtual network addresses to the ExpressRoute gateway using the same Route Server. The Route Server in both the spoke and hub virtual network will then program the on-premises network addresses to the virtual machines in their respective virtual network.
+
+> [!IMPORTANT]
+> BGP prevents a loop by verifying the AS number in the AS Path. If the receiving router sees its own AS number in the AS Path of a received BGP packet, it will drop the packet. In this example, both Route Servers have the same AS number, 65515. To prevent each Route Server from dropping the routes from the other Route Server, the NVA must apply **as-override** BGP policy when peering with each Route Server.
+>
+
+In the data plane, the virtual machines in the spoke virtual network will send all traffic destined for the on-premises network to the NVA in the hub virtual network first. Then the NVA will forward the traffic to the on-premises network through ExpressRoute. Traffic from on-premises will traverse the same data path in the reverse direction. You'll notice neither of the Route Servers are in the data path.
+
+## Next steps
+
+* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
+* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
+
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-incremental-indexing-conceptual.md
Incremental enrichment adds a cache to the enrichment pipeline. The indexer cach
Physically, the cache is stored in a blob container in your Azure Storage account. The cache also uses table storage for an internal record of processing updates. All indexes within a search service may share the same storage account for the indexer cache. Each indexer is assigned a unique and immutable cache identifier for the container it uses.
+> [!NOTE]
+> The indexer cache requires a general purpose storage account. For more information, review the [different types of storage accounts](https://docs.microsoft.com/azure/storage/common/storage-account-overview#types-of-storage-accounts).
+ ## Cache configuration You'll need to set the `cache` property on the indexer to start benefitting from incremental enrichment. The following example illustrates an indexer with caching enabled. Specific parts of this configuration are described in following sections. For more information, see [Set up incremental enrichment](search-howto-incremental-index.md).
search Search Howto Incremental Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-incremental-index.md
Modify the cache object to include the following required and optional propertie
} ```
+> [!NOTE]
+> The indexer cache requires a general purpose v2 storage account. For more information, review the [different types of storage accounts](https://docs.microsoft.com/azure/storage/common/storage-account-overview#types-of-storage-accounts).
+ ### Step 3: Reset the indexer A reset of the indexer is required when setting up incremental enrichment for existing indexers to ensure all documents are in a consistent state. You can use the portal or an API client and the [Reset Indexer REST API](/rest/api/searchservice/reset-indexer) for this task.
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
The extension can also protect Kubernetes clusters on other cloud providers, alt
| Release state | **Preview**<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]| | Required roles and permissions | [Security admin](../role-based-access-control/built-in-roles.md#security-admin) can dismiss alerts<br>[Security reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings | | Pricing | Requires [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) |
-| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer) |
+| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Limitations | Azure Arc enabled Kubernetes and the Azure Defender extension **don't support** managed Kubernetes offerings like Google Kubernetes Engine and Elastic Kubernetes Service. [Azure Defender is natively available for Azure Kubernetes Service (AKS)](defender-for-kubernetes-introduction.md) and doesn't require connecting the cluster to Azure Arc. | | Environments and regions | Availability for this extension is the same as [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md)|
security-center Security Center Wdatp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-wdatp.md
Confirm that your machine meets the necessary requirements for Defender for Endp
- For **Windows** servers, configure the network settings described in [Configure device proxy and Internet connectivity settings](/windows/security/threat-protection/microsoft-defender-atp/configure-proxy-internet) - For **on-premises** machines, connect it to Azure Arc as explained in [Connect hybrid machines with Azure Arc enabled servers](../azure-arc/servers/learn/quick-enable-hybrid-vm.md)
- - For **Windows Server 2019** and [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md) machines, confirm that your machines are running the Log Analytics agent and have the MicrosoftMonitoringAgent extension.
+ - For **Windows Server 2019** and [Windows Virtual Desktop (WVD)](../virtual-desktop/overview.md) machines, confirm that your machines have the MicrosoftMonitoringAgent extension.
1. Enable **Azure Defender for servers**. See [Quickstart: Enable Azure Defender](enable-azure-defender.md). 1. If you've already licensed and deployed Microsoft Defender for Endpoints on your servers, remove it using the procedure described in [Offboard Windows servers](/windows/security/threat-protection/microsoft-defender-atp/configure-server-endpoints#offboard-windows-servers).
Full instructions for switching from a non-Microsoft endpoint solution are avail
## Next steps - [Platforms and features supported by Azure Security Center](security-center-os-coverage.md)-- [Managing security recommendations in Azure Security Center](security-center-recommendations.md): Learn how recommendations help you protect your Azure resources.
+- [Managing security recommendations in Azure Security Center](security-center-recommendations.md): Learn how recommendations help you protect your Azure resources.
sentinel Store Logs In Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/store-logs-in-azure-data-explorer.md
Azure Sentinel provides full SIEM and SOAR capabilities, quick deployment and co
If you only need to access specific tables occasionally, such as for periodic investigations or audits, you may consider that retaining your data in Azure Sentinel is no longer cost-effective. At this point, we recommend storing data in Azure Data Explorer, which costs less, but still enables you to explore using the same KQL queries that you run in Azure Sentinel.
-You can access the data in Azure Data Explorer directly from Azure Sentinel using the [Log Analytics Azure Data Explorer proxy feature](//azure/azure-monitor/logs/azure-monitor-data-explorer-proxy). To do so, use cross cluster queries in your log search or workbooks.
+You can access the data in Azure Data Explorer directly from Azure Sentinel using the [Log Analytics Azure Data Explorer proxy feature](/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy). To do so, use cross cluster queries in your log search or workbooks.
> [!IMPORTANT] > Core SIEM capabilities, including Analytic rules, UEBA, and the investigation graph, do not support data stored in Azure Data Explorer.
Regardless of where you store your data, continue hunting and investigating usin
For more information, see: - [Tutorial: Investigate incidents with Azure Sentinel](tutorial-investigate-cases.md)-- [Hunt for threats with Azure Sentinel](hunting.md)
+- [Hunt for threats with Azure Sentinel](hunting.md)
service-bus-messaging Service Bus Amqp Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-amqp-dotnet.md
Title: Azure Service Bus with .NET and AMQP 1.0 | Microsoft Docs
-description: This article describes how to use Azure Service Bus from a .NET application using AMQP (Advanced Messaging Queuing Protocol).
+ Title: Use legacy WindowsAzure.ServiceBus .NET framework library with AMQP 1.0 | Microsoft Docs
+description: This article describes how to use the legacy WindowsAzure.ServiceBus .NET framework library with AMQP (Advanced Messaging Queuing Protocol).
Previously updated : 06/23/2020 Last updated : 04/30/2021
-# Use Service Bus from .NET with AMQP 1.0
-
-AMQP 1.0 support is available in the Service Bus package version 2.1 or later. You can ensure you have the latest version by downloading the Service Bus bits from [NuGet][NuGet].
+# Use legacy WindowsAzure.ServiceBus .NET framework library with AMQP 1.0
> [!NOTE]
-> You can use either Advanced Message Queuing Protocol (AMQP) or Service Bus Messaging Protocol (SBMP) with the .NET library for Service Bus. AMQP is the default protocol used by the .NET library. We recommend that you use the AMQP protocol (which is the default) and not override it.
-
-## Configure .NET applications to use AMQP 1.0
+> This article is for existing users of the WindowsAzure.ServiceBus package who want to switch to using AMQP within the same package. While this package will continue to receive critical bug fixes, we strongly encourage you to upgrade to the new [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) package instead, which has been available since November 2020 and which supports AMQP by default.
-By default, the Service Bus .NET client library communicates with the Service Bus service using AMQP protocol. You can also explicitly specify AMQP as the transport type as shown in the following section.
+By default, the WindowsAzure.ServiceBus package communicates with the Service Bus service using a dedicated SOAP-based protocol called Service Bus Messaging Protocol (SBMP). Support for AMQP 1.0 was added in version 2.1, and we recommend using AMQP 1.0 rather than the default protocol.
-In the current release, there are a few API features that are not supported when using AMQP. These unsupported features are listed in the section [Behavioral differences](#behavioral-differences). Some of the advanced configuration settings also have a different meaning when using AMQP.
+Using AMQP 1.0 instead of the default protocol requires explicit configuration in the Service Bus connection string, or in the client constructors via the [TransportType](/dotnet/api/microsoft.servicebus.messaging.transporttype) option. Other than this change, application code remains unchanged when using AMQP 1.0.
-### Configuration using App.config
+There are a few API features that are not supported when using AMQP. These unsupported features are listed in the section [Behavioral differences](#behavioral-differences). Some of the advanced configuration settings also have a different meaning when using AMQP.
-It is a good practice for applications to use the App.config configuration file to store settings. For Service Bus applications, you can use App.config to store the Service Bus connection string. An example App.config file is as follows:
+## Configure connection string to use AMQP 1.0
-```xml
-<?xml version="1.0" encoding="utf-8" ?>
-<configuration>
- <appSettings>
- <add key="Microsoft.ServiceBus.ConnectionString"
- value="Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[SAS key];TransportType=Amqp" />
- </appSettings>
-</configuration>
-```
-
-The value of the `Microsoft.ServiceBus.ConnectionString` setting is the Service Bus connection string that is used to configure the connection to Service Bus. The format is as follows:
+Append your connection string with `;TransportType=Amqp` to instruct the client to make its connection to Service Bus using AMQP 1.0.
+For example,
`Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[SAS key];TransportType=Amqp` Where `namespace` and `SAS key` are obtained from the [Azure portal][Azure portal] when you create a Service Bus namespace. For more information, see [Create a Service Bus namespace using the Azure portal][Create a Service Bus namespace using the Azure portal].
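As a hedged sketch (the namespace, key, and queue names are placeholders), the same connection string can be passed to the legacy client factory from the WindowsAzure.ServiceBus package:

```csharp
using Microsoft.ServiceBus.Messaging;

// Placeholder values; TransportType=Amqp switches the connection from SBMP to AMQP 1.0.
const string connectionString =
    "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;" +
    "SharedAccessKey=<SAS key>;TransportType=Amqp";

MessagingFactory factory = MessagingFactory.CreateFromConnectionString(connectionString);
QueueClient client = factory.CreateQueueClient("<queue name>");
client.Send(new BrokeredMessage("Hello over AMQP 1.0"));
```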
-When using AMQP, append the connection string with `;TransportType=Amqp`. This notation instructs the client library to make its connection to Service Bus using AMQP 1.0.
- ### AMQP over WebSockets To use AMQP over WebSockets, set `TransportType` in the connection string to `AmqpWebSockets`. For example: `Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[SAS key];TransportType=AmqpWebSockets`.
-If you are using .NET Microsoft.Azure.ServiceBus library, set the [ServiceBusConnection.TransportType](/dotnet/api/microsoft.azure.servicebus.servicebusconnection.transporttype) to AmqpWebSockets of [TransportType enum](/dotnet/api/microsoft.azure.servicebus.transporttype).
-
-If you are using .NET Azure.Messaging.ServiceBus library, set the [ServiceBusClient.TransportType](/dotnet/api/azure.messaging.servicebus.servicebusclient.transporttype) to AmqpWebSockets of [ServiceBusTransportType enum](/dotnet/api/azure.messaging.servicebus.servicebustransporttype).
-- ## Message serialization When using the default protocol, the default serialization behavior of the .NET client library is to use the [DataContractSerializer][DataContractSerializer] type to serialize a [BrokeredMessage][BrokeredMessage] instance for transport between the client library and the Service Bus service. When using the AMQP transport mode, the client library uses the AMQP type system for serialization of the [brokered message][BrokeredMessage] into an AMQP message. This serialization enables the message to be received and interpreted by a receiving application that is potentially running on a different platform, for example, a Java application that uses the JMS API to access Service Bus.
To facilitate interoperability with non-.NET clients, use only .NET types that c
## Behavioral differences
-There are some small differences in the behavior of the Service Bus .NET API when using AMQP, compared to the default protocol:
+There are some small differences in the behavior of the WindowsAzure.ServiceBus API when using AMQP, compared to the default protocol:
* The [OperationTimeout][OperationTimeout] property is ignored. * `MessageReceiver.Receive(TimeSpan.Zero)` is implemented as `MessageReceiver.Receive(TimeSpan.FromSeconds(10))`.
Ready to learn more? Visit the following links:
[BrokeredMessage]: /dotnet/api/microsoft.servicebus.messaging.brokeredmessage [Microsoft.ServiceBus.Messaging.MessagingFactory.AcceptMessageSession]: /dotnet/api/microsoft.servicebus.messaging.messagingfactory.acceptmessagesession#Microsoft_ServiceBus_Messaging_MessagingFactory_AcceptMessageSession [OperationTimeout]: /dotnet/api/microsoft.servicebus.messaging.messagingfactorysettings.operationtimeout#Microsoft_ServiceBus_Messaging_MessagingFactorySettings_OperationTimeout
-[NuGet]: https://nuget.org/packages/WindowsAzure.ServiceBus/
[Azure portal]: https://portal.azure.com [Service Bus AMQP overview]: service-bus-amqp-overview.md
-[AMQP 1.0 protocol guide]: service-bus-amqp-protocol-guide.md
+[AMQP 1.0 protocol guide]: service-bus-amqp-protocol-guide.md
service-bus-messaging Service Bus Messaging Sql Filter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-sql-filter.md
Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 11/24/2020 Last updated : 04/30/2021 # Subscription Rule SQL Filter Syntax
Service Bus Premium also supports the [JMS SQL message selector syntax](https://
## Arguments -- `<scope>` is an optional string indicating the scope of the `<property_name>`. Valid values are `sys` or `user`. The `sys` value indicates system scope where `<property_name>` is a public property name of the [BrokeredMessage class](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage). `user` indicates user scope where `<property_name>` is a key of the [BrokeredMessage class](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) dictionary. `user` scope is the default scope if `<scope>` isn't specified.
+- `<scope>` is an optional string indicating the scope of the `<property_name>`. Valid values are `sys` or `user`.
+ - The `sys` value indicates system scope where `<property_name>` is any of the properties on the Service Bus message as described in [Messages, payloads, and serialization](service-bus-messages-payloads.md).
+ - The `user` value indicates user scope where `<property_name>` is a key of the custom properties that you can set on the message when sending to Service Bus.
+ - The `user` scope is the default scope if `<scope>` isn't specified.
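For illustration, the following hedged sketch creates a rule whose filter mixes both scopes, assuming the Azure.Messaging.ServiceBus.Administration client and placeholder topic and subscription names:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection string>");

// sys.Label refers to a system property of the message; "color" (user scope is the default)
// and user.priority refer to custom application properties set by the sender.
await admin.CreateRuleAsync("<topic>", "<subscription>",
    new CreateRuleOptions("ColorAndPriorityRule",
        new SqlRuleFilter("sys.Label = 'order' AND color = 'blue' AND user.priority > 3")));
```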
## Remarks
The `property(name)` function returns the value of the property referenced by `n
## Considerations
-Consider the following [SqlFilter](/dotnet/api/microsoft.servicebus.messaging.sqlfilter) semantics:
+Consider the following SQL filter semantics:
- Property names are case-insensitive. - Operators follow C# implicit conversion semantics whenever possible. -- System properties are public properties exposed in [BrokeredMessage](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) instances.
+- System properties are any of the properties on the Service Bus message as described in [Messages, payloads, and serialization](service-bus-messages-payloads.md).
Consider the following `IS [NOT] NULL` semantics:
Consider the following [SqlFilter](/dotnet/api/microsoft.servicebus.messaging.sq
### Property evaluation semantics -- An attempt to evaluate a non-existent system property throws a [FilterException](/dotnet/api/microsoft.servicebus.messaging.filterexception) exception.
+- An attempt to evaluate a non-existent system property throws a `FilterException` exception.
- A property that doesn't exist is internally evaluated as **unknown**.
For examples, see [Service Bus filter examples](service-bus-filter-examples.md).
- [SqlFilter class (Java)](/java/api/com.microsoft.azure.servicebus.rules.SqlFilter) - [SqlRuleFilter (JavaScript)](/javascript/api/@azure/service-bus/sqlrulefilter) - [`az servicebus topic subscription rule`](/cli/azure/servicebus/topic/subscription/rule)-- [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
+- [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
service-bus-messaging Service Bus Messaging Sql Rule Action https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-messaging-sql-rule-action.md
A *SQL action* is used to manipulate message metadata after a message has been s
## Arguments -- `<scope>` is an optional string indicating the scope of the `<property_name>`. Valid values are `sys` or `user`. The `sys` value indicates system scope where `<property_name>` is a public property name of the [BrokeredMessage Class](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage). `user` indicates user scope where `<property_name>` is a key of the [BrokeredMessage Class](/dotnet/api/microsoft.servicebus.messaging.brokeredmessage) dictionary. `user` scope is the default scope if `<scope>` isn't specified.
+- `<scope>` is an optional string indicating the scope of the `<property_name>`. Valid values are `sys` or `user`.
+ - The `sys` value indicates system scope where `<property_name>` is any of the properties on the Service Bus message as described in [Messages, payloads, and serialization](service-bus-messages-payloads.md).
+ - The `user` value indicates user scope where `<property_name>` is a key of the custom properties that you can set on the message when sending to Service Bus.
+ - The `user` scope is the default scope if `<scope>` isn't specified.
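As a hedged sketch of a rule action that stamps a custom property when a filter matches (again assuming the Azure.Messaging.ServiceBus.Administration client and placeholder names):

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection string>");

// The filter matches on a custom property; the action sets another custom property
// (user scope is the default when no scope prefix is given).
await admin.CreateRuleAsync("<topic>", "<subscription>",
    new CreateRuleOptions("TagRegion", new SqlRuleFilter("region = 'us-east'"))
    {
        Action = new SqlRuleAction("SET user.routedBy = 'us-east-rule'")
    });
```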
### Remarks
For examples, see [Service Bus filter examples](service-bus-filter-examples.md).
- [SqlRuleAction class (Java)](/java/api/com.microsoft.azure.servicebus.rules.sqlruleaction) - [SqlRuleAction (JavaScript)](/javascript/api/@azure/service-bus/sqlruleaction) - [`az servicebus topic subscription rule`](/cli/azure/servicebus/topic/subscription/rule)-- [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
+- [New-AzServiceBusRule](/powershell/module/az.servicebus/new-azservicebusrule)
service-bus-messaging Service Bus Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-partitioning.md
committableTransaction.Commit();
If any of the properties that serve as a partition key are set, Service Bus pins the message to a specific partition. This behavior occurs whether or not a transaction is used. It is recommended that you don't specify a partition key if it isn't necessary.
-### Use sessions with partitioned entities
+### Use transactions in sessions with partitioned entities
To send a transactional message to a session-aware topic or queue, the message must have the session ID property set. If the partition key property is specified as well, it must be identical to the session ID property. If they differ, Service Bus returns an invalid operation exception.
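For instance, here's a hedged sketch using the Azure.Messaging.ServiceBus sender (entity and session names are placeholders, and the enclosing transaction scope is omitted for brevity); note that when the partition key is set, it matches the session ID:

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection string>");
ServiceBusSender sender = client.CreateSender("<session-enabled queue or topic>");

var message = new ServiceBusMessage("order accepted")
{
    SessionId = "session-1",
    // Optional: if a partition key is set on a session-aware entity, it must equal the session ID.
    PartitionKey = "session-1"
};

await sender.SendMessageAsync(message);
```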
Currently Service Bus imposes the following limitations on partitioned queues an
* Partitioned queues and topics don't support sending messages that belong to different sessions in a single transaction. * Service Bus currently allows up to 100 partitioned queues or topics per namespace. Each partitioned queue or topic counts towards the quota of 10,000 entities per namespace (doesn't apply to Premium tier).
-## Other features
-- Add or remove rule is supported with partitioned entities, but not under transactions. -- AMQP is supported for sending and receiving messages to and from a partitioned entity.-- AMQP is supported for the following operations: Batch Send, Batch Receive, Receive by Sequence Number, Peek, Renew Lock, Schedule Message, Cancel Scheduled Message, Add Rule, Remove Rule, Session Renew Lock, Set Session State, Get Session State, and Enumerate Sessions.-- ## Next steps You can enable partitioning by using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable partitioning](enable-partitions.md).
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-sas.md
Title: Azure Service Bus access control with Shared Access Signatures description: Overview of Service Bus access control using Shared Access Signatures overview, details about SAS authorization with Azure Service Bus. Previously updated : 01/19/2021 Last updated : 04/27/2021
SAS guards access to Service Bus based on authorization rules. Those are configu
Shared Access Signatures are a claims-based authorization mechanism using simple tokens. Using SAS, keys are never passed on the wire. Keys are used to cryptographically sign information that can later be verified by the service. SAS can be used similar to a username and password scheme where the client is in immediate possession of an authorization rule name and a matching key. SAS can also be used similar to a federated security model, where the client receives a time-limited and signed access token from a security token service without ever coming into possession of the signing key.
-SAS authentication in Service Bus is configured with named [Shared Access Authorization Rules](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) having associated access rights, and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base64 representation. You can configure rules at the namespace level, on Service Bus [relays](../azure-relay/relay-what-is-it.md), [queues](service-bus-messaging-overview.md#queues), and [topics](service-bus-messaging-overview.md#topics).
+SAS authentication in Service Bus is configured with named [Shared Access Authorization Policies](#shared-access-authorization-policies) having associated access rights, and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base64 representation. You can configure rules at the namespace level, on Service Bus [queues](service-bus-messaging-overview.md#queues) and [topics](service-bus-messaging-overview.md#topics).
-The [Shared Access Signature](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) token contains the name of the chosen authorization rule, the URI of the resource that shall be accessed, an expiry instant, and an HMAC-SHA256 cryptographic signature computed over these fields using either the primary or the secondary cryptographic key of the chosen authorization rule.
+The Shared Access Signature token contains the name of the chosen authorization policy, the URI of the resource that shall be accessed, an expiry instant, and an HMAC-SHA256 cryptographic signature computed over these fields using either the primary or the secondary cryptographic key of the chosen authorization rule.
## Shared Access Authorization Policies Each Service Bus namespace and each Service Bus entity has a Shared Access Authorization policy made up of rules. The policy at the namespace level applies to all entities inside the namespace, irrespective of their individual policy configuration.
-For each authorization policy rule, you decide on three pieces of information: **name**, **scope**, and **rights**. The **name** is just that; a unique name within that scope. The scope is easy enough: it's the URI of the resource in question. For a Service Bus namespace, the scope is the fully qualified domain name (FQDN), such as `https://<yournamespace>.servicebus.windows.net/`.
+For each authorization policy rule, you decide on three pieces of information: **name**, **scope**, and **rights**. The **name** is just that; a unique name within that scope. The scope is easy enough: it's the URI of the resource in question. For a Service Bus namespace, the scope is the fully qualified namespace, such as `https://<yournamespace>.servicebus.windows.net/`.
The rights conferred by the policy rule can be a combination of:
The following recommendations for using shared access signatures can help mitiga
## Configuration for Shared Access Signature authentication
-You can configure the [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) rule on Service Bus namespaces, queues, or topics. Configuring a [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions. For a working sample that illustrates this procedure, see the [Using Shared Access Signature (SAS) authentication with Service Bus Subscriptions](https://code.msdn.microsoft.com/Using-Shared-Access-e605b37c) sample.
+You can configure the Shared Access Authorization Policy on Service Bus namespaces, queues, or topics. Configuring it on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions. For a working sample that illustrates this procedure, see the [Using Shared Access Signature (SAS) authentication with Service Bus Subscriptions](https://code.msdn.microsoft.com/Using-Shared-Access-e605b37c) sample.
![SAS](./media/service-bus-sas/service-bus-namespace.png)
SharedAccessSignature sig=<signature-string>&se=<expiry>&skn=<keyName>&sr=<URL-e
urlencode(base64(hmacsha256(urlencode('https://<yournamespace>.servicebus.windows.net/') + "\n" + '<expiry instant>', '<signing key>'))) ```
-Here's an example C# code for generating a SAS token:
-
-```csharp
-private static string createToken(string resourceUri, string keyName, string key)
-{
- TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
- var week = 60 * 60 * 24 * 7;
- var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + week);
- string stringToSign = HttpUtility.UrlEncode(resourceUri) + "\n" + expiry;
- HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
- var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
- var sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}", HttpUtility.UrlEncode(resourceUri), HttpUtility.UrlEncode(signature), expiry, keyName);
- return sasToken;
-}
-```
- > [!IMPORTANT] > For examples of generating a SAS token using different programming languages, see [Generate SAS token](/rest/api/eventhub/generate-sas-token).
A SAS token is valid for all resources prefixed with the `<resourceURI>` used in
## Regenerating keys
-It is recommended that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) object. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
+It is recommended that you periodically regenerate the keys used in the Shared Access Authorization Policy. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.
-If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the [PrimaryKey](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) and the [SecondaryKey](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) of a [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule), replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
+If you know or suspect that a key is compromised and you have to revoke the keys, you can regenerate both the primary key and the secondary key of a Shared Access Authorization Policy, replacing them with new keys. This procedure invalidates all tokens signed with the old keys.
## Shared Access Signature authentication with Service Bus
For a sample of a Service Bus application that illustrates the configuration and
## Access Shared Access Authorization rules on an entity
-With Service Bus .NET Framework libraries, you can access a [Microsoft.ServiceBus.Messaging.SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule) object configured on a Service Bus queue or topic through the [AuthorizationRules](/dotnet/api/microsoft.servicebus.messaging.authorizationrules) collection in the corresponding [QueueDescription](/dotnet/api/microsoft.servicebus.messaging.queuedescription) or [TopicDescription](/dotnet/api/microsoft.servicebus.messaging.topicdescription).
-
-The following code shows how to add authorization rules for a queue.
-
-```csharp
-// Create an instance of NamespaceManager for the operation
-NamespaceManager nsm = NamespaceManager.CreateFromConnectionString(
- <connectionString> );
-QueueDescription qd = new QueueDescription( <qPath> );
-
-// Create a rule with send rights with keyName as "contosoQSendKey"
-// and add it to the queue description.
-qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoSendKey",
- SharedAccessAuthorizationRule.GenerateRandomKey(),
- new[] { AccessRights.Send }));
-
-// Create a rule with listen rights with keyName as "contosoQListenKey"
-// and add it to the queue description.
-qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoQListenKey",
- SharedAccessAuthorizationRule.GenerateRandomKey(),
- new[] { AccessRights.Listen }));
-
-// Create a rule with manage rights with keyName as "contosoQManageKey"
-// and add it to the queue description.
-// A rule with manage rights must also have send and receive rights.
-qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoQManageKey",
- SharedAccessAuthorizationRule.GenerateRandomKey(),
- new[] {AccessRights.Manage, AccessRights.Listen, AccessRights.Send }));
-
-// Create the queue.
-nsm.CreateQueue(qd);
-```
+Use the get/update operations on queues or topics in one of the [management libraries for Service Bus](service-bus-management-libraries.md) to access or update the corresponding Shared Access Authorization rules. You can also add the rules when creating the queues or topics using these libraries.
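For example, here's a hedged sketch that adds a send-only rule to an existing queue, assuming the Azure.Messaging.ServiceBus.Administration client and placeholder names:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection string>");

// Get the queue, add a rule that grants only Send rights, and push the update back.
QueueProperties queue = await admin.GetQueueAsync("<queue name>");
queue.AuthorizationRules.Add(
    new SharedAccessAuthorizationRule("contosoSendKey", new[] { AccessRights.Send }));
await admin.UpdateQueueAsync(queue);
```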
## Use Shared Access Signature authorization
-Applications using the Azure .NET SDK with the Service Bus .NET libraries can use SAS authorization through the [SharedAccessSignatureTokenProvider](/dotnet/api/microsoft.servicebus.sharedaccesssignaturetokenprovider) class. The following code illustrates the use of the token provider to send messages to a Service Bus queue. Alternative to the usage shown here, you can also pass a previously issued token to the token provider factory method.
-
-```csharp
-Uri runtimeUri = ServiceBusEnvironment.CreateServiceUri("sb",
- <yourServiceNamespace>, string.Empty);
-MessagingFactory mf = MessagingFactory.Create(runtimeUri,
- TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key));
-QueueClient sendClient = mf.CreateQueueClient(qPath);
-
-//Sending hello message to queue.
-BrokeredMessage helloMessage = new BrokeredMessage("Hello, Service Bus!");
-helloMessage.MessageId = "SAS-Sample-Message";
-sendClient.Send(helloMessage);
-```
-
-You can also use the token provider directly for issuing tokens to pass to other clients.
+Applications using any of the Service Bus SDKs in the officially supported languages, such as .NET, Java, JavaScript, and Python, can make use of SAS authorization through the connection strings passed to the client constructor.
Connection strings can include a rule name (*SharedAccessKeyName*) and rule key (*SharedAccessKey*) or a previously issued token (*SharedAccessSignature*). When those are present in the connection string passed to any constructor or factory method accepting a connection string, the SAS token provider is automatically created and populated.
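As a minimal sketch with placeholder values, assuming the Azure.Messaging.ServiceBus package, the SAS rule name and key embedded in the connection string are all the client needs:

```csharp
using Azure.Messaging.ServiceBus;

const string connectionString =
    "Endpoint=sb://<namespace>.servicebus.windows.net/;" +
    "SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";

await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("<queue name>");
await sender.SendMessageAsync(new ServiceBusMessage("Hello, Service Bus!"));
```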
-Note that to use SAS authorization with Service Bus relays, you can use SAS keys configured on the Service Bus namespace. If you explicitly create a relay on the namespace ([NamespaceManager](/dotnet/api/microsoft.servicebus.namespacemanager) with a [RelayDescription](/dotnet/api/microsoft.servicebus.messaging.relaydescription)) object, you can set the SAS rules just for that relay. To use SAS authorization with Service Bus subscriptions, you can use SAS keys configured on a Service Bus namespace or on a topic.
+To use SAS authorization with Service Bus subscriptions, you can use SAS keys configured on a Service Bus namespace or on a topic.
## Use the Shared Access Signature (at HTTP level)
In the previous section, you saw how to use the SAS token with an HTTP POST requ
Before starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a well-defined AMQP node named **$cbs** (you can see it as a "special" queue used by the service to acquire and validate all the SAS tokens). The publisher must specify the **ReplyTo** field inside the AMQP message; this is the node in which the service replies to the publisher with the result of the token validation (a simple request/reply pattern between publisher and service). This reply node is created "on the fly," speaking about "dynamic creation of remote node" as described by the AMQP 1.0 specification. After checking that the SAS token is valid, the publisher can go forward and start to send data to the service.
-The following steps show how to send the SAS token with AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. This is useful if you can't use the official Service Bus SDK (for example on WinRT, .NET Compact Framework, .NET Micro Framework and Mono) developing in C\#. Of course, this library is useful to help understand how claims-based security works at the AMQP level, as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK with .NET Framework applications, which will do it for you.
+The following steps show how to send the SAS token with the AMQP protocol using the [AMQP.NET Lite](https://github.com/Azure/amqpnetlite) library. This is useful if you can't use the official Service Bus SDK when developing in C\# (for example, on WinRT, .NET Compact Framework, .NET Micro Framework, and Mono). This library is also useful for understanding how claims-based security works at the AMQP level, just as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK in any of the supported languages like .NET, Java, JavaScript, Python, and Go, which will do it for you.
### C&#35;
The following table shows the access rights required for various operations on S
| Deadletter a message |Listen |Any valid queue address | | Get the state associated with a message queue session |Listen |Any valid queue address | | Set the state associated with a message queue session |Listen |Any valid queue address |
-| Schedule a message for later delivery; for example, [ScheduleMessageAsync()](/dotnet/api/microsoft.azure.servicebus.queueclient.schedulemessageasync#Microsoft_Azure_ServiceBus_QueueClient_ScheduleMessageAsync_Microsoft_Azure_ServiceBus_Message_System_DateTimeOffset_) |Listen | Any valid queue address
+| Schedule a message for later delivery |Listen | Any valid queue address
| **Topic** | | | | Create a topic |Manage |Any namespace address | | Delete a topic |Manage |Any valid topic address |
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
## Windows
-| Service Fabric runtime |Can upgrade directly from|Can downgrade to|Compatible SDK or NuGet package version|Supported dotnet runtimes** |OS Version |End of support |
+| Service Fabric runtime |Can upgrade directly from|Can downgrade to|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support |
| | | | | | | |
-| 8.0 RTO | 7.1 CU10 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), .NET Core 3.1, .NET Core 2.1, <br>All >=4.5 .NET Full Framework| [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
-| 7.2 CU7 | 7.0 CU9 | 7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), .NET Core 3.1, .NET Core 2.1,<br>All >= 4.5 Net Full Framework | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2021 |
-| 7.2 CU6 | 7.0 CU4 |7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), .NET Core 3.1, .NET Core 2.1,<br>All >= 4.5 Net Full Framework | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 |
-| 7.2 RTO-CU5 | 7.0 CU4 | 7.1 |Less than or equal to version 4.2 | .NET Core 3.1, .NET Core 2.1,<br>All >= 4.5 Net Full Framework | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 |
-| 7.1 |7.0 CU3 |N/A | Less than or equal to version 4.1 | .NET Core 3.1, .NET Core 2.1,<br>All >= 4.5 Net Full Framework | [See supported OS version](#supported-windows-versions-and-support-end-date) | July 31, 2021 |
+| 8.0 RTO | 7.1 CU10 | 7.2 | Less than or equal to version 5.0 | .NET 5.0 (GA), >= .NET Core 2.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
+| 7.2 CU7 | 7.0 CU9 | 7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | November 30, 2021 |
+| 7.2 CU6 | 7.0 CU4 |7.1 | Less than or equal to version 4.2 | .NET 5.0 (Preview support), >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 |
+| 7.2 RTO-CU5 | 7.0 CU4 | 7.1 |Less than or equal to version 4.2 | >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date)| November 30, 2021 |
+| 7.1 |7.0 CU3 |N/A | Less than or equal to version 4.1 | >= .NET Core 2.1,<br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | July 31, 2021 |
** Service Fabric does not provide a .NET Core runtime. The service author is responsible for ensuring it is <a href="/dotnet/core/deploying/">available</a>.
Support for Service Fabric on a specific OS ends when support for the OS version
## Linux
-| Service Fabric runtime | Can upgrade directly from |Can downgrade to |Compatible SDK or NuGet package version | Supported dotnet runtimes** | OS version | End of support |
+| Service Fabric runtime | Can upgrade directly from |Can downgrade to |Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support |
| | | | | | | |
-| 8.0 RTO | 7.1 CU8 | 7.2 | Less than or equal to version 5.0 | .NET Core 3.1, .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
-| 7.2 CU7 | 7.0 CU9 | 7.1 | Less than or equal to version 4.2 | .NET Core 3.1, .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2021 |
-| 7.2 RTO-CU6 | 7.0 CU4 | 7.1 | Less than or equal to version 4.2 | .NET Core 3.1, .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2021 |
-| 7.1 | 7.0 CU3 | N/A | Less than or equal to version 4.1 | .NET Core 3.1, .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | July 31, 2021 |
+| 8.0 RTO | 7.1 CU8 | 7.2 | Less than or equal to version 5.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
+| 7.2 CU7 | 7.0 CU9 | 7.1 | Less than or equal to version 4.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2021 |
+| 7.2 RTO-CU6 | 7.0 CU4 | 7.1 | Less than or equal to version 4.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2021 |
+| 7.1 | 7.0 CU3 | N/A | Less than or equal to version 4.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | July 31, 2021 |
** Service Fabric does not provide a .NET Core runtime and the service author is responsible for ensuring it is <a href="/dotnet/core/deploying/">available</a>
The following table lists the version names of Service Fabric and their correspo
| 8.0 RTO | 8.0.514.9590 | 8.0.513.1 |
| 7.2 CU7 | 7.2.477.9590 | 7.2.476.1 |
| 7.2 CU6 | 7.2.457.9590 | 7.2.456.1 |
+| 7.2 CU7 | 7.2.477.9590 | 7.2.476.1 |
+
+## Supported .NET runtimes
+
+The following table lists the .NET runtimes supported by Service Fabric:
+
+| Service Fabric runtime | Supported .NET runtimes for Windows |Supported .NET runtimes for Linux |
+| | | |
+| 8.0 RTO | .NET 5.0, >= .NET Core 2.1, All >= .NET Framework 4.5 | >= .NET Core 2.1|
| 7.2 CU5 | 7.2.452.9590 | 7.2.454.1 |
| 7.2 CU4 | 7.2.445.9590 | 7.2.447.1 |
| 7.2 CU3 | 7.2.433.9590 | NA |
The following table lists the version names of Service Fabric and their correspo
| 5.3 CU3 | 5.3.311.9590 | Not applicable |
| 5.3 CU2 | 5.3.301.9590 | Not applicable |
| 5.3 CU1 | 5.3.204.9494 | Not applicable |
-| 5.3 RTO | 5.3.121.9494 | Not applicable|
+| 5.3 RTO | 5.3.121.9494 | Not applicable|
site-recovery Site Recovery Manage Registration And Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-manage-registration-and-protection.md
If you replicate VMware VMs or Windows/Linux physical servers to Azure, you can
3. Note the ID of the VMM server.
4. Disassociate replication policies from clouds on the VMM server you want to remove. In **Site Recovery Infrastructure** > **For System Center VMM** > **Replication Policies**, double-click the associated policy. Right-click the cloud > **Disassociate**.
5. Delete the VMM server or active node. In **Site Recovery Infrastructure** > **For System Center VMM** > **VMM Servers**, right-click the server > **Delete**.
-6. If your VMM server was in a Disconnected state, then download and run the below cleanup script on the VMM server. Open PowerShell with the **Run as Administrator** option, to change the execution policy for the default (LocalMachine) scope. In the script, specify the ID of the VMM server you want to remove. The script removes registration and cloud pairing information from the server.
-
- ```
- pushd .
- try
- {
- $error.Clear()
- "This script will remove the old Hyper-V Recovery Manager related properties for this VMM. This can be run in below scenarios :"
- "1. Complete VMM site clean up."
- "2. VMM site clean up in case the associated VMM has become unresponsive. Input in this case will be the VMM ID of the unresponsive server."
-
- $choice = Read-Host "Enter your choice "
-
- if($choice -eq 1)
- {
- $vmmid = get-itemproperty 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup' -Name VMMID
- $vmmid = $vmmid.VmmID
-
- # $fullCleanup = 1 indicates that clean up all hyper-V recovery manager settings from this VMM.
- $fullCleanup = 1
-
- }
- else
- {
- try
- {
- [GUID]$vmmid = Read-Host "Enter the VMMId for the unresponsive VMM server "
- }
- catch
- {
- Write-Host "Error occured" -ForegroundColor "Red"
- $error[0]
- return
- }
-
- # $fullCleanup = 0 indicates that clean up only those clouds/VMs which are protecting/protected by the objects on the given VMMId.
- $fullCleanup = 0
- }
-
- if($vmmid -ne "")
- {
-
- Write-Host "Proceeding to remove Hyper-V Recovery Manager related properties for this VMM with ID: " $vmmid
- Write-Host "Before running the script ensure that the VMM service is running."
- Write-Host "In a VMM cluster ensure that the Windows Cluster service is running and run the script on each node."
- Write-Host "The VMM service (or the Cluster role) will be stopped when the script runs. After the script completes, restart the VMM or Cluster service."
-
- $choice = Read-Host "Do you want to continue (Y/N) ?"
- ""
- if($choice.ToLower() -eq "y" -or $choice.ToLower() -eq "yes" )
- {
- $isCluster = $false
- $path = 'HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup'
- $key = Get-Item -LiteralPath $path -ErrorAction SilentlyContinue
- $name = 'HAVMMName'
- if ($key)
- {
- $clusterName = $key.GetValue($name, $null)
- if($clusterName -eq $null)
- {
- $serviceName = "SCVMMService"
- $service = Get-Service -Name $serviceName
- if ($service.Status -eq "Running")
- {
- "Stopping the VMM service..."
- net stop $serviceName
- }
- else
- {
- if($service.Status -eq "Stopped")
- {
- "VMM service is not running."
- }
- else
- {
- "Could not stop the VMM service as it is starting or stopping. Please try again later"
- return
- }
-
- }
- }
- else
- {
- $isCluster = $True
- $isPrimaryNode = $false
- $clusterName = $key.GetValue($name, $null)
-
- Write-Host "Clustered VMM detected"
-
- $clusService = Get-Service -Name ClusSvc
- Add-Type -AssemblyName System.ServiceProcess
- if ($clusService.Status -ne [System.ServiceProcess.ServiceControllerStatus]::Running)
- {
- Write-Host "Windows Cluster service is not running on this machine. Please start Windows cluster service before running this script"
- return
- }
-
- $clusterResources = Get-ClusterResource -Cluster $clusterName
- Write-Host "Searching for VMM cluster resource....."
-
- foreach ($clusterResource in $clusterResources)
- {
- if ($clusterResource.Name -match 'VMM Service')
- {
- Write-Host "Found SCVMM Cluster Resource" $clusterResource
- Write-Host "Cluster owner node is " $clusterResource.OwnerNode
- $currentHostName = [System.Net.Dns]::GetHostName()
- $clusterCheckpointList = get-clustercheckpoint -ResourceName $clusterResource.Name
- Write-Host "Current node is " $currentHostName
-
- if ([string]::Compare($clusterResource.OwnerNode, $currentHostName, $True) -eq 0)
- {
- $isPrimaryNode = $True
- Write-Host "Current node owns VMM cluster resource"
- Write-Host "Shutting VMM Cluster Resource down"
- Stop-ClusterResource $clusterResource
- }
- else
- {
- Write-Error "Current node does not own VMM cluster resource. Please run on this script on $clusterResource.OwnerNode"
- Exit
- }
-
- break
- }
- }
- }
- }
- else
- {
-            Write-Error "Failed to find registry keys associated with VMM"
- return
- }
-
- ""
- "Connect to SCVMM database using"
- "1. Windows Authentication"
- "2. SQL Server Authentication"
-
- $mode = Read-Host "Enter your choice "
- ""
-
- cd 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\Sql'
- $connectionString = get-itemproperty . -Name ConnectionString
- $conn = New-Object System.Data.SqlClient.SqlConnection
-
- if($mode -eq 1)
- {
- "Connecting to SQL via Windows Authentication..."
- $conn.ConnectionString = $connectionString.ConnectionString
- }
- else
- {
- "Connecting to SQL via SQL Server Authentication..."
-
- $credential = Get-Credential
- $loginName = $credential.UserName
- $password = $credential.password
- $password.MakeReadOnly();
- $conn.ConnectionString = $connectionString.ConnectionString.ToString().split(";",2)[1]
- $sqlcred = New-Object System.Data.SqlClient.SqlCredential($loginName, $password)
- $conn.Credential = $sqlcred
- }
-
- Write-Host "Connection string: " $conn.ConnectionString
- $conn.Open()
- $transaction = $conn.BeginTransaction("CleanupTransaction");
-
- try
- {
- $sql = "SELECT TOP 1 [Id]
- FROM [sysobjects]
- WHERE [Name] = 'tbl_DR_ProtectionUnit'
- AND [xType] = 'U'"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $rdr = $cmd.ExecuteReader()
- $PUTableExists = $rdr.HasRows
- $rdr.Close()
- $SCVMM2012R2Detected = $false
- if($PUTableExists)
- {
- $sql = "SELECT [Id]
- FROM [tbl_DR_ProtectionUnit]"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $rdr = $cmd.ExecuteReader()
- $SCVMM2012R2Detected = $rdr.HasRows
- $rdr.Close()
- }
-
- ""
- "Getting all clouds configured for protection..."
-
- $sql = "SELECT [PrimaryCloudID],
- [RecoveryCloudID],
- [PrimaryCloudName],
- [RecoveryCloudName]
- FROM [tbl_Cloud_CloudDRPairing]
- WHERE [PrimaryVMMID] = @VMMId
- OR [RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
-
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.Transaction = $transaction
- $da = New-Object System.Data.SqlClient.SqlDataAdapter
- $da.SelectCommand = $cmd
- $ds = New-Object System.Data.DataSet
- $da.Fill($ds, "Clouds") | Out-Null
-
- if($ds.Tables["Clouds"].Rows.Count -eq 0 )
- {
- "No clouds were found in protected or protecting status."
- }
- else
- {
- "Cloud pairing list populated."
-
- ""
- "Listing the clouds and their VMs..."
-
- $vmIds = @()
-
- foreach ($row in $ds.tables["Clouds"].rows)
- {
- ""
- "'{0}' protected by '{1}'" -f $row.PrimaryCloudName.ToString(), $row.RecoveryCloudName.ToString()
-
- $sql = "SELECT [ObjectId],
- [Name]
- FROM [tbl_WLC_VObject]
- WHERE [CloudId] IN (@PrimaryCloudId,@RecoveryCloudId)"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@PrimaryCloudId",$row.PrimaryCloudId.ToString()) | Out-Null
- $cmd.Parameters.AddWithValue("@RecoveryCloudId",$row.RecoveryCloudId.ToString()) | Out-Null
- $rdr = $cmd.ExecuteReader()
- if($rdr.HasRows)
- {
- "VM list:"
- }
- else
- {
- "No VMs found."
- }
- while($rdr.Read())
- {
- Write-Host $rdr["Name"].ToString()
- $vmIds = $vmIds + $rdr["ObjectId"].ToString();
- }
-
- $rdr.Close()
- }
-
-
- if($vmIds.Count -eq 0)
- {
- "No protected VMs are present."
- }
- else
- {
- ""
- "Removing recovery settings from all protected VMs..."
-
- if($SCVMM2012R2Detected)
- {
- $sql = "UPDATE vm
- SET [DRState] = 0,
- [DRErrors] = NULL,
- [ProtectionUnitId] = NULL
- FROM
- [tbl_WLC_VMInstance] vm
- INNER JOIN [tbl_WLC_VObject] vObj
- ON vm.[VMInstanceId] = vObj.[ObjectId]
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON vObj.[CloudId] = cpair.[PrimaryCloudID]
- OR vObj.[CloudId] = cpair.[RecoveryCloudID]
- WHERE cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
- }
- else
- {
- $sql = "UPDATE vm
- SET [DRState] = 0,
- [DRErrors] = NULL
- FROM
- [tbl_WLC_VMInstance] vm
- INNER JOIN [tbl_WLC_VObject] vObj
- ON vm.[VMInstanceId] = vObj.[ObjectId]
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON vObj.[CloudId] = cpair.[PrimaryCloudID]
- OR vObj.[CloudId] = cpair.[RecoveryCloudID]
- WHERE cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
- }
-
-
- $sql = "UPDATE hwp
- SET [IsDRProtectionRequired] = 0
- FROM
- [tbl_WLC_HWProfile] hwp
- INNER JOIN [tbl_WLC_VObject] vObj
- ON hwp.[HWProfileId] = vObj.[HWProfileId]
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON vObj.[CloudId] = cpair.[PrimaryCloudID]
- OR vObj.[CloudId] = cpair.[RecoveryCloudID]
- WHERE cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- "Recovery settings removed successfully for {0} VMs" -f $vmIds.Count
- }
-
-
- ""
- "Removing recovery settings from all clouds..."
- if($SCVMM2012R2Detected)
- {
- if($fullCleanup -eq 1)
- {
- $sql = "DELETE phost
- FROM [tbl_DR_ProtectionUnit_HostRelation] phost
- INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
- ON phost.[ProtectionUnitId] = csr.[ScopeId]
- WHERE csr.[ScopeType] = 214"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
--
- $sql = "UPDATE [tbl_Cloud_Cloud]
- SET [IsDRProtected] = 0,
- [IsDRProvider] = 0,
- [DisasterRecoverySupported] = 0"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
-
- }
- else
- {
- $sql = "DELETE phost
- FROM [tbl_DR_ProtectionUnit_HostRelation] phost
- INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
- ON phost.[ProtectionUnitId] = csr.[ScopeId]
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON csr.[CloudId] = cpair.[primaryCloudId]
- OR csr.[CloudId] = cpair.[recoveryCloudId]
- WHERE csr.ScopeType = 214
- AND cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- $sql = "UPDATE cloud
- SET [IsDRProtected] = 0,
- [IsDRProvider] = 0
- FROM
- [tbl_Cloud_Cloud] cloud
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON cloud.[ID] = cpair.[PrimaryCloudID]
- OR cloud.[ID] = cpair.[RecoveryCloudID]
- WHERE cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- }
-
- }
-
- # VMM 2012 SP1 detected.
- else
- {
- $sql = "UPDATE cloud
- SET [IsDRProtected] = 0,
- [IsDRProvider] = 0
- FROM
- [tbl_Cloud_Cloud] cloud
- INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
- ON cloud.[ID] = cpair.[PrimaryCloudID]
- OR cloud.[ID] = cpair.[RecoveryCloudID]
- WHERE cpair.[PrimaryVMMId] = @VMMId
- OR cpair.[RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
- }
-
- "Recovery settings removed successfully."
-
- ""
- "Deleting cloud pairing entities..."
-
- $sql = "DELETE FROM [tbl_Cloud_CloudDRPairing]
- WHERE [PrimaryVMMID] = @VMMId
- OR [RecoveryVMMID] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- "Cloud pairing entities deleted successfully."
- }
-
-
- if ($SCVMM2012R2Detected)
- {
- "Removing SAN related entries"
-
- $sql = "DELETE sanMap
- FROM [tbl_DR_ProtectionUnit_StorageArray] sanMap
- INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
- ON sanMap.[ProtectionUnitId] = csr.[ScopeId]
- WHERE csr.[ScopeType] = 214"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
-
- "SAN related entities deleted successfully"
- }
-
-
- if($fullCleanup -eq 1)
- {
- # In case of full cleanup reset all VMs protection data.
- ""
- "Removing stale entries for VMs..."
- if($SCVMM2012R2Detected)
- {
- $sql = "UPDATE [tbl_WLC_VMInstance]
- SET [DRState] = 0,
- [DRErrors] = NULL,
- [ProtectionUnitId] = NULL"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
- }
- else
- {
- $sql = "UPDATE [tbl_WLC_VMInstance]
- SET [DRState] = 0,
- [DRErrors] = NULL"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
- }
-
-
- $sql = "UPDATE [tbl_WLC_HWProfile]
- SET [IsDRProtectionRequired] = 0"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
- # Done removing stale enteries
-
- # Cloud publish settings and registration details are cleaned up even if there are no paired clouds.
- if($SCVMM2012R2Detected)
- {
- ""
- "Removing cloud publish settings..."
-
- # Currently 214 scopeType points to only ProtectionProvider = 1,2 (HVR1 and HVR2).
- # Once new providers are introduced appropriate filtering should be done before delete
- # in below two queries.
- $sql = "DELETE punit
- FROM [tbl_DR_ProtectionUnit] punit
- INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
- ON punit.[ID] = csr.[ScopeId]
- WHERE csr.[ScopeType] = 214"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
-
-
- $sql = "DELETE FROM [tbl_Cloud_CloudScopeRelation]
- WHERE [ScopeType] = 214"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.ExecuteNonQuery() | Out-Null
- "Cloud publish settings removed successfully."
- }
-
- ""
- "Un-registering VMM..."
-
- $currentTime = Get-Date
- $sql = "UPDATE [tbl_DR_VMMRegistrationDetails]
- SET [DRSubscriptionId] = '',
- [VMMFriendlyName] = '',
- [DRAdapterInstalledVersion] = '',
- [LastModifiedDate] = @LastModifiedTime,
- [DRAuthCertBlob] = NULL,
- [DRAuthCertThumbprint] = NULL,
- [HostSigningCertBlob] = NULL,
- [HostSigningCertThumbprint] = NULL,
- [DRAdapterUpdateVersion] = '',
- [OrgIdUserName] = ''
- WHERE [VMMId] = @VMMId"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $param1 = $cmd.Parameters.AddWithValue("@LastModifiedTime", [System.Data.SqlDbType]::DateTime)
- $param1.Value = Get-Date
- $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- "Un-registration completed successfully."
-
- ""
- "Removing KEK..."
-
- $kekid = "06cda9f3-2e3d-49ee-8e18-2d9bd1d74034"
- $rolloverKekId = "fe0adfd7-309a-429a-b420-e8ed067338e6"
- $sql = "DELETE FROM [tbl_VMM_CertificateStore]
- WHERE [CertificateID] IN (@KEKId,@RolloverKekId)"
- $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
- $cmd.Transaction = $transaction
- $cmd.Parameters.AddWithValue("@KEKId",$kekid) | Out-Null
- $cmd.Parameters.AddWithValue("@RolloverKekId",$rolloverKekId) | Out-Null
- $cmd.ExecuteNonQuery() | Out-Null
-
- "Removing KEK completed successfully."
-
- if($error.Count -eq 0)
- {
- $transaction.Commit()
-
- ""
- "Removing registration related registry keys."
-
- $path = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration"
- if((Test-Path "hklm:\$path" ))
- {
- if($isCluster -and $isPrimaryNode)
- {
- foreach($checkpoint in $clusterCheckpointList)
- {
- $compareResult = [string]::Compare($path, $checkpoint.Name, $True)
-
- if($compareResult -eq 0)
- {
- Write-Host "Removing Checkpointing for $path"
- Remove-ClusterCheckpoint -CheckpointName $path
- }
- }
- }
-
- Remove-Item -Path "hklm:\$path"
-
- $proxyPath = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\ProxySettings"
- if((Test-Path "hklm:\$proxyPath"))
- {
- if($isCluster -and $isPrimaryNode)
- {
- foreach($checkpoint in $clusterCheckpointList)
- {
- $compareResult = [string]::Compare($proxyPath, $checkpoint.Name, $True)
-
- if($compareResult -eq 0)
- {
- Write-Host "Removing Checkpointing for $proxyPath"
- Remove-ClusterCheckpoint -CheckpointName $proxyPath
- }
- }
- }
-
- Remove-Item -Path "hklm:\$proxyPath"
- }
-
- $backupPath = "software\Microsoft\Hyper-V Recovery Manager"
- if((Test-Path "hklm:\$backupPath"))
- {
- if($isCluster -and $isPrimaryNode)
- {
- foreach($checkpoint in $clusterCheckpointList)
- {
- $compareResult = [string]::Compare($backupPath, $checkpoint.Name, $True)
-
- if($compareResult -eq 0)
- {
- Write-Host "Removing Checkpointing for $backupPath"
- Remove-ClusterCheckpoint -CheckpointName $backupPath
- }
- }
- }
- Remove-Item "hklm:\$backupPath" -recurse
- }
- "Registry keys removed successfully."
- ""
- }
- else
- {
- "Could not delete registration key as hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration doesn't exist."
- }
-
- Write-Host "SUCCESS!!" -ForegroundColor "Green"
- }
- else
- {
- $transaction.Rollback()
- Write-Error "Error occured"
- $error[0]
- ""
- Write-Error "FAILED"
- "All updates to the VMM database have been rolled back."
- }
- }
- else
- {
- if($error.Count -eq 0)
- {
- $transaction.Commit()
- Write-Host "SUCCESS!!" -ForegroundColor "Green"
- }
- else
- {
- $transaction.Rollback()
- Write-Error "FAILED"
- }
- }
-
- $conn.Close()
- }
- catch
- {
- $transaction.Rollback()
- Write-Host "Error occured" -ForegroundColor "Red"
- $error[0]
- Write-Error "FAILED"
- "All updates to the VMM database have been rolled back."
- }
- }
- }
- else
- {
- Write-Error "VMM Id is missing from hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup or VMMId is not provided."
- Write-Error "FAILED" -ForegroundColor
- }
- }
-
- catch
- {
- Write-Error "Error occured"
- $error[0]
- Write-Error "FAILED"
- }
-
- if($isCluster)
- {
- if($clusterResource.State -eq [Microsoft.FailoverClusters.PowerShell.ClusterResourceState]::Offline)
- {
- Write-Host "Cluster role is in stopped state."
- }
- else
- {
- Write-Host "Operation completed. Cluster role was not stopped."
- }
- }
- else
- {
- Write-Host "The VMM service is in stopped state."
- }
-
- popd
- # SIG # Begin signature block
- # MIId0wYJKoZIhvcNAQcCoIIdxDCCHcACAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
- # gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
- # AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQU3rRWHH5OCASnIAZsmgmowP/T
- # p6egghhkMIIEwzCCA6ugAwIBAgITMwAAAIgVUlHPFzd7VQAAAAAAiDANBgkqhkiG
- # 9w0BAQUFADB3MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4G
- # A1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSEw
- # HwYDVQQDExhNaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EwHhcNMTUxMDA3MTgxNDAx
- # WhcNMTcwMTA3MTgxNDAxWjCBszELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hp
- # bmd0b24xEDAOBgNVBAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jw
- # b3JhdGlvbjENMAsGA1UECxMETU9QUjEnMCUGA1UECxMebkNpcGhlciBEU0UgRVNO
- # OjdBRkEtRTQxQy1FMTQyMSUwIwYDVQQDExxNaWNyb3NvZnQgVGltZS1TdGFtcCBT
- # ZXJ2aWNlMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyBEjpkOcrwAm
- # 9WRMNBv90OUqsqL7/17OvrhGMWgwAsx3sZD0cMoNxrlfHwNfCNopwH0z7EI3s5gQ
- # Z4Pkrdl9GjQ9/FZ5uzV24xfhdq/u5T2zrCXC7rob9FfhBtyTI84B67SDynCN0G0W
- # hJaBW2AFx0Dn2XhgYzpvvzk4NKZl1NYi0mHlHSjWfaqbeaKmVzp9JSfmeaW9lC6s
- # IgqKo0FFZb49DYUVdfbJI9ECTyFEtUaLWGchkBwj9oz62u9Kg6sh3+UslWTY4XW+
- # 7bBsN3zC430p0X7qLMwQf+0oX7liUDuszCp828HsDb4pu/RRyv+KOehVKx91UNcr
- # Dc9Z7isNeQIDAQABo4IBCTCCAQUwHQYDVR0OBBYEFJQRxg5HoMTIdSZj1v3l1GjM
- # 6KEMMB8GA1UdIwQYMBaAFCM0+NlSRnAK7UD7dvuzK7DDNbMPMFQGA1UdHwRNMEsw
- # SaBHoEWGQ2h0dHA6Ly9jcmwubWljcm9zb2Z0LmNvbS9wa2kvY3JsL3Byb2R1Y3Rz
- # L01pY3Jvc29mdFRpbWVTdGFtcFBDQS5jcmwwWAYIKwYBBQUHAQEETDBKMEgGCCsG
- # AQUFBzAChjxodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtpL2NlcnRzL01pY3Jv
- # c29mdFRpbWVTdGFtcFBDQS5jcnQwEwYDVR0lBAwwCgYIKwYBBQUHAwgwDQYJKoZI
- # hvcNAQEFBQADggEBAHoudDDxFsg2z0Y+GhQ91SQW1rdmWBxJOI5OpoPzI7P7X2dU
- # ouvkmQnysdipDYER0xxkCf5VAz+dDnSkUQeTn4woryjzXBe3g30lWh8IGMmGPWhq
- # L1+dpjkxKbIk9spZRdVH0qGXbi8tqemmEYJUW07wn76C+wCZlbJnZF7W2+5g9MZs
- # RT4MAxpQRw+8s1cflfmLC5a+upyNO3zBEY2gaBs1til9O7UaUD4OWE4zPuz79AJH
- # 9cGBQo8GnD2uNFYqLZRx3T2X+AVt/sgIHoUSK06fqVMXn1RFSZT3jRL2w/tD5uef
- # 4ta/wRmAStRMbrMWYnXAeCJTIbWuE2lboA3IEHIwggYHMIID76ADAgECAgphFmg0
- # AAAAAAAcMA0GCSqGSIb3DQEBBQUAMF8xEzARBgoJkiaJk/IsZAEZFgNjb20xGTAX
- # BgoJkiaJk/IsZAEZFgltaWNyb3NvZnQxLTArBgNVBAMTJE1pY3Jvc29mdCBSb290
- # IENlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0wNzA0MDMxMjUzMDlaFw0yMTA0MDMx
- # MzAzMDlaMHcxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9uMRAwDgYD
- # VQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRpb24xITAf
- # BgNVBAMTGE1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQTCCASIwDQYJKoZIhvcNAQEB
- # BQADggEPADCCAQoCggEBAJ+hbLHf20iSKnxrLhnhveLjxZlRI1Ctzt0YTiQP7tGn
- # 0UytdDAgEesH1VSVFUmUG0KSrphcMCbaAGvoe73siQcP9w4EmPCJzB/LMySHnfL0
- # Zxws/HvniB3q506jocEjU8qN+kXPCdBer9CwQgSi+aZsk2fXKNxGU7CG0OUoRi4n
- # rIZPVVIM5AMs+2qQkDBuh/NZMJ36ftaXs+ghl3740hPzCLdTbVK0RZCfSABKR2YR
- # JylmqJfk0waBSqL5hKcRRxQJgp+E7VV4/gGaHVAIhQAQMEbtt94jRrvELVSfrx54
- # QTF3zJvfO4OToWECtR0Nsfz3m7IBziJLVP/5BcPCIAsCAwEAAaOCAaswggGnMA8G
- # A1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFCM0+NlSRnAK7UD7dvuzK7DDNbMPMAsG
- # A1UdDwQEAwIBhjAQBgkrBgEEAYI3FQEEAwIBADCBmAYDVR0jBIGQMIGNgBQOrIJg
- # QFYnl+UlE/wq4QpTlVnkpKFjpGEwXzETMBEGCgmSJomT8ixkARkWA2NvbTEZMBcG
- # CgmSJomT8ixkARkWCW1pY3Jvc29mdDEtMCsGA1UEAxMkTWljcm9zb2Z0IFJvb3Qg
- # Q2VydGlmaWNhdGUgQXV0aG9yaXR5ghB5rRahSqClrUxzWPQHEy5lMFAGA1UdHwRJ
- # MEcwRaBDoEGGP2h0dHA6Ly9jcmwubWljcm9zb2Z0LmNvbS9wa2kvY3JsL3Byb2R1
- # Y3RzL21pY3Jvc29mdHJvb3RjZXJ0LmNybDBUBggrBgEFBQcBAQRIMEYwRAYIKwYB
- # BQUHMAKGOGh0dHA6Ly93d3cubWljcm9zb2Z0LmNvbS9wa2kvY2VydHMvTWljcm9z
- # b2Z0Um9vdENlcnQuY3J0MBMGA1UdJQQMMAoGCCsGAQUFBwMIMA0GCSqGSIb3DQEB
- # BQUAA4ICAQAQl4rDXANENt3ptK132855UU0BsS50cVttDBOrzr57j7gu1BKijG1i
- # uFcCy04gE1CZ3XpA4le7r1iaHOEdAYasu3jyi9DsOwHu4r6PCgXIjUji8FMV3U+r
- # kuTnjWrVgMHmlPIGL4UD6ZEqJCJw+/b85HiZLg33B+JwvBhOnY5rCnKVuKE5nGct
- # xVEO6mJcPxaYiyA/4gcaMvnMMUp2MT0rcgvI6nA9/4UKE9/CCmGO8Ne4F+tOi3/F
- # NSteo7/rvH0LQnvUU3Ih7jDKu3hlXFsBFwoUDtLaFJj1PLlmWLMtL+f5hYbMUVbo
- # nXCUbKw5TNT2eb+qGHpiKe+imyk0BncaYsk9Hm0fgvALxyy7z0Oz5fnsfbXjpKh0
- # NbhOxXEjEiZ2CzxSjHFaRkMUvLOzsE1nyJ9C/4B5IYCeFTBm6EISXhrIniIh0EPp
- # K+m79EjMLNTYMoBMJipIJF9a6lbvpt6Znco6b72BJ3QGEe52Ib+bgsEnVLaxaj2J
- # oXZhtG6hE6a/qkfwEm/9ijJssv7fUciMI8lmvZ0dhxJkAj0tr1mPuOQh5bWwymO0
- # eFQF1EEuUKyUsKV4q7OglnUa2ZKHE3UiLzKoCG6gW4wlv6DvhMoh1useT8ma7kng
- # 9wFlb4kLfchpyOZu6qeXzjEp/w7FW1zYTRuh2Povnj8uVRZryROj/TCCBhAwggP4
- # oAMCAQICEzMAAABkR4SUhttBGTgAAAAAAGQwDQYJKoZIhvcNAQELBQAwfjELMAkG
- # A1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNVBAcTB1JlZG1vbmQx
- # HjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEoMCYGA1UEAxMfTWljcm9z
- # b2Z0IENvZGUgU2lnbmluZyBQQ0EgMjAxMTAeFw0xNTEwMjgyMDMxNDZaFw0xNzAx
- # MjgyMDMxNDZaMIGDMQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQ
- # MA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9u
- # MQ0wCwYDVQQLEwRNT1BSMR4wHAYDVQQDExVNaWNyb3NvZnQgQ29ycG9yYXRpb24w
- # ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCTLtrY5j6Y2RsPZF9NqFhN
- # FDv3eoT8PBExOu+JwkotQaVIXd0Snu+rZig01X0qVXtMTYrywPGy01IVi7azCLiL
- # UAvdf/tqCaDcZwTE8d+8dRggQL54LJlW3e71Lt0+QvlaHzCuARSKsIK1UaDibWX+
- # 9xgKjTBtTTqnxfM2Le5fLKCSALEcTOLL9/8kJX/Xj8Ddl27Oshe2xxxEpyTKfoHm
- # 5jG5FtldPtFo7r7NSNCGLK7cDiHBwIrD7huTWRP2xjuAchiIU/urvzA+oHe9Uoi/
- # etjosJOtoRuM1H6mEFAQvuHIHGT6hy77xEdmFsCEezavX7qFRGwCDy3gsA4boj4l
- # AgMBAAGjggF/MIIBezAfBgNVHSUEGDAWBggrBgEFBQcDAwYKKwYBBAGCN0wIATAd
- # BgNVHQ4EFgQUWFZxBPC9uzP1g2jM54BG91ev0iIwUQYDVR0RBEowSKRGMEQxDTAL
- # BgNVBAsTBE1PUFIxMzAxBgNVBAUTKjMxNjQyKzQ5ZThjM2YzLTIzNTktNDdmNi1h
- # M2JlLTZjOGM0NzUxYzRiNjAfBgNVHSMEGDAWgBRIbmTlUAXTgqoXNzcitW2oynUC
- # lTBUBgNVHR8ETTBLMEmgR6BFhkNodHRwOi8vd3d3Lm1pY3Jvc29mdC5jb20vcGtp
- # b3BzL2NybC9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3JsMGEGCCsGAQUF
- # BwEBBFUwUzBRBggrBgEFBQcwAoZFaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3Br
- # aW9wcy9jZXJ0cy9NaWNDb2RTaWdQQ0EyMDExXzIwMTEtMDctMDguY3J0MAwGA1Ud
- # EwEB/wQCMAAwDQYJKoZIhvcNAQELBQADggIBAIjiDGRDHd1crow7hSS1nUDWvWas
- # W1c12fToOsBFmRBN27SQ5Mt2UYEJ8LOTTfT1EuS9SCcUqm8t12uD1ManefzTJRtG
- # ynYCiDKuUFT6j
- # vDW+vCT5wN3nxO8DIlAUBbXMn7TJKAH2W7a/CDQ0p607Ivt3F7cqhEtrO1Rypehh
- # bkKQj4y/ebwc56qWHJ8VNjE8HlhfJAk8pAliHzML1v3QlctPutozuZD3jKAO4WaV
- # qJn5BJRHddW6l0SeCuZmBQHmNfXcz4+XZW/s88VTfGWjdSGPXC26k0LzV6mjEaEn
- # S1G4t0RqMP90JnTEieJ6xFcIpILgcIvcEydLBVe0iiP9AXKYVjAPn6wBm69FKCQr
- # IPWsMDsw9wQjaL8GHk4wCj0CmnixHQanTj2hKRc2G9GL9q7tAbo0kFNIFs0EYkbx
- # Cn7lBOEqhBSTyaPS6CvjJZGwD0lNuapXDu72y4Hk4pgExQ3iEv/Ij5oVWwT8okie
- # +fFLNcnVgeRrjkANgwoAyX58t0iqbefHqsg3RGSgMBu9MABcZ6FQKwih3Tj0DVPc
- # gnJQle3c6xN3dZpuEgFcgJh/EyDXSdppZzJR4+Bbf5XA/Rcsq7g7X7xl4bJoNKLf
- # cafOabJhpxfcFOowMIIHejCCBWKgAwIBAgIKYQ6Q0gAAAAAAAzANBgkqhkiG9w0B
- # AQsFADCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hpbmd0b24xEDAOBgNV
- # BAcTB1JlZG1vbmQxHjAcBgNVBAoTFU1pY3Jvc29mdCBDb3Jwb3JhdGlvbjEyMDAG
- # A1UEAxMpTWljcm9zb2Z0IFJvb3QgQ2VydGlmaWNhdGUgQXV0aG9yaXR5IDIwMTEw
- # HhcNMTEwNzA4MjA1OTA5WhcNMjYwNzA4MjEwOTA5WjB+MQswCQYDVQQGEwJVUzET
- # MBEGA1UECBMKV2FzaGluZ3RvbjEQMA4GA1UEBxMHUmVkbW9uZDEeMBwGA1UEChMV
- # TWljcm9zb2Z0IENvcnBvcmF0aW9uMSgwJgYDVQQDEx9NaWNyb3NvZnQgQ29kZSBT
- # aWduaW5nIFBDQSAyMDExMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA
- # q/D6chAcLq3YbqqCEE00uvK2WCGfQhsqa+laUKq4BjgaBEm6f8MMHt03a8YS2Avw
- # OMKZBrDIOdUBFDFC04kNeWSHfpRgJGyvnkmc6Whe0t+bU7IKLMOv2akrrnoJr9eW
- # WcpgGgXpZnboMlImEi/nqwhQz7NEt13YxC4Ddato88tt8zpcoRb0RrrgOGSsbmQ1
- # eKagYw8t00CT+OPeBw3VXHmlSSnnDb6gE3e+lD3v++MrWhAfTVYoonpy4BI6t0le
- # 2O3tQ5GD2Xuye4Yb2T6xjF3oiU+EGvKhL1nkkDstrjNYxbc+/jLTswM9sbKvkjh+
- # 0p2ALPVOVpEhNSXDOW5kf1O6nA+tGSOEy/S6A4aN91/w0FK/jJSHvMAhdCVfGCi2
- # zCcoOCWYOUo2z3yxkq4cI6epZuxhH2rhKEmdX4jiJV3TIUs+UsS1Vz8kA/DRelsv
- # 1SPjcF0PUUZ3s/gA4bysAoJf28AVs70b1FVL5zmhD+kjSbwYuER8ReTBw3J64HLn
- # JN+/RpnF78IcV9uDjexNSTCnq47f7Fufr/zdsGbiwZeBe+3W7UvnSSmnEyimp31n
- # gOaKYnhfsi+E11ecXL93KCjx7W3DKI8sj0A3T8HhhUSJxAlMxdSlQy90lfdu+Hgg
- # WCwTXWCVmj5PM4TasIgX3p5O9JawvEagbJjS4NaIjAsCAwEAAaOCAe0wggHpMBAG
- # CSsGAQQBgjcVAQQDAgEAMB0GA1UdDgQWBBRIbmTlUAXTgqoXNzcitW2oynUClTAZ
- # BgkrBgEEAYI3FAIEDB4KAFMAdQBiAEMAQTALBgNVHQ8EBAMCAYYwDwYDVR0TAQH/
- # BAUwAwEB/zAfBgNVHSMEGDAWgBRyLToCMZBDuRQFTuHqp8cx0SOJNDBaBgNVHR8E
- # UzBRME+gTaBLhklodHRwOi8vY3JsLm1pY3Jvc29mdC5jb20vcGtpL2NybC9wcm9k
- # dWN0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3JsMF4GCCsGAQUFBwEB
- # BFIwUDBOBggrBgEFBQcwAoZCaHR0cDovL3d3dy5taWNyb3NvZnQuY29tL3BraS9j
- # ZXJ0cy9NaWNSb29DZXJBdXQyMDExXzIwMTFfMDNfMjIuY3J0MIGfBgNVHSAEgZcw
- # gZQwgZEGCSsGAQQBgjcuAzCBgzA/BggrBgEFBQcCARYzaHR0cDovL3d3dy5taWNy
- # b3NvZnQuY29tL3BraW9wcy9kb2NzL3ByaW1hcnljcHMuaHRtMEAGCCsGAQUFBwIC
- # MDQeMiAdAEwAZQBnAGEAbABfAHAAbwBsAGkAYwB5AF8AcwB0AGEAdABlAG0AZQBu
- # AHQALiAdMA0GCSqGSIb3DQEBCwUAA4ICAQBn8oalmOBUeRou09h0ZyKbC5YR4WOS
- # mUKWfdJ5DJDBZV8uLD74w3LRbYP+vj/oCso7v0epo/Np22O/IjWll11lhJB9i0ZQ
- # VdgMknzSGksc8zxCi1LQsP1r4z4HLimb5j0bpdS1HXeUOeLpZMlEPXh6I/MTfaaQ
- # dION9MsmAkYqwooQu6SpBQyb7Wj6aC6VoCo/KmtYSWMfCWluWpiW5IP0wI/zRive
- # /DvQvTXvbiWu5a8n7dDd8w6vmSiXmE0OPQvyCInWH8MyGOLwxS3OW560STkKxgrC
- # xq2u5bLZ2xWIUUVYODJxJxp/sfQn+N4sOiBpmLJZiWhub6e3dMNABQamASooPoI/
- # E01mC8CzTfXhj38cbxV9Rad25UAqZaPDXVJihsMdYzaXht/a8/jyFqGaJ+HNpZfQ
- # 7l1jQeNbB5yHPgZ3BtEGsXUfFL5hYbXw3MYbBL7fQccOKO7eZS/sl/ahXJbYANah
- # Rr1Z85elCUtIEJmAH9AAKcWxm6U/RXceNcbSoqKfenoi+kiVH6v7RyOA9Z74v2u3
- # S5fi63V4GuzqN5l5GEv/1rMjaHXmr/r8i+sLgOppO6/8MO0ETI7f33VtY5E90Z1W
- # Tk+/gFcioXgRMiF670EKsT/7qMykXcGhiJtXcVZOSEXAQsmbdlsKgEhr/Xmfwb1t
- # bWrJUnMTDXpQzTGCBNkwggTVAgEBMIGVMH4xCzAJBgNVBAYTAlVTMRMwEQYDVQQI
- # EwpXYXNoaW5ndG9uMRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3Nv
- # ZnQgQ29ycG9yYXRpb24xKDAmBgNVBAMTH01pY3Jvc29mdCBDb2RlIFNpZ25pbmcg
- # UENBIDIwMTECEzMAAABkR4SUhttBGTgAAAAAAGQwCQYFKw4DAhoFAKCB7TAZBgkq
- # hkiG9w0BCQMxDAYKKwYBBAGCNwIBBDAcBgorBgEEAYI3AgELMQ4wDAYKKwYBBAGC
- # NwIBFTAjBgkqhkiG9w0BCQQxFgQUBdBqDyVXnqZzMp1OJYf3joRoaTAwgYwGCisG
- # AQQBgjcCAQwxfjB8oE6ATABNAGkAYwByAG8AcwBvAGYAdAAgAEEAegB1AHIAZQAg
- # AFMAaQB0AGUAIABSAGUAYwBvAHYAZQByAHkAIABQAHIAbwB2AGkAZABlAHKhKoAo
- # aHR0cDovL2dvLm1pY3Jvc29mdC5jb20vP2xpbmtpZD05ODI3Mzk1IDANBgkqhkiG
- # 9w0BAQEFAASCAQBTkB941lb+sBGlUfrKY0rio8iWs3zcjnJUshSKfimD2pJLYdHx
- # hiBkoWXz/nM5ruhKh9Iu62xvqNNTDLt5H2PxvjCrH0v3TpSaRp6QnxIzIKSgtUnT
- # /nxqpvT8QMbecpHXKARw+WcDlZBZWv5PZBoJBytoT+hRuYFOlUsVH7emimic9BlI
- # lW+yX8Ip9txXOOoQluBgkIJ59fpNGS+p3t/hxwaYWSiOD5J+Ug7IELRmg1PfiCMW
- # bg5hXYbvl18qaWFZIf3AXlY+22rYZvx0/hHwqLr/ULNDXF/ylMct2mxzzspN1u9P
- # cJGLbFcxDNaxxzxEEY6ZVup1ycgI59W+16USoYICKDCCAiQGCSqGSIb3DQEJBjGC
- # AhUwggIRAgEBMIGOMHcxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpXYXNoaW5ndG9u
- # MRAwDgYDVQQHEwdSZWRtb25kMR4wHAYDVQQKExVNaWNyb3NvZnQgQ29ycG9yYXRp
- # b24xITAfBgNVBAMTGE1pY3Jvc29mdCBUaW1lLVN0YW1wIFBDQQITMwAAAIgVUlHP
- # Fzd7VQAAAAAAiDAJBgUrDgMCGgUAoF0wGAYJKoZIhvcNAQkDMQsGCSqGSIb3DQEH
- # ATAcBgkqhkiG9w0BCQUxDxcNMTYwMzIyMTg0OTUwWjAjBgkqhkiG9w0BCQQxFgQU
- # Urmh+SC+zZzOARYhxu4k2PZFcIIwDQYJKoZIhvcNAQEFBQAEggEAW1kLw6IKNCm6
- # 1nvELi0fHxB898JSoh+eRpVzm+ffOmTEiRqT3S0VZB24U6/FUkMwbNsRcRXeQ4aP
- # RXHHlz2OtrHw/SCdNxFZQ6/4Kq/2a0VQRUtZKe4gZ+rQb7TX3axUf1A0FXTmZg0m
- # 9wX8uiww0tsdrfEVQiluLrLdypGhFppZbf3T1/OlC11udPPfzfRN3HrKBuuYpCKx
- # 8BzNYjCNRbGtsRjYTKQABuGtnTc+XrsLR6qPStI2sjS8qKVN155xu048VBK6FXLt
- # RnrqKUMM6fsMKnWQwjoBauyFe54/p22HKQskWNwmHOg1CSOC31z9XaPkL3FHT+U4
- # EUkEgDZz3A==
- # SIG # End signature block
-
- ```
+6. If your VMM server was in a Disconnected state, then download and run the [cleanup script](unregister-vmm-server-script.md) on the VMM server. Open PowerShell with the **Run as Administrator** option to change the execution policy for the default (LocalMachine) scope (see the sketch after this list). In the script, specify the ID of the VMM server you want to remove. The script removes registration and cloud pairing information from the server.
5. Run the cleanup script on any secondary VMM server.
6. Run the cleanup script on any other passive VMM cluster nodes that have the Provider installed.
7. Uninstall the Provider manually on the VMM server. If you have a cluster, remove it from all nodes.
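The following is a minimal sketch of running the cleanup script from an elevated PowerShell session. The script file name is a placeholder, and `RemoteSigned` is an assumed execution policy value rather than a documented requirement:

```powershell
# Run from a PowerShell session started with the "Run as Administrator" option.
# Relax the execution policy for the default (LocalMachine) scope so the
# downloaded cleanup script is allowed to run (RemoteSigned is an assumed value).
Set-ExecutionPolicy -Scope LocalMachine -ExecutionPolicy RemoteSigned

# Placeholder file name for the downloaded cleanup script.
.\Cleanup-VMMServer.ps1
```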
Hyper-V hosts that aren't managed by VMM are gathered into a Hyper-V site. Remov
1. In **Protected Items** > **Replicated Items**, right-click the machine > **Disable replication**.
2. On the **Disable replication** page, select one of these options:
   - **Disable replication and remove (recommended)** - This option removes the replicated item from Azure Site Recovery and stops replication for the machine. Replication configuration on the Configuration Server is cleaned up, and Site Recovery billing for this protected server is stopped. Note that this option can only be used when the Configuration Server is in a connected state.
- - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the Configuration Server **will not** be cleaned up.
+ - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the Configuration Server **will not** be cleaned up.
> [!NOTE]
> With both options, the Mobility service is not uninstalled from the protected servers; you need to uninstall it manually. If you plan to protect the server again using the same Configuration Server, you can skip uninstalling the Mobility service.
Hyper-V hosts that aren't managed by VMM are gathered into a Hyper-V site. Remov
1. In **Protected Items** > **Replicated Items**, right-click the machine > **Disable replication**.
2. In **Disable replication**, you can select the following options:
   - **Disable replication and remove (recommended)** - This option removes the replicated item from Azure Site Recovery and stops replication for the machine. Replication configuration on the on-premises virtual machine will be cleaned up, and Site Recovery billing for this protected server is stopped.
- - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the on-premises virtual machine **will not** be cleaned up.
+ - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the on-premises virtual machine **will not** be cleaned up.
> [!NOTE]
> If you chose the **Remove** option, run the following set of scripts to clean up the replication settings on the on-premises Hyper-V server.
Hyper-V hosts that aren't managed by VMM are gathered into a Hyper-V site. Remov
2. In **Disable replication**, select one of these options:
   - **Disable replication and remove (recommended)** - This option removes the replicated item from Azure Site Recovery and stops replication for the machine. Replication configuration on the on-premises virtual machine is cleaned up, and Site Recovery billing for this protected server is stopped.
- - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the on-premises virtual machine **will not** be cleaned up.
+ - **Remove** - This option is supposed to be used only if the source environment is deleted or not accessible (not connected). This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the on-premises virtual machine **will not** be cleaned up.
> [!NOTE]
> If you chose the **Remove** option, run the following scripts to clean up the replication settings on the on-premises VMM server.
Hyper-V hosts that aren't managed by VMM are gathered into a Hyper-V site. Remov
```powershell
Remove-VMReplication -VMName "SQLVM1"
- ```
+ ```
site-recovery Unregister Vmm Server Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/unregister-vmm-server-script.md
+
+ Title: Unregister a VMM server script
+description: This article describes the cleanup script on the VMM server
++++ Last updated : 03/25/2021++++
+# Cleanup script on a VMM server
+If your VMM server was in a Disconnected state, download and run the following cleanup script on the VMM server. The script removes registration and cloud pairing information from the server.
++
+```powershell
+pushd .
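+# Note: All SQL changes below run inside a single transaction; registration
+# registry keys are removed only after that transaction commits successfully.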
+try
+{
+ $error.Clear()
+ "This script will remove the old Hyper-V Recovery Manager related properties for this VMM. This can be run in below scenarios :"
+ "1. Complete VMM site clean up."
+ "2. VMM site clean up in case the associated VMM has become unresponsive. Input in this case will be the VMM ID of the unresponsive server."
+
+ $choice = Read-Host "Enter your choice "
+
+ if($choice -eq 1)
+ {
+ $vmmid = get-itemproperty 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup' -Name VMMID
+ $vmmid = $vmmid.VmmID
+
+ # $fullCleanup = 1 indicates that clean up all hyper-V recovery manager settings from this VMM.
+ $fullCleanup = 1
+
+ }
+ else
+ {
+ try
+ {
+ [GUID]$vmmid = Read-Host "Enter the VMMId for the unresponsive VMM server "
+ }
+ catch
+ {
+            Write-Host "Error occurred" -ForegroundColor "Red"
+ $error[0]
+ return
+ }
+
+ # $fullCleanup = 0 indicates that clean up only those clouds/VMs which are protecting/protected by the objects on the given VMMId.
+ $fullCleanup = 0
+ }
+
+ if($vmmid -ne "")
+ {
+
+ Write-Host "Proceeding to remove Hyper-V Recovery Manager related properties for this VMM with ID: " $vmmid
+ Write-Host "Before running the script ensure that the VMM service is running."
+ Write-Host "In a VMM cluster ensure that the Windows Cluster service is running and run the script on each node."
+ Write-Host "The VMM service (or the Cluster role) will be stopped when the script runs. After the script completes, restart the VMM or Cluster service."
+
+ $choice = Read-Host "Do you want to continue (Y/N) ?"
+ ""
+ if($choice.ToLower() -eq "y" -or $choice.ToLower() -eq "yes" )
+ {
+ $isCluster = $false
+ $path = 'HKLM:\SOFTWARE\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup'
+ $key = Get-Item -LiteralPath $path -ErrorAction SilentlyContinue
+ $name = 'HAVMMName'
+ if ($key)
+ {
+ $clusterName = $key.GetValue($name, $null)
+ if($clusterName -eq $null)
+ {
+ $serviceName = "SCVMMService"
+ $service = Get-Service -Name $serviceName
+ if ($service.Status -eq "Running")
+ {
+ "Stopping the VMM service..."
+ net stop $serviceName
+ }
+ else
+ {
+ if($service.Status -eq "Stopped")
+ {
+ "VMM service is not running."
+ }
+ else
+ {
+ "Could not stop the VMM service as it is starting or stopping. Please try again later"
+ return
+ }
+
+ }
+ }
+ else
+ {
+ $isCluster = $True
+ $isPrimaryNode = $false
+ $clusterName = $key.GetValue($name, $null)
+
+ Write-Host "Clustered VMM detected"
+
+ $clusService = Get-Service -Name ClusSvc
+ Add-Type -AssemblyName System.ServiceProcess
+ if ($clusService.Status -ne [System.ServiceProcess.ServiceControllerStatus]::Running)
+ {
+ Write-Host "Windows Cluster service is not running on this machine. Please start Windows cluster service before running this script"
+ return
+ }
+
+ $clusterResources = Get-ClusterResource -Cluster $clusterName
+ Write-Host "Searching for VMM cluster resource....."
+
+ foreach ($clusterResource in $clusterResources)
+ {
+ if ($clusterResource.Name -match 'VMM Service')
+ {
+ Write-Host "Found SCVMM Cluster Resource" $clusterResource
+ Write-Host "Cluster owner node is " $clusterResource.OwnerNode
+ $currentHostName = [System.Net.Dns]::GetHostName()
+ $clusterCheckpointList = get-clustercheckpoint -ResourceName $clusterResource.Name
+ Write-Host "Current node is " $currentHostName
+
+ if ([string]::Compare($clusterResource.OwnerNode, $currentHostName, $True) -eq 0)
+ {
+ $isPrimaryNode = $True
+ Write-Host "Current node owns VMM cluster resource"
+ Write-Host "Shutting VMM Cluster Resource down"
+ Stop-ClusterResource $clusterResource
+ }
+ else
+ {
+                            Write-Error "Current node does not own VMM cluster resource. Please run this script on $clusterResource.OwnerNode"
+ Exit
+ }
+
+ break
+ }
+ }
+ }
+ }
+ else
+ {
+            Write-Error "Failed to find registry keys associated with VMM"
+ return
+ }
+
+ ""
+ "Connect to SCVMM database using"
+ "1. Windows Authentication"
+ "2. SQL Server Authentication"
+
+ $mode = Read-Host "Enter your choice "
+ ""
+
+ cd 'hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\Sql'
+ $connectionString = get-itemproperty . -Name ConnectionString
+ $conn = New-Object System.Data.SqlClient.SqlConnection
+
+ if($mode -eq 1)
+ {
+ "Connecting to SQL via Windows Authentication..."
+ $conn.ConnectionString = $connectionString.ConnectionString
+ }
+ else
+ {
+ "Connecting to SQL via SQL Server Authentication..."
+
+ $credential = Get-Credential
+ $loginName = $credential.UserName
+ $password = $credential.password
+ $password.MakeReadOnly();
+ $conn.ConnectionString = $connectionString.ConnectionString.ToString().split(";",2)[1]
+ $sqlcred = New-Object System.Data.SqlClient.SqlCredential($loginName, $password)
+ $conn.Credential = $sqlcred
+ }
+
+ Write-Host "Connection string: " $conn.ConnectionString
+ $conn.Open()
+ $transaction = $conn.BeginTransaction("CleanupTransaction");
+
+ try
+ {
+ $sql = "SELECT TOP 1 [Id]
+ FROM [sysobjects]
+ WHERE [Name] = 'tbl_DR_ProtectionUnit'
+ AND [xType] = 'U'"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $rdr = $cmd.ExecuteReader()
+ $PUTableExists = $rdr.HasRows
+ $rdr.Close()
+ $SCVMM2012R2Detected = $false
+ if($PUTableExists)
+ {
+ $sql = "SELECT [Id]
+ FROM [tbl_DR_ProtectionUnit]"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $rdr = $cmd.ExecuteReader()
+ $SCVMM2012R2Detected = $rdr.HasRows
+ $rdr.Close()
+ }
+
+ ""
+ "Getting all clouds configured for protection..."
+
+ $sql = "SELECT [PrimaryCloudID],
+ [RecoveryCloudID],
+ [PrimaryCloudName],
+ [RecoveryCloudName]
+ FROM [tbl_Cloud_CloudDRPairing]
+ WHERE [PrimaryVMMID] = @VMMId
+ OR [RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.Transaction = $transaction
+ $da = New-Object System.Data.SqlClient.SqlDataAdapter
+ $da.SelectCommand = $cmd
+ $ds = New-Object System.Data.DataSet
+ $da.Fill($ds, "Clouds") | Out-Null
+
+ if($ds.Tables["Clouds"].Rows.Count -eq 0 )
+ {
+ "No clouds were found in protected or protecting status."
+ }
+ else
+ {
+ "Cloud pairing list populated."
+
+ ""
+ "Listing the clouds and their VMs..."
+
+ $vmIds = @()
+
+ foreach ($row in $ds.tables["Clouds"].rows)
+ {
+ ""
+ "'{0}' protected by '{1}'" -f $row.PrimaryCloudName.ToString(), $row.RecoveryCloudName.ToString()
+
+ $sql = "SELECT [ObjectId],
+ [Name]
+ FROM [tbl_WLC_VObject]
+ WHERE [CloudId] IN (@PrimaryCloudId,@RecoveryCloudId)"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@PrimaryCloudId",$row.PrimaryCloudId.ToString()) | Out-Null
+ $cmd.Parameters.AddWithValue("@RecoveryCloudId",$row.RecoveryCloudId.ToString()) | Out-Null
+ $rdr = $cmd.ExecuteReader()
+ if($rdr.HasRows)
+ {
+ "VM list:"
+ }
+ else
+ {
+ "No VMs found."
+ }
+ while($rdr.Read())
+ {
+ Write-Host $rdr["Name"].ToString()
+ $vmIds = $vmIds + $rdr["ObjectId"].ToString();
+ }
+
+ $rdr.Close()
+ }
++
+ if($vmIds.Count -eq 0)
+ {
+ "No protected VMs are present."
+ }
+ else
+ {
+ ""
+ "Removing recovery settings from all protected VMs..."
+
+ if($SCVMM2012R2Detected)
+ {
+ $sql = "UPDATE vm
+ SET [DRState] = 0,
+ [DRErrors] = NULL,
+ [ProtectionUnitId] = NULL
+ FROM
+ [tbl_WLC_VMInstance] vm
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON vm.[VMInstanceId] = vObj.[ObjectId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+ else
+ {
+ $sql = "UPDATE vm
+ SET [DRState] = 0,
+ [DRErrors] = NULL
+ FROM
+ [tbl_WLC_VMInstance] vm
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON vm.[VMInstanceId] = vObj.[ObjectId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
++
+ $sql = "UPDATE hwp
+ SET [IsDRProtectionRequired] = 0
+ FROM
+ [tbl_WLC_HWProfile] hwp
+ INNER JOIN [tbl_WLC_VObject] vObj
+ ON hwp.[HWProfileId] = vObj.[HWProfileId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON vObj.[CloudId] = cpair.[PrimaryCloudID]
+ OR vObj.[CloudId] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Recovery settings removed successfully for {0} VMs" -f $vmIds.Count
+ }
++
+ ""
+ "Removing recovery settings from all clouds..."
+ if($SCVMM2012R2Detected)
+ {
+ if($fullCleanup -eq 1)
+ {
+ $sql = "DELETE phost
+ FROM [tbl_DR_ProtectionUnit_HostRelation] phost
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON phost.[ProtectionUnitId] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
++
+ $sql = "UPDATE [tbl_Cloud_Cloud]
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0,
+ [DisasterRecoverySupported] = 0"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ }
+ else
+ {
+ $sql = "DELETE phost
+ FROM [tbl_DR_ProtectionUnit_HostRelation] phost
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON phost.[ProtectionUnitId] = csr.[ScopeId]
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON csr.[CloudId] = cpair.[primaryCloudId]
+ OR csr.[CloudId] = cpair.[recoveryCloudId]
+ WHERE csr.ScopeType = 214
+ AND cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ $sql = "UPDATE cloud
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0
+ FROM
+ [tbl_Cloud_Cloud] cloud
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON cloud.[ID] = cpair.[PrimaryCloudID]
+ OR cloud.[ID] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ }
+
+ }
+
+ # VMM 2012 SP1 detected.
+ else
+ {
+ $sql = "UPDATE cloud
+ SET [IsDRProtected] = 0,
+ [IsDRProvider] = 0
+ FROM
+ [tbl_Cloud_Cloud] cloud
+ INNER JOIN [tbl_Cloud_CloudDRPairing] cpair
+ ON cloud.[ID] = cpair.[PrimaryCloudID]
+ OR cloud.[ID] = cpair.[RecoveryCloudID]
+ WHERE cpair.[PrimaryVMMId] = @VMMId
+ OR cpair.[RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+
+ "Recovery settings removed successfully."
+
+ ""
+ "Deleting cloud pairing entities..."
+
+ $sql = "DELETE FROM [tbl_Cloud_CloudDRPairing]
+ WHERE [PrimaryVMMID] = @VMMId
+ OR [RecoveryVMMID] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Cloud pairing entities deleted successfully."
+ }
++
+ if ($SCVMM2012R2Detected)
+ {
+ "Removing SAN related entries"
+
+ $sql = "DELETE sanMap
+ FROM [tbl_DR_ProtectionUnit_StorageArray] sanMap
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON sanMap.[ProtectionUnitId] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "SAN related entities deleted successfully"
+ }
++
+ if($fullCleanup -eq 1)
+ {
+ # In case of full cleanup reset all VMs protection data.
+ ""
+ "Removing stale entries for VMs..."
+ if($SCVMM2012R2Detected)
+ {
+ $sql = "UPDATE [tbl_WLC_VMInstance]
+ SET [DRState] = 0,
+ [DRErrors] = NULL,
+ [ProtectionUnitId] = NULL"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
+ else
+ {
+ $sql = "UPDATE [tbl_WLC_VMInstance]
+ SET [DRState] = 0,
+ [DRErrors] = NULL"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ }
++
+ $sql = "UPDATE [tbl_WLC_HWProfile]
+ SET [IsDRProtectionRequired] = 0"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+        # Done removing stale entries
+
+ # Cloud publish settings and registration details are cleaned up even if there are no paired clouds.
+ if($SCVMM2012R2Detected)
+ {
+ ""
+ "Removing cloud publish settings..."
+
+ # Currently 214 scopeType points to only ProtectionProvider = 1,2 (HVR1 and HVR2).
+ # Once new providers are introduced appropriate filtering should be done before delete
+ # in below two queries.
+ $sql = "DELETE punit
+ FROM [tbl_DR_ProtectionUnit] punit
+ INNER JOIN [tbl_Cloud_CloudScopeRelation] csr
+ ON punit.[ID] = csr.[ScopeId]
+ WHERE csr.[ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
++
+ $sql = "DELETE FROM [tbl_Cloud_CloudScopeRelation]
+ WHERE [ScopeType] = 214"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.ExecuteNonQuery() | Out-Null
+ "Cloud publish settings removed successfully."
+ }
+
+ ""
+ "Un-registering VMM..."
+
+ $currentTime = Get-Date
+ $sql = "UPDATE [tbl_DR_VMMRegistrationDetails]
+ SET [DRSubscriptionId] = '',
+ [VMMFriendlyName] = '',
+ [DRAdapterInstalledVersion] = '',
+ [LastModifiedDate] = @LastModifiedTime,
+ [DRAuthCertBlob] = NULL,
+ [DRAuthCertThumbprint] = NULL,
+ [HostSigningCertBlob] = NULL,
+ [HostSigningCertThumbprint] = NULL,
+ [DRAdapterUpdateVersion] = '',
+ [OrgIdUserName] = ''
+ WHERE [VMMId] = @VMMId"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $param1 = $cmd.Parameters.AddWithValue("@LastModifiedTime", [System.Data.SqlDbType]::DateTime)
+ $param1.Value = Get-Date
+ $cmd.Parameters.AddWithValue("@VMMId",$vmmid) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Un-registration completed successfully."
+
+ ""
+ "Removing KEK..."
+
+ $kekid = "06cda9f3-2e3d-49ee-8e18-2d9bd1d74034"
+ $rolloverKekId = "fe0adfd7-309a-429a-b420-e8ed067338e6"
+ $sql = "DELETE FROM [tbl_VMM_CertificateStore]
+ WHERE [CertificateID] IN (@KEKId,@RolloverKekId)"
+ $cmd = New-Object System.Data.SqlClient.SqlCommand($sql,$conn)
+ $cmd.Transaction = $transaction
+ $cmd.Parameters.AddWithValue("@KEKId",$kekid) | Out-Null
+ $cmd.Parameters.AddWithValue("@RolloverKekId",$rolloverKekId) | Out-Null
+ $cmd.ExecuteNonQuery() | Out-Null
+
+ "Removing KEK completed successfully."
+
+ if($error.Count -eq 0)
+ {
+ $transaction.Commit()
+
+ ""
+ "Removing registration related registry keys."
+
+ $path = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration"
+ if((Test-Path "hklm:\$path" ))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($path, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $path"
+ Remove-ClusterCheckpoint -CheckpointName $path
+ }
+ }
+ }
+
+ Remove-Item -Path "hklm:\$path"
+
+ $proxyPath = "software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\ProxySettings"
+ if((Test-Path "hklm:\$proxyPath"))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($proxyPath, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $proxyPath"
+ Remove-ClusterCheckpoint -CheckpointName $proxyPath
+ }
+ }
+ }
+
+ Remove-Item -Path "hklm:\$proxyPath"
+ }
+
+ $backupPath = "software\Microsoft\Hyper-V Recovery Manager"
+ if((Test-Path "hklm:\$backupPath"))
+ {
+ if($isCluster -and $isPrimaryNode)
+ {
+ foreach($checkpoint in $clusterCheckpointList)
+ {
+ $compareResult = [string]::Compare($backupPath, $checkpoint.Name, $True)
+
+ if($compareResult -eq 0)
+ {
+ Write-Host "Removing Checkpointing for $backupPath"
+ Remove-ClusterCheckpoint -CheckpointName $backupPath
+ }
+ }
+ }
+ Remove-Item "hklm:\$backupPath" -recurse
+ }
+ "Registry keys removed successfully."
+ ""
+ }
+ else
+ {
+ "Could not delete registration key as hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\DRAdapter\Registration doesn't exist."
+ }
+
+ Write-Host "SUCCESS!!" -ForegroundColor "Green"
+ }
+ else
+ {
+ $transaction.Rollback()
+        Write-Error "Error occurred"
+ $error[0]
+ ""
+ Write-Error "FAILED"
+ "All updates to the VMM database have been rolled back."
+ }
+ }
+ else
+ {
+ if($error.Count -eq 0)
+ {
+ $transaction.Commit()
+ Write-Host "SUCCESS!!" -ForegroundColor "Green"
+ }
+ else
+ {
+ $transaction.Rollback()
+ Write-Error "FAILED"
+ }
+ }
+
+ $conn.Close()
+ }
+ catch
+ {
+ $transaction.Rollback()
+        Write-Host "Error occurred" -ForegroundColor "Red"
+ $error[0]
+ Write-Error "FAILED"
+ "All updates to the VMM database have been rolled back."
+ }
+ }
+ }
+ else
+ {
+ Write-Error "VMM Id is missing from hklm:\software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Setup or VMMId is not provided."
+        Write-Error "FAILED"
+ }
+}
+
+catch
+{
+    Write-Error "Error occurred"
+ $error[0]
+ Write-Error "FAILED"
+}
+
+if($isCluster)
+{
+ if($clusterResource.State -eq [Microsoft.FailoverClusters.PowerShell.ClusterResourceState]::Offline)
+ {
+ Write-Host "Cluster role is in stopped state."
+ }
+ else
+ {
+ Write-Host "Operation completed. Cluster role was not stopped."
+ }
+}
+else
+{
+ Write-Host "The VMM service is in stopped state."
+}
+
+popd
+# SIG # Begin signature block
+# MIId0wYJKoZIhvcNAQcCoIIdxDCCHcACAQExCzAJBgUrDgMCGgUAMGkGCisGAQQB
+# gjcCAQSgWzBZMDQGCisGAQQBgjcCAR4wJgIDAQAABBAfzDtgWUsITrck0sYpfvNR
+# AgEAAgEAAgEAAgEAAgEAMCEwCQYFKw4DAhoFAAQU3rRWHH5OCASnIAZsmgmowP/T
+# p6egghhkMIIEwzCCA6ugAwIBAgITMwAAAIgVUlHPFzd7VQAAAAAAiDANBgkqhkiG
+# 9w0BAQUFADB3MQswCQYDVQQGEwJVUzETMBEGA1UECBMKV2FzaGluZ3RvbjEQMA4G
+# A1UEBxMHUmVkbW9uZDEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9uMSEw
+# HwYDVQQDExhNaWNyb3NvZnQgVGltZS1TdGFtcCBQQ0EwHhcNMTUxMDA3MTgxNDAx
+# WhcNMTcwMTA3MTgxNDAxWjCBszELMAkGA1UEBhMCVVMxEzARBgNVBAgTCldhc2hp
+# bmd0b24xEDAOBgNVBAc