Updates from: 09/15/2021 03:07:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
| Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ |
| KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/product |
-| OneSpan Inc. | ![y] | ![n]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| OneSpan Inc. | ![n] | ![y]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
| Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Thetis | ![y] | ![y]| ![y]| ![y]| ![n] | https://thetis.io/collections/fido2 |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key |
To get started with passwordless in Azure AD, complete one of the following how-
### External Links * [FIDO Alliance](https://fidoalliance.org/)
-* [FIDO2 CTAP specification](https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html)
+* [FIDO2 CTAP specification](https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html)
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
In Azure AD registered device scenarios, the Azure AD WAM plugin is the primary
## What is the lifetime of a PRT?
-Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device.
+Once issued, a PRT is valid for 90 days and is continuously renewed as long as the user actively uses the device.
## How is a PRT used?
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
- If your global administrators have MFA enabled, the URL https://secure.aadcdn.microsoftonline-p.com *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
- If you plan to use Azure AD Connect Health for syncing, ensure that the prerequisites for Azure AD Connect Health are also met. For more information, see [Azure AD Connect Health agent installation](how-to-connect-health-agent-install.md).
-#### Harden your Azure AD Connect server
+### Harden your Azure AD Connect server
We recommend that you harden your Azure AD Connect server to decrease the security attack surface for this critical component of your IT environment. Following these recommendations will help to mitigate some security risks to your organization.

- Treat Azure AD Connect the same as a domain controller and other Tier 0 resources. For more information, see [Active Directory administrative tier model](/windows-server/identity/securing-privileged-access/securing-privileged-access-reference-material).
We recommend that you harden your Azure AD Connect server to decrease the securi
- Implement dedicated [privileged access workstations](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) for all personnel with privileged access to your organization's information systems.
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment.
- Follow [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts that monitor changes to the trust established between your IdP and Azure AD.
+- Enable multifactor authentication (MFA) for all users who have privileged access in Azure AD or in AD. One security issue with using Azure AD Connect is that an attacker who gains control of the Azure AD Connect server can manipulate users in Azure AD. MFA protects against this kind of account takeover: even if an attacker manages to reset a user's password by using Azure AD Connect, they still cannot bypass the second factor.
### SQL Server used by Azure AD Connect

* Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory Reference Connect Tls Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-tls-enforcement.md
# TLS 1.2 enforcement for Azure AD Connect
-Transport Layer Security (TLS) protocol version 1.2 is a cryptography protocol that is designed to provide secure communications. The TLS protocol aims primarily to provide privacy and data integrity. TLS has gone through many iterations with version 1.2 being defined in [RFC 5246](https://tools.ietf.org/html/rfc5246). Azure Active Directory Connect version 1.2.65.0 and later now fully support using only TLS 1.2 for communications with Azure. This document will provide information on how to force your Azure AD Connect server to use only TLS 1.2.
+Transport Layer Security (TLS) protocol version 1.2 is a cryptography protocol that is designed to provide secure communications. The TLS protocol aims primarily to provide privacy and data integrity. TLS has gone through many iterations, with version 1.2 being defined in [RFC 5246](https://tools.ietf.org/html/rfc5246). Azure Active Directory Connect version 1.2.65.0 and later now fully support using only TLS 1.2 for communications with Azure. This article provides information about how to force your Azure AD Connect server to use only TLS 1.2.
->[!NOTE]
->All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If TLS 1.2 is not enabled on your server you will need to enable this before you can deploy Azure AD Connect V2.0.
+> [!NOTE]
> All versions of Windows Server that are supported for Azure AD Connect V2.0 already default to TLS 1.2. If TLS 1.2 is not enabled on your server, you will need to enable it before you can deploy Azure AD Connect V2.0.
## Update the registry
-In order to force the Azure AD Connect server to only use TLS 1.2 the registry of the Windows server must be updated. Set the following registry keys on the Azure AD Connect server.
+In order to force the Azure AD Connect server to only use TLS 1.2, the registry of the Windows server must be updated. Set the following registry keys on the Azure AD Connect server.
->[!IMPORTANT]
->After you have updated the registry, you must restart the Windows server for the changes to take affect.
+> [!IMPORTANT]
> After you have updated the registry, you must restart the Windows server for the changes to take effect.
### Enable TLS 1.2
In order to force the Azure AD Connect server to only use TLS 1.2 the registry o
- [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client] - "DisabledByDefault"=dword:00000000
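If you prefer to apply these values manually rather than with the AdSyncTools cmdlets shown later in this section, the following PowerShell sketch sets the same SCHANNEL and .NET Framework registry values. It is an illustration only, not the cmdlet-based approach the article now recommends; run it in an elevated session and restart the server afterward.

```powershell
# Illustration only: enable TLS 1.2 for SCHANNEL and .NET on the Azure AD Connect server.
# Equivalent in intent to Set-ADSyncToolsTls12 -Enabled $true. Run elevated; restart the server afterward.
$schannelBase = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($role in 'Client', 'Server') {
    $path = Join-Path $schannelBase $role
    New-Item -Path $path -Force | Out-Null
    New-ItemProperty -Path $path -Name 'Enabled' -Value 1 -PropertyType 'DWord' -Force | Out-Null
    New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 0 -PropertyType 'DWord' -Force | Out-Null
}

# Make .NET Framework applications (such as the sync engine) follow the operating system TLS defaults.
$netKeys = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
           'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319'
foreach ($key in $netKeys) {
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name 'SystemDefaultTlsVersions' -Value 1 -PropertyType 'DWord' -Force | Out-Null
    New-ItemProperty -Path $key -Name 'SchUseStrongCrypto' -Value 1 -PropertyType 'DWord' -Force | Out-Null
}
```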
-### PowerShell script to enable TLS 1.2
-You can use the following PowerShell script to enable TLS 1.2 on your Azure AD Connect server.
+### PowerShell cmdlet to check TLS 1.2
+You can use the following [Get-ADSyncToolsTls12](reference-connect-adsynctools.md#get-adsynctoolstls12) PowerShell cmdlet to check the current TLS 1.2 settings on your Azure AD Connect server.
```powershell
- New-Item 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -name 'SystemDefaultTlsVersions' -value '1' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '1' -PropertyType 'DWord' -Force | Out-Null
-
- New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -name 'SystemDefaultTlsVersions' -value '1' -PropertyType 'DWord' -Force | Out-Null
+ Import-module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\AdSyncTools"
+ Get-ADSyncToolsTls12
+```
- New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '1' -PropertyType 'DWord' -Force | Out-Null
+### PowerShell cmdlet to enable TLS 1.2
+You can use the following [Set-ADSyncToolsTls12](reference-connect-adsynctools.md#set-adsynctoolstls12) PowerShell cmdlet to enforce TLS 1.2 on your Azure AD Connect server.
- New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'Enabled' -value '1' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null
-
- New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'Enabled' -value '1' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'DisabledByDefault' -value 0 -PropertyType 'DWord' -Force | Out-Null
- Write-Host 'TLS 1.2 has been enabled.'
+```powershell
+ Import-module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\AdSyncTools"
+ Set-ADSyncToolsTls12 -Enabled $true
```

### Disable TLS 1.2
You can use the following PowerShell script to enable TLS 1.2 on your Azure AD C
- [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client] - "DisabledByDefault"=dword:00000001
-### PowerShell script to disable TLS 1.2
-You can use the following PowerShell script to disable TLS 1.2 on your Azure AD Connect server.\
+### PowerShell cmdlet to disable TLS 1.2 (not recommended)
+You can use the following [Set-ADSyncToolsTls12](reference-connect-adsynctools.md#set-adsynctoolstls12) PowerShell cmdlet to disable TLS 1.2 on your Azure AD Connect server.
```powershell
- New-Item 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -name 'SystemDefaultTlsVersions' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-Item 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -name 'SystemDefaultTlsVersions' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -name 'SchUseStrongCrypto' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
-
- New-Item 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'Enabled' -value '0' -PropertyType 'DWord' -Force | Out-Null
-
- New-ItemProperty -path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -name 'DisabledByDefault' -value 1 -PropertyType 'DWord' -Force | Out-Null
- Write-Host 'TLS 1.2 has been disabled.'
+ Import-module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\AdSyncTools"
+ Set-ADSyncToolsTls12 -Enabled $false
```

## Next steps
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
a. Ensure that the userPrincipalName attribute has supported characters and requ
Azure Active Directory protects cloud-only objects from being updated through Azure AD Connect. While it is not possible to update these objects through Azure AD Connect, calls can be made directly to the Azure AD Connect cloud-side backend to attempt to change cloud-only objects. When doing so, the following errors can be returned:
+* This synchronization operation, Delete, is not valid. Contact Technical Support.
+* Unable to process this update as one or more cloud only users credential update is included in current request.
* Deleting a cloud only object is not supported. Please contact Microsoft Customer Support.
* The password change request cannot be executed since it contains changes to one or more cloud only user objects, which is not supported. Please contact Microsoft Customer Support.
active-directory Silverfort Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/silverfort-azure-ad-integration.md
Previously updated : 9/08/2021 Last updated : 9/13/2021
active-directory Assign Roles Different Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/assign-roles-different-scopes.md
Previously updated : 08/12/2021 Last updated : 09/13/2021
# Assign Azure AD roles at different scopes
-This article describes how to assign Azure AD roles at different scopes. To understanding scoping in Azure AD, refer to this doc - [Overview of RBAC in Azure AD](custom-overview.md). In general, you must be within the scope that you want the role assignment to be limited to. For example, if you want to assign Helpdesk Administrator role scoped over an [administrative unit](administrative-units.md), then you should go to **Azure AD > Administrative Units > {administrative unit} > Roles and administrators** and then do the role assignment. This will create a role assignment scoped to the administrative unit, not the entire tenant.
+In Azure Active Directory (Azure AD), you typically assign Azure AD roles so that they apply to the entire tenant. However, you can also assign Azure AD roles for different resources, such as administrative units or application registrations. For example, you could assign the Helpdesk Administrator role so that it just applies to a particular administrative unit and not the entire tenant. The resources that a role assignment applies to are also called the scope. This article describes how to assign Azure AD roles at tenant, administrative unit, and application registration scopes. For more information about scope, see [Overview of RBAC in Azure AD](custom-overview.md#scope).
## Prerequisites
Follow these instructions to assign a role using the Microsoft Graph API in [Gra
## Assign roles scoped to an administrative unit
+This section describes how to assign roles at an [administrative unit](administrative-units.md) scope.
+
### Azure portal

1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
Follow these instructions to assign a role at administrative unit scope using th
## Assign roles scoped to an app registration
+This section describes how to assign roles at an application registration scope.
+
### Azure portal

1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com).
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
Previously updated : 11/20/2020 Last updated : 09/13/2021
The following are the high-level steps that Azure AD uses to determine if you ha
## Role assignment
-A role assignment is an Azure AD resource that attaches a *role definition* to a *user* at a particular *scope* to grant access to Azure AD resources. Access is granted by creating a role assignment, and access is revoked by removing a role assignment. At its core, a role assignment consists of three elements:
+A role assignment is an Azure AD resource that attaches a *role definition* to a *security principal* at a particular *scope* to grant access to Azure AD resources. Access is granted by creating a role assignment, and access is revoked by removing a role assignment. At its core, a role assignment consists of three elements:
-- Azure AD user
-- Role definition
-- Resource scope
+- Security principal - An identity that gets the permissions. It could be a user, group, or a service principal.
+- Role definition - A collection of permissions.
+- Scope - A way to constrain where those permissions are applicable.
-You can [create role assignments](custom-create.md) using the Azure portal, Azure AD PowerShell, or Graph API. You can also [list the role assignments](view-assignments.md).
+You can [create role assignments](manage-roles-portal.md) using the Azure portal, Azure AD PowerShell, or Graph API. You can also [list the role assignments](view-assignments.md).
-The following diagram shows an example of a role assignment. In this example, Chris Green has been assigned the App registration administrator custom role at the scope of the Contoso Widget Builder app registration. The assignment grants Chris the permissions of the App registration administrator role for only this specific app registration.
+The following diagram shows an example of a role assignment. In this example, Chris has been assigned the App Registration Administrator custom role at the scope of the Contoso Widget Builder app registration. The assignment grants Chris the permissions of the App Registration Administrator role for only this specific app registration.
-![Role assignment is how permissions are enforced and has three parts](./media/custom-overview/rbac-overview.png)
+![Role assignment is how permissions are enforced and has three parts.](./media/custom-overview/rbac-overview.png)
### Security principal
-A security principal represents the user that is to be assigned access to Azure AD resources. A user is an individual who has a user profile in Azure Active Directory.
+A security principal represents a user, group, or service principal that is assigned access to Azure AD resources. A user is an individual who has a user profile in Azure Active Directory. A group is a new Microsoft 365 or security group with the isAssignableToRole property set to true (currently in preview). A service principal is an identity created for use with applications, hosted services, and automated tools to access Azure AD resources.
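As an aside, a role-assignable group of the kind described above can be created with the Microsoft Graph PowerShell SDK by setting the property at creation time. The sketch below is illustrative only; the display name and mail nickname are placeholders, and the call requires appropriate Graph permissions.

```powershell
# Sketch: create a role-assignable security group (isAssignableToRole can only be set at creation time).
# Assumes the Microsoft Graph PowerShell SDK; display name and mail nickname are placeholders.
Connect-MgGraph -Scopes 'Group.ReadWrite.All', 'RoleManagement.ReadWrite.Directory'

$group = New-MgGroup -DisplayName 'Helpdesk admins (role-assignable)' `
    -MailEnabled:$false `
    -MailNickname 'helpdesk-admins' `
    -SecurityEnabled `
    -IsAssignableToRole

$group.Id  # Use this object ID as the security principal in a role assignment.
```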
-### Role
+### Role definition
A role definition, or role, is a collection of permissions. A role definition lists the operations that can be performed on Azure AD resources, such as create, read, update, and delete. There are two types of roles in Azure AD:
A role definition, or role, is a collection of permissions. A role definition li
### Scope
-A scope is the restriction of permitted actions to a particular Azure AD resource as part of a role assignment. When you assign a role, you can specify a scope that limits the administrator's access to a specific resource. For example, if you want to grant a developer a custom role, but only to manage a specific application registration, you can include the specific application registration as a scope in the role assignment.
+A scope is a way to limit the permitted actions to a particular set of resources as part of a role assignment. For example, if you want to assign a custom role to a developer, but only to manage a specific application registration, you can include the specific application registration as a scope in the role assignment.
+
+When you assign a role, you specify one of the following types of scope:
+
+- Tenant
+- [Administrative unit](administrative-units.md)
+- Azure AD resource
+
+If you specify an Azure AD resource as a scope, it can be one of the following:
+
+- Azure AD groups
+- Enterprise applications
+- Application registrations
+
+For more information, see [Assign Azure AD roles at different scopes](assign-roles-different-scopes.md).
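To make the scope types concrete, the following Microsoft Graph PowerShell sketch assigns the Helpdesk Administrator role to the same principal at each of the three scopes by changing only the directory scope ID. All object IDs are placeholders, and the scope ID formats are assumptions drawn from the Graph role assignment model rather than text quoted from this article.

```powershell
# Sketch: the same role assignment at three different scopes; only DirectoryScopeId changes.
# Assumes the Microsoft Graph PowerShell SDK; every ID below is a placeholder.
Connect-MgGraph -Scopes 'RoleManagement.ReadWrite.Directory'

$role = Get-MgRoleManagementDirectoryRoleDefinition -Filter "displayName eq 'Helpdesk Administrator'"
$principalId       = '00000000-0000-0000-0000-000000000001'  # user, role-assignable group, or service principal
$adminUnitId       = '00000000-0000-0000-0000-000000000002'  # administrative unit object ID
$appRegistrationId = '00000000-0000-0000-0000-000000000003'  # app registration object ID

# Tenant scope
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $principalId -RoleDefinitionId $role.Id -DirectoryScopeId '/'

# Administrative unit scope
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $principalId -RoleDefinitionId $role.Id -DirectoryScopeId "/administrativeUnits/$adminUnitId"

# Azure AD resource scope (an app registration)
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $principalId -RoleDefinitionId $role.Id -DirectoryScopeId "/$appRegistrationId"
```

Removing an assignment with `Remove-MgRoleManagementDirectoryRoleAssignment` revokes the access, matching the grant-and-revoke model described earlier.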
## License requirements
Using built-in roles in Azure AD is free, while custom roles requires an Azure A
## Next steps
- [Understand Azure AD roles](concept-understand-roles.md)
-- Create custom role assignments using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)
-- [List role assignments](view-assignments.md)
+- [Assign Azure AD roles to users](manage-roles-portal.md)
+- [Create and assign a custom role](custom-create.md)
active-directory Facebook Work Accounts Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/facebook-work-accounts-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Facebook Work Accounts | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Facebook Work Accounts.
+ Last updated : 09/03/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Facebook Work Accounts
+
+In this tutorial, you'll learn how to integrate Facebook Work Accounts with Azure Active Directory (Azure AD). When you integrate Facebook Work Accounts with Azure AD, you can:
+
+* Control in Azure AD who has access to Facebook Work Accounts.
+* Enable your users to be automatically signed-in to Facebook Work Accounts with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Facebook Work Accounts single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Facebook Work Accounts supports **SP and IDP** initiated SSO.
+
+## Add Facebook Work Accounts from the gallery
+
+To configure the integration of Facebook Work Accounts into Azure AD, you need to add Facebook Work Accounts from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Facebook Work Accounts** in the search box.
+1. Select **Facebook Work Accounts** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Facebook Work Accounts
+
+Configure and test Azure AD SSO with Facebook Work Accounts using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Facebook Work Accounts.
+
+To configure and test Azure AD SSO with Facebook Work Accounts, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Facebook Work Accounts SSO](#configure-facebook-work-accounts-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Facebook Work Accounts test user](#create-facebook-work-accounts-test-user)** - to have a counterpart of B.Simon in Facebook Work Accounts that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Facebook Work Accounts** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://work.facebook.com/company/<ID>`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ ` https://work.facebook.com/work/saml.php?__cid=<ID>`
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://work.facebook.com`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Facebook Work Accounts Client support team](mailto:WorkplaceSupportPartnerships@fb.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Facebook Work Accounts** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Facebook Work Accounts.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Facebook Work Accounts**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Facebook Work Accounts SSO
+
+1. Log in to your Facebook Work Accounts company site as an administrator.
+
+1. Go to **Security** > **Single Sign-On**.
+
+1. Enable the **Single sign-on (SSO)** checkbox and click **+Add new SSO Provider**.
+
+ ![Screenshot shows the SSO Account.](./media/facebook-work-accounts-tutorial/security.png "SSO Account")
+
+1. On the **Single Sign-On (SSO) Setup** page, perform the following steps:
+
+ ![Screenshot shows the SSO Configuration.](./media/facebook-work-accounts-tutorial/certificate.png "Configuration")
+
+ 1. Enter a valid **Name of the SSO Provider**.
+
+ 1. In the **SAML URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. In the **SAML Issuer URL** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. Select the **Enable SAML logout redirection** checkbox, and in the **SAML Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
+
+ 1. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **SAML Certificate** textbox.
+
+ 1. Copy **Audience URL** value, paste this value into the **Identifier** textbox in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Copy **ACS (Assertion Consumer Service) URL** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. In the **Test SSO Setup** section, enter a valid email in the textbox and click **Test SSO**.
+
+ 1. Click **Save Changes**.
+
+### Create Facebook Work Accounts test user
+
+In this section, you create a user called Britta Simon in Facebook Work Accounts. Work with [Facebook Work Accounts support team](mailto:WorkplaceSupportPartnerships@fb.com) to add the users in the Facebook Work Accounts platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Facebook Work Accounts Sign on URL where you can initiate the login flow.
+
+* Go to Facebook Work Accounts Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Facebook Work Accounts for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Facebook Work Accounts tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Facebook Work Accounts for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Facebook Work Accounts you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fieldglass Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fieldglass-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| Reply URL |
|--|
- | https://www.fieldglass.net/<company name> |
- | https://<company name>.fgvms.com/<company name> |
+ | `https://www.fieldglass.net/<company name>` |
+ | `https://<company name>.fgvms.com/<company name>` |
|

> [!NOTE]
active-directory Softeon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/softeon-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Softeon WMS | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Softeon WMS | Microsoft Docs'
description: Learn how to configure single sign-on between Azure Active Directory and Softeon WMS.
Previously updated : 03/22/2019 Last updated : 09/13/2021
-# Tutorial: Azure Active Directory integration with Softeon WMS
+# Tutorial: Azure AD SSO integration with Softeon WMS
-In this tutorial, you learn how to integrate Softeon WMS with Azure Active Directory (Azure AD).
-Integrating Softeon WMS with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Softeon WMS with Azure Active Directory (Azure AD). When you integrate Softeon WMS with Azure AD, you can:
-* You can control in Azure AD who has access to Softeon WMS.
-* You can enable your users to be automatically signed-in to Softeon WMS (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Softeon WMS.
+* Enable your users to be automatically signed-in to Softeon WMS with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Softeon WMS, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Softeon WMS single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Softeon WMS single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Softeon WMS supports **SP** initiated SSO
-* Softeon WMS supports **Just In Time** user provisioning
+* Softeon WMS supports **SP** and **IDP** initiated SSO.
+* Softeon WMS supports **Just In Time** user provisioning.
-## Adding Softeon WMS from the gallery
+## Add Softeon WMS from the gallery
To configure the integration of Softeon WMS into Azure AD, you need to add Softeon WMS from the gallery to your list of managed SaaS apps.
-**To add Softeon WMS from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Softeon WMS**, select **Softeon WMS** from result panel then click **Add** button to add the application.
-
- ![Softeon WMS in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Softeon WMS** in the search box.
+1. Select **Softeon WMS** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you configure and test Azure AD single sign-on with Softeon WMS based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Softeon WMS needs to be established.
+## Configure and test Azure AD SSO for Softeon WMS
-To configure and test Azure AD single sign-on with Softeon WMS, you need to complete the following building blocks:
+Configure and test Azure AD SSO with Softeon WMS using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Softeon WMS.
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Softeon WMS Single Sign-On](#configure-softeon-wms-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Softeon WMS test user](#create-softeon-wms-test-user)** - to have a counterpart of Britta Simon in Softeon WMS that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure and test Azure AD SSO with Softeon WMS, perform the following steps:
-### Configure Azure AD single sign-on
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Softeon WMS SSO](#configure-softeon-wms-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Softeon WMS test user](#create-softeon-wms-test-user)** - to have a counterpart of B.Simon in Softeon WMS that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure Azure AD SSO
-To configure Azure AD single sign-on with Softeon WMS, perform the following steps:
+Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Softeon WMS** application integration page, select **Single sign-on**.
+1. In the Azure portal, on the **Softeon WMS** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Configure single sign-on link](common/select-sso.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<companyname>.softeon.com/sp`
- ![Softeon WMS Domain and URLs single sign-on information](common/sp-identifier.png)
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.softeon.com/<CUSTOM_URL>`
+
+5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ In the **Sign on URL** text box, type a URL using the following pattern:
`https://<companyname>.softeon.com/<instancename>`
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
- `https://<companyname>.softeon.com/sp`
- > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Softeon WMS Client support team](mailto:contact@softeon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Softeon WMS Client support team](mailto:contact@softeon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options as per your requirement and save it on your computer.
To configure Azure AD single sign-on with Softeon WMS, perform the following ste
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Softeon WMS Single Sign-On
-
-To configure single sign-on on **Softeon WMS** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Softeon WMS support team](mailto:contact@softeon.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Softeon WMS.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Softeon WMS.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Softeon WMS**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Softeon WMS**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Softeon WMS SSO
-2. In the applications list, select **Softeon WMS**.
-
- ![The Softeon WMS link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on **Softeon WMS** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Softeon WMS support team](mailto:contact@softeon.com). They set this setting to have the SAML SSO connection set properly on both sides.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Softeon WMS test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, a user called Britta Simon is created in Softeon WMS. Softeon WMS supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Softeon WMS, a new one is created after authentication.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Softeon WMS test user
+In this section, you test your Azure AD single sign-on configuration with following options.
-In this section, a user called Britta Simon is created in Softeon WMS. Softeon WMS supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Softeon WMS, a new one is created after authentication.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in Azure portal. This will redirect to Softeon WMS Sign on URL where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Softeon WMS Sign-on URL directly and initiate the login flow from there.
-When you click the Softeon WMS tile in the Access Panel, you should be automatically signed in to the Softeon WMS for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Softeon WMS for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Softeon WMS tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Softeon WMS for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Softeon WMS you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Vergesense Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/vergesense-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with VergeSense'
+description: Learn how to configure single sign-on between Azure Active Directory and VergeSense.
+ Last updated : 09/13/2021
+# Tutorial: Azure AD SSO integration with VergeSense
+
+In this tutorial, you'll learn how to integrate VergeSense with Azure Active Directory (Azure AD). When you integrate VergeSense with Azure AD, you can:
+
+* Control in Azure AD who has access to VergeSense.
+* Enable your users to be automatically signed-in to VergeSense with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* VergeSense single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* VergeSense supports **SP and IDP** initiated SSO.
+
+## Add VergeSense from the gallery
+
+To configure the integration of VergeSense into Azure AD, you need to add VergeSense from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **VergeSense** in the search box.
+1. Select **VergeSense** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for VergeSense
+
+Configure and test Azure AD SSO with VergeSense using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in VergeSense.
+
+To configure and test Azure AD SSO with VergeSense, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure VergeSense SSO](#configure-vergesense-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create VergeSense test user](#create-vergesense-test-user)** - to have a counterpart of B.Simon in VergeSense that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **VergeSense** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://cloud.vergesense.com`
+
+1. VergeSense application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the VergeSense application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up VergeSense** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to VergeSense.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **VergeSense**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure VergeSense SSO
+
+To configure single sign-on on **VergeSense** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [VergeSense support team](mailto:support@vergesense.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create VergeSense test user
+
+In this section, you create a user called Britta Simon in VergeSense. Work with [VergeSense support team](mailto:support@vergesense.com) to add the users in the VergeSense platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to VergeSense Sign on URL where you can initiate the login flow.
+
+* Go to VergeSense Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the VergeSense instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the VergeSense tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you are automatically signed in to the VergeSense instance for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure VergeSense you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
The following is a list of FedRAMP resources:
* [Azure Compliance Offerings](https://aka.ms/azurecompliance)
-* [FedRAMP High blueprint sample overview](../../governance/blueprints/samples/fedramp-h/index.md)
+* [FedRAMP High Azure Policy built-in initiative definition](../../governance/policy/samples/fedramp-high.md)
* [Microsoft 365 compliance center](/microsoft-365/compliance/microsoft-365-compliance-center)
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/azure-hpc-cache.md
+
+ Title: Integrate Azure HPC Cache with Azure Kubernetes Service
+description: Learn how to integrate HPC Cache with Azure Kubernetes Service
++++ Last updated : 09/08/2021+
+#Customer intent: As a cluster operator or developer, I want to learn how to integrate HPC Cache with AKS
++
+# Integrate Azure HPC Cache with Azure Kubernetes Service
+
+[Azure HPC Cache][hpc-cache] speeds access to your data for high-performance computing (HPC) tasks. By caching files in Azure, Azure HPC Cache brings the scalability of cloud computing to your existing workflow. This article shows you how to integrate Azure HPC Cache with Azure Kubernetes Service (AKS).
+
+## Before you begin
+
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+
+> [!IMPORTANT]
+> Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions].
+
+You also need to install and configure Azure CLI version 2.7 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. For more information about using the Azure CLI with HPC Cache, see the [Azure CLI prerequisites for Azure HPC Cache][hpc-cache-cli-prerequisites].
+
+You also need to install the `hpc-cache` Azure CLI extension:
+
+```azurecli
+az extension add --upgrade -n hpc-cache
+```
+
+## Set up Azure HPC Cache
+
+This section explains the steps to create and configure your HPC Cache.
+
+### 1. Find the AKS node resource group
+
+First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. You will create your HPC Cache in the same resource group.
+
+The following example gets the node resource group name for the AKS cluster named *myAKSCluster* in the resource group name *myResourceGroup*:
+
+```azurecli-interactive
+az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+```
+
+```output
+MC_myResourceGroup_myAKSCluster_eastus
+```
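+
+To avoid copying the name by hand, you can capture it in a shell variable and reuse it in the later steps. This is a sketch that assumes the same example cluster and resource group names:
+
+```azurecli
+# Store the node resource group name for use in the following sections
+RESOURCE_GROUP=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv)
+echo $RESOURCE_GROUP
+```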
+
+### 2. Create the cache subnet
+
+There are a number of [prerequisites][hpc-cache-prereqs] that must be satisfied before running an HPC Cache. Most importantly, the cache requires a *dedicated* subnet with at least 64 IP addresses available. This subnet must not host other VMs or containers, and it must be accessible from the AKS nodes.
+
+Create the dedicated HPC Cache subnet:
+
+```azurecli
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+SUBNET_NAME=MyHpcCacheSubnet
+az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name $SUBNET_NAME \
+ --address-prefixes 10.0.0.0/26
+```
+
+Register the *Microsoft.StorageCache* resource provider:
+
+```azurecli
+az provider register --namespace Microsoft.StorageCache --wait
+```
+
+> [!NOTE]
+> The resource provider registration can take some time to complete.
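+
+You can check the registration state at any time with an optional query like the following:
+
+```azurecli
+# Shows "Registered" once registration has completed
+az provider show --namespace Microsoft.StorageCache --query "registrationState" -o tsv
+```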
+
+### 3. Create the HPC Cache
+
+Create an HPC Cache in the node resource group from step 1 and in the same region as your AKS cluster. Use [az hpc-cache create][az-hpc-cache-create].
+
+> [!NOTE]
+> The HPC Cache takes approximately 20 minutes to be created.
+
+```azurecli
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+SUBNET_NAME=MyHpcCacheSubnet
+SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
+az hpc-cache create \
+ --resource-group $RESOURCE_GROUP \
+ --cache-size-gb "3072" \
+ --location eastus \
+ --subnet $SUBNET_ID \
+ --sku-name "Standard_2G" \
+ --name MyHpcCache
+```
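+
+Because creation takes a while, you can optionally poll the provisioning state before moving on. This is a sketch that assumes the cache name and node resource group used above:
+
+```azurecli
+# Returns "Succeeded" once the cache is ready
+az hpc-cache show --resource-group $RESOURCE_GROUP --name MyHpcCache --query "provisioningState" -o tsv
+```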
+
+### 4. Create a storage account and new container
+
+Create the Azure Storage account for the Blob storage container. The HPC Cache will cache content that is stored in this Blob storage container.
+
+> [!IMPORTANT]
+> You need to select a unique storage account name. Replace 'uniquestorageaccount' with something that will be unique for you.
+
+Check that the storage account name that you have selected is available.
+
+```azurecli
+STORAGE_ACCOUNT_NAME=uniquestorageaccount
+az storage account check-name --name $STORAGE_ACCOUNT_NAME
+```
+
+```azurecli
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+STORAGE_ACCOUNT_NAME=uniquestorageaccount
+az storage account create \
+ -n $STORAGE_ACCOUNT_NAME \
+ -g $RESOURCE_GROUP \
+ -l eastus \
+ --sku Standard_LRS
+```
+
+Create the Blob container within the storage account.
+
+```azurecli
+STORAGE_ACCOUNT_NAME=uniquestorageaccount
+STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
+AD_USER=$(az ad signed-in-user show --query objectId -o tsv)
+CONTAINER_NAME=mystoragecontainer
+az role assignment create --role "Storage Blob Data Contributor" --assignee $AD_USER --scope $STORAGE_ACCOUNT_ID
+az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --auth-mode login
+```
+
+Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container.
+
+```azurecli
+HPC_CACHE_USER="StorageCache Resource Provider"
+STORAGE_ACCOUNT_NAME=uniquestorageaccount
+STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
+HPC_CACHE_ID=$(az ad sp list --display-name "${HPC_CACHE_USER}" --query "[].objectId" -o tsv)
+az role assignment create --role "Storage Account Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
+az role assignment create --role "Storage Blob Data Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
+```
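+
+Optionally, verify that the role assignments exist on the storage account scope. This is a quick sanity check that assumes the variables defined above:
+
+```azurecli
+# Lists the role assignments created for your user and the HPC Cache service principal
+az role assignment list --scope $STORAGE_ACCOUNT_ID -o table
+```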
+
+### 5. Configure the storage target
+
+Add the blob container to your HPC Cache as a storage target.
+
+```azurecli
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+STORAGE_ACCOUNT_NAME=uniquestorageaccount
+STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
+CONTAINER_NAME=mystoragecontainer
+az hpc-cache blob-storage-target add \
+ --resource-group $RESOURCE_GROUP \
+ --cache-name MyHpcCache \
+ --name MyStorageTarget \
+ --storage-account $STORAGE_ACCOUNT_ID \
+ --container-name $CONTAINER_NAME \
+ --virtual-namespace-path "/myfilepath"
+```
+
+### 6. Set up client load balancing
+
+Create an Azure Private DNS zone for the client-facing IP addresses.
+
+```azurecli
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+PRIVATE_DNS_ZONE="myhpccache.local"
+az network private-dns zone create \
+ -g $RESOURCE_GROUP \
+ -n $PRIVATE_DNS_ZONE
+az network private-dns link vnet create \
+ -g $RESOURCE_GROUP \
+ -n MyDNSLink \
+ -z $PRIVATE_DNS_ZONE \
+ -v $VNET_NAME \
+ -e true
+```
+
+Create the round-robin DNS name.
+
+```azurecli
+DNS_NAME="server"
+PRIVATE_DNS_ZONE="myhpccache.local"
+RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+HPC_MOUNTS0=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[0]" -o tsv | tr --delete '\r')
+HPC_MOUNTS1=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[1]" -o tsv | tr --delete '\r')
+HPC_MOUNTS2=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[2]" -o tsv | tr --delete '\r')
+az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS0
+az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS1
+az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS2
+```
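+
+Optionally, confirm that the three mount addresses were added to the record set. This sketch assumes the names used above; the output shape can vary by CLI version:
+
+```azurecli
+# Lists the IP addresses behind the round-robin DNS name
+az network private-dns record-set a show \
+    --resource-group $RESOURCE_GROUP \
+    --zone-name $PRIVATE_DNS_ZONE \
+    --name $DNS_NAME \
+    --query "aRecords[].ipv4Address" -o tsv
+```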
+
+## Create the AKS persistent volume
+
+Create a `pv-nfs.yaml` file to define a [persistent volume][persistent-volume].
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-nfs
+spec:
+ capacity:
+ storage: 10000Gi
+ accessModes:
+ - ReadWriteMany
+ mountOptions:
+ - vers=3
+ nfs:
+ server: server.myhpccache.local
+ path: /
+```
+
+First, ensure that you have credentials for your Kubernetes cluster.
+
+```azurecli-interactive
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+```
+
+Update the *server* and *path* to the values of your NFS (Network File System) volume you created in the previous step. Create the persistent volume with the [kubectl apply][kubectl-apply] command:
+
+```console
+kubectl apply -f pv-nfs.yaml
+```
+
+Verify that the status of the persistent volume is **Available** using the [kubectl describe][kubectl-describe] command:
+
+```console
+kubectl describe pv pv-nfs
+```
+
+## Create the persistent volume claim
+
+Create a `pvc-nfs.yaml` defining a [persistent volume claim][persistent-volume-claim]. For example:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-nfs
+spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: ""
+ resources:
+ requests:
+ storage: 100Gi
+```
+
+Use the [kubectl apply][kubectl-apply] command to create the persistent volume claim:
+
+```console
+kubectl apply -f pvc-nfs.yaml
+```
+
+Verify that the status of the persistent volume claim is **Bound** using the [kubectl describe][kubectl-describe] command:
+
+```console
+kubectl describe pvc pvc-nfs
+```
+
+## Mount the HPC Cache with a pod
+
+Create a `nginx-nfs.yaml` file to define a pod that uses the persistent volume claim. For example:
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: nginx-nfs
+spec:
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: nginx-nfs
+ command:
+ - "/bin/sh"
+ - "-c"
+ - while true; do echo $(date) >> /mnt/azure/myfilepath/outfile; sleep 1; done
+ volumeMounts:
+ - name: disk01
+ mountPath: /mnt/azure
+ volumes:
+ - name: disk01
+ persistentVolumeClaim:
+ claimName: pvc-nfs
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command:
+
+```console
+kubectl apply -f nginx-nfs.yaml
+```
+
+Verify that the pod is running by using the [kubectl describe][kubectl-describe] command:
+
+```console
+kubectl describe pod nginx-nfs
+```
+
+Verify that your volume has been mounted in the pod by using [kubectl exec][kubectl-exec] to connect to the pod, then run `df -h` to check whether the volume is mounted.
+
+```console
+kubectl exec -it nginx-nfs -- sh
+```
+
+```output
+/ # df -h
+Filesystem Size Used Avail Use% Mounted on
+...
+server.myhpccache.local:/myfilepath 8.0E 0 8.0E 0% /mnt/azure/myfilepath
+...
+```
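+
+As a further optional check, you can read back the file that the example pod is writing, assuming the pod definition shown earlier:
+
+```console
+kubectl exec nginx-nfs -- tail /mnt/azure/myfilepath/outfile
+```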
+
+## Frequently asked questions (FAQ)
+
+### Running applications as non-root
+
+If you need to run an application as a non-root user, you may need to disable root squashing to chown a directory to another user. The non-root user will need to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation will be denied because the root user (UID 0) is being mapped to the anonymous user. More information about root squashing and client access policies is found [here][hpc-cache-access-policies].
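+
+For example, once root squash has been disabled in the client access policy, a root-capable shell in the pod can hand ownership of a directory to a non-root UID. This is a sketch only; the UID 1000 and the path are illustrative:
+
+```console
+kubectl exec -it nginx-nfs -- chown 1000:1000 /mnt/azure/myfilepath
+```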
+
+### Sending feedback
+
+We'd love to hear from you! Please send any feedback or questions to <aks-hpccache-feed@microsoft.com>.
+
+## Next steps
+
+* For more information on Azure HPC Cache, see [HPC Cache Overview][hpc-cache].
+* For more information on using NFS with AKS, see [Manually create and use an NFS (Network File System) Linux Server volume with Azure Kubernetes Service (AKS)][aks-nfs].
+
+[aks-quickstart-cli]: kubernetes-walkthrough.md
+[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
+[aks-nfs]: azure-nfs-volume.md
+[hpc-cache]: ../hpc-cache/hpc-cache-overview.md
+[hpc-cache-access-policies]: ../hpc-cache/access-policies.md
+[hpc-cache-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache&regions=all
+[hpc-cache-cli-prerequisites]: ../hpc-cache/az-cli-prerequisites.md
+[hpc-cache-prereqs]: ../hpc-cache/hpc-cache-prerequisites.md
+[az-hpc-cache-create]: /cli/azure/hpc-cache#az_hpc_cache_create
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[install-azure-cli]: /cli/azure/install-azure-cli
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
+[persistent-volume]: concepts-storage.md#persistent-volumes
+[persistent-volume-claim]: concepts-storage.md#persistent-volume-claims
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/ingress-tls.md
az network public-ip show --ids $PUBLICIPID --query "[dnsSettings.fqdn]" --outpu
``` #### Method 2: Set the DNS label using helm chart settings
-You can pass an annotation setting to your helm chard configuration by using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This can be set either when the ingress controller is first deployed, or it can be configured later.
+You can pass an annotation setting to your helm chart configuration by using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This can be set either when the ingress controller is first deployed, or it can be configured later.
The following example shows how to update this setting after the controller has been deployed. ```
You can also:
[install-azure-cli]: /cli/azure/install-azure-cli [aks-supported versions]: supported-kubernetes-versions.md [aks-integrated-acr]: cluster-container-registry-integration.md?tabs=azure-cli#create-a-new-aks-cluster-with-acr-integration
-[acr-helm]: ../container-registry/container-registry-helm-repos.md
+[acr-helm]: ../container-registry/container-registry-helm-repos.md
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
If your vault is configured with [network restrictions](../key-vault/general/ove
2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it.
+> [!NOTE]
+> Windows containers currently do not support Key Vault references over VNet Integration.
+ ### Access vaults with a user-assigned identity Some apps need to reference secrets at creation time, when a system-assigned identity would not yet be available. In these cases, a user-assigned identity can be created and given access to the vault in advance.
app-service App Service Sql Asp Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-sql-asp-github-actions.md
+
+ Title: "Tutorial: Use GitHub Actions to deploy to App Service and connect to a database"
+description: Deploy a database-backed ASP.NET core app to Azure with GitHub Actions
+ms.devlang: csharp
+ Last updated : 09/13/2021++++
+# Tutorial: Use GitHub Actions to deploy to App Service and connect to a database
+
+Learn how to set up a GitHub Actions workflow to deploy an ASP.NET Core application with an [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) backend. When you're finished, you have an ASP.NET app running in Azure and connected to SQL Database. You'll first use an [ARM template](/azure/azure-resource-manager/templates/overview) to create resources.
+
+This tutorial does not use containers. If you want to deploy to a containerized ASP.NET Core application, see [Use GitHub Actions to deploy to App Service for Containers and connect to a database](app-service-sql-github-actions.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template)
+> - Use a GitHub Actions workflow to build an ASP.NET Core application
++
+## Prerequisites
+
+To complete this tutorial, you'll need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
+ - A GitHub repository to store your Resource Manager templates and your workflow files. To create one, see [Creating a new repository](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-new-repository).
+
+## Download the sample
+
+[Fork the sample project](https://github.com/Azure-Samples/dotnetcore-sqldb-ghactions) in the Azure Samples repo.
+
+```
+https://github.com/Azure-Samples/dotnetcore-sqldb-ghactions
+```
+
+## Create the resource group
+
+Open the Azure Cloud Shell at https://shell.azure.com. Alternatively, you can use the Azure CLI if you've installed it locally. (For more information on Cloud Shell, see the [Cloud Shell Overview](../cloud-shell/overview.md).)
+
+```azurecli-interactive
+az group create --name {resource-group-name} --location {resource-group-location}
+```
+
+## Generate deployment credentials
+
+You'll need to authenticate with a service principal for the resource deployment script to work. You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az_ad_sp_create_for_rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+```azurecli-interactive
+ az ad sp create-for-rbac --name "{service-principal-name}" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}
+```
+
+In the example, replace the placeholders with your subscription ID and service principal name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later. For help, go to [configure deployment credentials](https://github.com/Azure/login#configure-deployment-credentials).
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+## Configure the GitHub secret for authentication
+
+In [GitHub](https://github.com/), browse to your repository, and select **Settings > Secrets > Add a new secret**.
+
+To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`.
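+
+If you prefer the command line, the GitHub CLI can create the same secret. This sketch assumes you saved the JSON output to a local file named `creds.json`:
+
+```bash
+# Create the AZURE_CREDENTIALS secret from the saved service principal output
+gh secret set AZURE_CREDENTIALS < creds.json
+```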
+
+## Add GitHub secrets for your build
+
+1. Create [two new secrets](https://docs.github.com/en/actions/reference/encrypted-secrets#creating-encrypted-secrets-for-a-repository) in your GitHub repository for `SQLADMIN_PASS` and `SQLADMIN_LOGIN`. Make sure you choose a complex password; otherwise, the create step for the SQL database server will fail. You won't be able to access this password again, so save it separately.
+
+2. Create an `AZURE_SUBSCRIPTION_ID` secret for your Azure subscription ID. If you do not know your subscription ID, use this command in the Azure Shell to find it. Copy the value in the `SubscriptionId` column.
+ ```azurecli
+ az account list -o table
+ ```
+
+## Create Azure resources
+
+The create Azure resources workflow runs an [ARM template](/azure/azure-resource-manager/templates/overview) to deploy resources to Azure. The workflow:
+
+- Checks out source code with the [Checkout action](https://github.com/marketplace/actions/checkout).
+- Logs into Azure with the [Azure Login action](https://github.com/marketplace/actions/azure-login) and gathers environment and Azure resource information.
+- Deploys resources with the [Azure Resource Manager Deploy action](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template).
+
+To run the create Azure resources workflow:
+
+1. Open the `infraworkflow.yml` file in `.github/workflows` within your repository.
+
+1. Update the value of `AZURE_RESOURCE_GROUP` to your resource group name.
+
+1. Set the input for `region` in your ARM Deploy actions to your region.
+ 1. Open `templates/azuredeploy.resourcegroup.parameters.json` and update the `rgLocation` property to your region.
+
+1. Go to **Actions** and select **Run workflow**.
+
+ :::image type="content" source="media/github-actions-workflows/github-actions-run-workflow.png" alt-text="Run the GitHub Actions workflow to add resources.":::
+
+1. Verify that your action ran successfully by checking for a green checkmark on the **Actions** page.
+
+ :::image type="content" source="media/github-actions-workflows/create-resources-success.png" alt-text="Successful run of create resources. ":::
+
+1. After you've created your resources, go to **Actions**, select **Create Azure Resources**, and disable the workflow.
+
+ :::image type="content" source="media/github-actions-workflows/disable-workflow-github-actions.png" alt-text="Disable the Create Azure Resources workflow.":::
+
+## Create a publish profile secret
+
+1. In the Azure portal, open your new staging App Service (Slot) created with the `Create Azure Resources` workflow.
+
+1. Select **Get Publish Profile**.
+
+1. Open the publish profile file in a text editor and copy its contents.
+
+1. Create a new GitHub secret for `AZURE_WEBAPP_PUBLISH_PROFILE`.
+
+## Build and deploy your app
+
+To run the build and deploy workflow:
+
+1. Open your `workflow.yaml` file in `.github/workflows` within your repository.
+
+1. Verify that the environment variables for `AZURE_RESOURCE_GROUP`, `AZURE_WEBAPP_NAME`, `SQLSERVER_NAME`, and `DATABASE_NAME` match the ones in `infraworkflow.yml`.
+
+1. Verify that your app deployed by visiting the URL in the Swap to production slot output. You should see a sample app, My TodoList App.
+
+## Clean up resources
+
+If you no longer need your sample project, delete your resource group in the Azure portal and delete your repository on GitHub.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/key-vault-certs.md
Application Gateway integration with Key Vault requires a three-step configurati
![Key vault certificates](media/key-vault-certs/ag-kv.png)
+## Investigating and resolving Key Vault errors
+
+Azure Application Gateway polls Key Vault for the renewed certificate version every 4 hours. It also logs any errors and is integrated with Azure Advisor to surface any misconfiguration as a recommendation. The recommendation details contain the exact issue and the associated Key Vault resource. You can use this information along with the [troubleshooting guide](../application-gateway/application-gateway-key-vault-common-errors.md) to quickly resolve such configuration errors.
+
+It is strongly recommended that you [configure Advisor alerts](../advisor/advisor-alerts-portal.md) to stay updated when such a problem is detected. To set an alert for this specific case, use the recommendation type "Resolve Azure Key Vault issue for your Application Gateway".
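+
+You can also review Advisor recommendations from the command line. The following is only a sketch; the exact filter you need may vary:
+
+```azurecli
+# List Advisor recommendations that mention Key Vault
+az advisor recommendation list --query "[?contains(shortDescription.problem, 'Key Vault')]" -o table
+```
+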
+ ## Next steps [Configure TLS termination with Key Vault certificates by using Azure PowerShell](configure-keyvault-ps.md)
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
Title: Manage role permissions and security in Azure Automation description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management for Azure resources.
-keywords: automation rbac, role based access control, azure rbac
Previously updated : 08/26/2021- Last updated : 09/10/2021+
+#Customer intent: As an administrator, I want to understand permissions so that I use the least necessary set of permissions.
-# Manage role permissions and security
+# Manage role permissions and security in Automation
Azure role-based access control (Azure RBAC) enables access management for Azure resources. Using [Azure RBAC](../role-based-access-control/overview.md), you can segregate duties within your team and grant only the amount of access to users, groups, and applications that they need to perform their jobs. You can grant role-based access to users using the Azure portal, Azure Command-Line tools, or Azure Management APIs.
An Automation Contributor can manage all resources in the Automation account exc
|Microsoft.Resources/deployments/*|Create and manage resource group deployments.| |Microsoft.Resources/subscriptions/resourceGroups/read|Read resource group deployments.| |Microsoft.Support/*|Create and manage support tickets.|
+|Microsoft.Insights/ActionGroups/*|Read/write/delete action groups.|
+|Microsoft.Insights/ActivityLogAlerts/*|Read/write/delete activity log alerts.|
+|Microsoft.Insights/diagnosticSettings/*|Read/write/delete diagnostic settings.|
+|Microsoft.Insights/MetricAlerts/*|Read/write/delete near real-time metric alerts.|
+|Microsoft.Insights/ScheduledQueryRules/*|Read/write/delete log alerts in Azure Monitor.|
+|Microsoft.OperationalInsights/workspaces/sharedKeys/action|List keys for a Log Analytics workspace|
> [!NOTE] > The Automation Contributor role can be used to access any resource using the managed identity, if appropriate permissions are set on the target resource, or using a Run As account. An Automation Run As account is, by default, configured with Contributor rights on the subscription. Follow the principle of least privilege and assign only the permissions required to execute your runbook. For example, if the Automation account is only required to start or stop an Azure VM, then the permissions assigned to the Run As account or managed identity need to allow only starting or stopping the VM. Similarly, if a runbook reads from blob storage, assign read-only permissions.
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runbooks.md
Title: Manage runbooks in Azure Automation
description: This article tells how to manage runbooks in Azure Automation. Previously updated : 05/03/2021 Last updated : 09/13/2021 + # Manage runbooks in Azure Automation You can add a runbook to Azure Automation by either creating a new one or importing an existing one from a file or the [Runbook Gallery](automation-runbook-gallery.md). This article provides information for managing a runbook imported from a file. You can find all the details of accessing community runbooks and modules in [Runbook and module galleries for Azure Automation](automation-runbook-gallery.md). ## Create a runbook
-Create a new runbook in Azure Automation using the Azure portal or Windows PowerShell. Once the runbook has been created, you can edit it using information in:
+Create a new runbook in Azure Automation using the Azure portal or PowerShell. Once the runbook has been created, you can edit it using information in:
* [Edit textual runbook in Azure Automation](automation-edit-textual-runbook.md)
-* [Learn key Windows PowerShell Workflow concepts for Automation runbooks](automation-powershell-workflow.md)
+* [Learn key PowerShell Workflow concepts for Automation runbooks](automation-powershell-workflow.md)
* [Manage Python 2 packages in Azure Automation](python-packages.md) * [Manage Python 3 packages (preview) in Azure Automation](python-3-packages.md)
You can use the following procedure to import a script file into Azure Automatio
> [!NOTE] > After you import a graphical runbook, you can convert it to another type. However, you can't convert a graphical runbook to a textual runbook.
-### Import a runbook with Windows PowerShell
+### Import a runbook with PowerShell
Use the [Import-AzAutomationRunbook](/powershell/module/az.automation/import-azautomationrunbook) cmdlet to import a script file as a draft runbook. If the runbook already exists, the import fails unless you use the `Force` parameter with the cmdlet.
You can track the progress of a runbook by using an external source, such as a s
Some runbooks behave strangely if they run across multiple jobs at the same time. In this case, it's important for a runbook to implement logic to determine if there is already a running job. Here's a basic example. ```powershell
-# Authenticate to Azure
-$connection = Get-AutomationConnection -Name AzureRunAsConnection
-$cnParams = @{
- ServicePrincipal = $true
- Tenant = $connection.TenantId
- ApplicationId = $connection.ApplicationId
- CertificateThumbprint = $connection.CertificateThumbprint
-}
-Connect-AzAccount @cnParams
-$AzureContext = Set-AzContext -SubscriptionId $connection.SubscriptionID
+# Connect to Azure with user-assigned managed identity
+Connect-AzAccount -Identity
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName <ResourceGroupName> -Name <UserAssignedManagedIdentity>
+Connect-AzAccount -Identity -AccountId $identity.ClientId
+
+$AzureContext = Set-AzContext -SubscriptionId ($identity.id -split "/")[2]
# Check for already running or new runbooks $runbookName = "RunbookName"
if (($jobs.Status -contains 'Running' -and $runningCount -gt 1 ) -or ($jobs.Stat
# Insert Your code here } ```
-Alternatively, you can use PowerShell's splatting feature to pass the connection information to `Connect-AzAccount`. In that case, the first few lines of the previous sample would look like this.
-
-```powershell
-# Authenticate to Azure
-$connection = Get-AutomationConnection -Name AzureRunAsConnection
-Connect-AzAccount @connection
-$AzureContext = Set-AzContext -SubscriptionId $connection.SubscriptionID
-```
-
-For more information, see [about splatting](/powershell/module/microsoft.powershell.core/about/about_splatting).
## Handle transient errors in a time-dependent script
Your runbook must be able to work with [subscriptions](automation-runbook-execut
```powershell Disable-AzContextAutosave -Scope Process
-$connection = Get-AutomationConnection -Name AzureRunAsConnection
-$cnParams = @{
- ServicePrincipal = $true
- Tenant = $connection.TenantId
- ApplicationId = $connection.ApplicationId
- CertificateThumbprint = $connection.CertificateThumbprint
-}
-Connect-AzAccount @cnParams
+# Connect to Azure with user-assigned managed identity
+Connect-AzAccount -Identity
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName <ResourceGroupName> -Name <UserAssignedManagedIdentity>
+Connect-AzAccount -Identity -AccountId $identity.ClientId
$childRunbookName = 'ChildRunbookDemo' $accountName = 'MyAutomationAccount'
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/faq.md
For Azure Arc-enabled Kubernetes, since Azure Resource Manager manages your conf
This feature applies baseline configurations (like network policies, role bindings, and pod security policies) across the entire Kubernetes cluster inventory to meet compliance and governance requirements.
+## Does Azure Arc-enabled Kubernetes store any customer data outside of the cluster's region?
+
+The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
+ ## Next steps * Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 09/01/2021 Last updated : 09/14/2021
The following metadata information is requested by the agent from Azure:
## Download agents
-You can download the Azure Connected Machine agent package for Windows and Linux from the locations listed below.
+You can download the Azure Connected Machine agent package for Windows and Linux from the locations listed below.
* [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center.
Azure Arc-enabled servers depend on the following Azure resource providers in yo
* **Microsoft.HybridCompute** * **Microsoft.GuestConfiguration**
-If they are not registered, you can register them using the following commands:
+If they are not registered, you can register them using the following commands:
-Azure PowerShell:
+Azure PowerShell:
```azurepowershell-interactive Login-AzAccount
-Set-AzContext -SubscriptionId [subscription you want to onboard]
-Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
-Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
+Set-AzContext -SubscriptionId [subscription you want to onboard]
+Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute
+Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration
```
-Azure CLI:
+Azure CLI:
```azurecli-interactive
-az account set --subscription "{Your Subscription Name}"
-az provider register --namespace 'Microsoft.HybridCompute'
-az provider register --namespace 'Microsoft.GuestConfiguration'
+az account set --subscription "{Your Subscription Name}"
+az provider register --namespace 'Microsoft.HybridCompute'
+az provider register --namespace 'Microsoft.GuestConfiguration'
``` You can also register the resource providers in the Azure portal by following the steps under [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
Connecting machines in your hybrid environment directly with Azure can be accomp
|--|-| | Interactively | Manually install the agent on a single or small number of machines following the steps in [Connect machines from Azure portal](onboard-portal.md).<br> From the Azure portal, you can generate a script and execute it on the machine to automate the install and configuration steps of the agent.| | At scale | Install and configure the agent for multiple machines following the [Connect machines using a Service Principal](onboard-service-principal.md).<br> This method creates a service principal to connect machines non-interactively.|
+| At scale | Install and configure the agent for multiple machines following the method [Connect hybrid machines to Azure from Automation Update Management](onboard-update-management-machines.md).<br> This method creates a service principal, and installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
| At scale | Install and configure the agent for multiple machines following the method [Using Windows PowerShell DSC](onboard-dsc.md).<br> This method uses a service principal to connect machines non-interactively with PowerShell DSC. | ## Connected Machine agent technical overview
Azure Arc-enabled servers Connected Machine agent is designed to manage agent an
- The Extension Service agent is limited to use up to 5% of the CPU. - This only applies to install/uninstall/upgrade operations. Once installed, extensions are responsible for their own resource utilization and the 5% CPU limit does not apply.
- - The Log Analytics agent and Azure Monitor Agent is allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
+ - The Log Analytics agent and Azure Monitor Agent are allowed to use up to 60% of the CPU during their install/upgrade/uninstall operations on Red Hat Linux, CentOS, and other enterprise Linux variants. The limit is higher for this combination of extensions and operating systems to accommodate the performance impact of [SELinux](https://www.redhat.com/en/topics/linux/what-is-selinux) on these systems.
## Next steps * To begin evaluating Azure Arc-enabled servers, follow the article [Connect hybrid machines with Azure Arc-enabled servers](learn/quick-enable-hybrid-vm.md).
-* Before deploying the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
+* Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md).
* Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/onboard-update-management-machines.md
+
+ Title: Connect machines from Azure Automation Update Management
+description: In this article, you learn how to connect hybrid machines to Azure Arc managed by Automation Update Management.
Last updated : 09/14/2021+++
+# Connect hybrid machines to Azure from Automation Update Management
+
+You can enable Azure Arc-enabled servers for one or more of your Windows or Linux virtual machines or physical servers hosted on-premises or in other cloud environments that are managed with Azure Automation Update Management. This onboarding process automates the download and installation of the [Connected Machine agent](agent-overview.md). To connect the machines to Azure Arc-enabled servers, an Azure Active Directory [service principal](../../active-directory/develop/app-objects-and-service-principals.md) is used instead of your privileged identity to [interactively connect](onboard-portal.md) the machine. This service principal is created automatically as part of the onboarding process for these machines.
+
+Before you get started, be sure to review the [prerequisites](agent-overview.md#prerequisites) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## How it works
+
+When the onboarding process is launched, an Active Directory [service principal](../../active-directory/fundamentals/service-accounts-principal.md) is created in the tenant.
+
+To install and configure the Connected Machine agent on the target machine, a master runbook named **Add-AzureConnectedMachines** runs in the Azure sandbox. Based on the operating system detected on the machine, the master runbook calls a child runbook named **Add-AzureConnectedMachineWindows** or **Add-AzureConnectedMachineLinux** that runs under the system [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md) role directly on the machine. Runbook job output is written to the job history, and you can view their [status summary](../../automation/automation-runbook-execution.md#job-statuses) or drill into details of a specific runbook job in the [Azure portal](../../automation/manage-runbooks.md#view-statuses-in-the-azure-portal) or using [Azure PowerShell](../../automation/manage-runbooks.md#retrieve-job-statuses-using-powershell). Execution of runbooks in Azure Automation writes details in an activity log for the Automation account. For details of using the log, see [Retrieve details from Activity log](../../automation/manage-runbooks.md#retrieve-details-from-activity-log).
+
+The final step establishes the connection to Azure Arc with the `azcmagent` command, using the service principal to register the machine as a resource in Azure.
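+
+For reference, the connection command the runbooks run resembles the following sketch. The values shown are placeholders; the actual arguments, including the service principal credentials, are generated by the onboarding process:
+
+```bash
+# Register the machine with Azure Arc using a service principal (placeholder values)
+azcmagent connect \
+  --service-principal-id "<appId>" \
+  --service-principal-secret "<secret>" \
+  --tenant-id "<tenantId>" \
+  --subscription-id "<subscriptionId>" \
+  --resource-group "<resourceGroupName>" \
+  --location "<region>"
+```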
+
+## Prerequisites
+
+This method requires that you are a member of the [Automation Job Operator](../../automation/automation-role-based-access-control.md#automation-job-operator) role or higher so you can create runbook jobs in the Automation account.
+
+If you have enabled Azure Policy to [manage runbook execution](../../automation/enforce-job-execution-hybrid-worker.md) and enforce targeting of runbook execution against a Hybrid Runbook Worker group, this policy must be disabled. Otherwise, the runbook jobs that onboard the machine(s) to Arc-enabled servers will fail.
+
+## Add machines from the Azure portal
+
+Perform the following steps to configure the hybrid machine with Arc-enabled servers. The server or machine must be powered on and online in order for the process to complete successfully.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the **Servers - Azure Arc** page, and then select **Add** at the upper left.
+
+1. On the **Select a method** page, select the **Add managed servers from Update Management (preview)** tile, and then select **Add servers**.
+
+1. On the **Basics** page, configure the following:
+
+ 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from.
+ 1. In the **Region** drop-down list, select the Azure region to store the servers metadata.
+ 1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`.
+ 1. Select **Next: Machines**.
+
+1. On the **Machines** page, select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers.
+
+   After specifying the Automation account, the list below returns non-Azure machines managed by Update Management for that Automation account. Both Windows and Linux machines are listed; for each machine you want to onboard, select **add**.
+
+   You can review your selection by selecting **Review selection**. To remove a machine, select **remove** under the **Action** column.
+
+ Once you confirm your selection, select **Next: Tags**.
+
+1. On the **Tags** page, specify one or more **Name**/**Value** pairs to support your standards. Select **Next: Review + add**.
+
+1. On the **Review + add** page, review the summary information, and then select **Add machines**. If you still need to make changes, select **Previous**.
+
+## Verify the connection with Azure Arc
+
+After the agent is installed and configured to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the server has successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+
+![A successful server connection](./media/onboard-portal/arc-for-servers-successful-onboard.png)
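+
+You can also list your Arc-enabled servers from the Azure CLI, assuming the `connectedmachine` extension is installed:
+
+```azurecli
+# Lists Azure Arc-enabled servers in the resource group
+az connectedmachine list --resource-group <resource-group-name> --output table
+```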
+
+## Next steps
+
+- Troubleshooting information can be found in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md), for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verify the machine is reporting to the expected Log Analytics workspace, enable monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. Previously updated : 07/20/2021 Last updated : 09/14/2021 # Use Azure Private Link to securely connect networks to Azure Arc
The Azure Arc-enabled servers Private Link Scope object has a number of limits y
- All on-premises machines need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md) -- The Azure Arc-enabled machine or server, Azure Arc Private Link Scope, and virtual network must be in the same Azure region.
+- The Azure Arc-enabled server and Azure Arc Private Link Scope must be in the same Azure region. The Private Endpoint and the virtual network must also be in the same Azure region, but this region can be different from that of your Azure Arc Private Link Scope and Arc-enabled server.
-- Traffic to Azure Active Directory and Azure Resource Manager service tags must be allowed through your on-premises network firewall during the preview.
+- Traffic to Azure Active Directory and Azure Resource Manager service tags must be allowed through your on-premises network firewall during the preview.
- Other Azure services that you will use, for example Azure Monitor, requires their own private endpoints in your virtual network.
azure-functions Functions Bindings Cosmosdb V2 Input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md
Title: Azure Cosmos DB input binding for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB input binding in Azure Functions. Previously updated : 02/24/2020 Last updated : 09/01/2021
This section contains the following examples:
* [HTTP trigger, look up ID from route data, using SqlQuery](#http-trigger-look-up-id-from-route-data-using-sqlquery-c) * [HTTP trigger, get multiple docs, using SqlQuery](#http-trigger-get-multiple-docs-using-sqlquery-c) * [HTTP trigger, get multiple docs, using DocumentClient](#http-trigger-get-multiple-docs-using-documentclient-c)
+* [HTTP trigger, get multiple docs, using CosmosClient (v4 extension)](#http-trigger-get-multiple-docs-using-cosmosclient-c)
The examples refer to a simple `ToDoItem` type:
namespace CosmosDBSamplesV2
} ```
+<a id="http-trigger-get-multiple-docs-using-cosmosclient-c"></a>
+
+### HTTP trigger, get multiple docs, using CosmosClient
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a list of documents. The function is triggered by an HTTP request. The code uses a `CosmosClient` instance provided by the Azure Cosmos DB binding, available in [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher), to read a list of documents. The `CosmosClient` instance could also be used for write operations.
+
+```csharp
+using System.Linq;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Cosmos;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Extensions.Logging;
+
+namespace CosmosDBSamplesV2
+{
+ public static class DocsByUsingCosmosClient
+ {
+ [FunctionName("DocsByUsingCosmosClient")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post",
+ Route = null)]HttpRequest req,
+ [CosmosDB(
+ databaseName: "ToDoItems",
+ containerName: "Items",
+ Connection = "CosmosDBConnection")] CosmosClient client,
+ ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ var searchterm = req.Query["searchterm"].ToString();
+ if (string.IsNullOrWhiteSpace(searchterm))
+ {
+ return (ActionResult)new NotFoundResult();
+ }
+
+ Container container = client.GetDatabase("ToDoItems").GetContainer("Items");
+
+ log.LogInformation($"Searching for: {searchterm}");
+
+ QueryDefinition queryDefinition = new QueryDefinition(
+ "SELECT * FROM items i WHERE CONTAINS(i.Description, @searchterm)")
+ .WithParameter("@searchterm", searchterm);
+ using (FeedIterator<ToDoItem> resultSet = container.GetItemQueryIterator<ToDoItem>(queryDefinition))
+ {
+ while (resultSet.HasMoreResults)
+ {
+ FeedResponse<ToDoItem> response = await resultSet.ReadNextAsync();
+ ToDoItem item = response.First();
+ log.LogInformation(item.Description);
+ }
+ }
+
+ return new OkResult();
+ }
+ }
+}
+```
+ # [C# Script](#tab/csharp-script) This section contains the following examples:
def main(queuemsg: func.QueueMessage, documents: func.DocumentList):
In [C# class libraries](functions-dotnet-class-library.md), use the [CosmosDB](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/CosmosDBAttribute.cs) attribute.
-The attribute's constructor takes the database name and collection name. For information about those settings and other properties that you can configure, see [the following configuration section](#configuration).
+The attribute's constructor takes the database name and collection name. In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) some settings and properties have been removed or renamed. For information about settings and other properties that you can configure for all versions, see [the following configuration section](#configuration).
# [C# Script](#tab/csharp-script)
The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `in`. | |**name** | n/a | Name of the binding parameter that represents the document in the function. | |**databaseName** |**DatabaseName** |The database containing the document. |
-|**collectionName** |**CollectionName** | The name of the collection that contains the document. |
+|**collectionName** <br> or <br> **containerName**|**CollectionName** <br> or <br> **ContainerName**| The name of the collection that contains the document. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `ContainerName`. |
|**id** | **Id** | The ID of the document to retrieve. This property supports [binding expressions](./functions-bindings-expressions-patterns.md). Don't set both the `id` and **sqlQuery** properties. If you don't set either one, the entire collection is retrieved. | |**sqlQuery** |**SqlQuery** | An Azure Cosmos DB SQL query used for retrieving multiple documents. The property supports runtime bindings, as in this example: `SELECT * FROM c where c.departmentId = {departmentId}`. Don't set both the `id` and `sqlQuery` properties. If you don't set either one, the entire collection is retrieved.|
-|**connectionStringSetting** |**ConnectionStringSetting**|The name of the app setting containing your Azure Cosmos DB connection string. |
+|**connectionStringSetting** <br> or <br> **connection** |**ConnectionStringSetting** <br> or <br> **Connection**|The name of the app setting containing your Azure Cosmos DB connection string. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `Connection`. The value is the name of an app setting that either contains the connection string or contains a configuration section or prefix which defines the connection. See [Connections](./functions-reference.md#connections). |
|**partitionKey**|**PartitionKey**|Specifies the partition key value for the lookup. May include binding parameters. It is required for lookups in [partitioned](../cosmos-db/partitioning-overview.md#logical-partitions) collections.| |**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
azure-functions Functions Bindings Cosmosdb V2 Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md
Title: Azure Cosmos DB output binding for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB output binding in Azure Functions. Previously updated : 02/24/2020 Last updated : 09/01/2021
For information on setup and configuration details, see the [overview](./functio
This section contains the following examples: * [Queue trigger, write one doc](#queue-trigger-write-one-doc-c)
+* [Queue trigger, write one doc (v4 extension)](#queue-trigger-write-one-doc-v4-c)
* [Queue trigger, write docs using IAsyncCollector](#queue-trigger-write-docs-using-iasynccollector-c) The examples refer to a simple `ToDoItem` type:
namespace CosmosDBSamplesV2
} ```
+<a id="queue-trigger-write-one-doc-v4-c"></a>
+
+### Queue trigger, write one doc (v4 extension)
+
+Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) or higher will have different attribute properties, which are shown below. The following example shows a [C# function](functions-dotnet-class-library.md) that adds a document to a database, using data provided in a message from Queue storage.
+
+```cs
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Extensions.Logging;
+using System;
+
+namespace CosmosDBSamplesV2
+{
+ public static class WriteOneDoc
+ {
+ [FunctionName("WriteOneDoc")]
+ public static void Run(
+ [QueueTrigger("todoqueueforwrite")] string queueMessage,
+ [CosmosDB(
+ databaseName: "ToDoItems",
+ containerName: "Items",
+ Connection = "CosmosDBConnection")]out dynamic document,
+ ILogger log)
+ {
+ document = new { Description = queueMessage, id = Guid.NewGuid() };
+
+ log.LogInformation($"C# Queue trigger function inserted one row");
+ log.LogInformation($"Description={queueMessage}");
+ }
+ }
+}
+```
+ <a id="queue-trigger-write-docs-using-iasynccollector-c"></a> ### Queue trigger, write docs using IAsyncCollector
The attribute's constructor takes the database name and collection name. For inf
} ```
+In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) some settings and properties have been removed or renamed. For detailed information about the changes, see [Output - configuration](#configuration). Here's a `CosmosDB` attribute example in a method signature:
+
+```csharp
+ [FunctionName("QueueToCosmosDB")]
+ public static void Run(
+ [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
+ [CosmosDB("database", "container", Connection = "CosmosDBConnectionSetting")] out dynamic document)
+ {
+ ...
+ }
+```
+ # [C# Script](#tab/csharp-script) Attributes are not supported by C# Script.
The following table explains the binding configuration properties that you set i
|**direction** | n/a | Must be set to `out`. | |**name** | n/a | Name of the binding parameter that represents the document in the function. | |**databaseName** | **DatabaseName**|The database containing the collection where the document is created. |
-|**collectionName** |**CollectionName** | The name of the collection where the document is created. |
+|**collectionName** <br> or <br> **containerName** |**CollectionName** <br> or <br> **ContainerName** | The name of the collection where the document is created. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `ContainerName`. |
|**createIfNotExists** |**CreateIfNotExists** | A boolean value to indicate whether the collection is created when it doesn't exist. The default is *false* because new collections are created with reserved throughput, which has cost implications. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/). | |**partitionKey**|**PartitionKey** |When `CreateIfNotExists` is true, it defines the partition key path for the created collection.|
-|**collectionThroughput**|**CollectionThroughput**| When `CreateIfNotExists` is true, it defines the [throughput](../cosmos-db/set-throughput.md) of the created collection.|
-|**connectionStringSetting** |**ConnectionStringSetting** |The name of the app setting containing your Azure Cosmos DB connection string. |
+|**collectionThroughput** <br> or <br> **containerThroughput**|**CollectionThroughput** <br> or <br> **ContainerThroughput**| When `CreateIfNotExists` is true, it defines the [throughput](../cosmos-db/set-throughput.md) of the created collection. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `ContainerThroughput`. |
+|**connectionStringSetting** <br> or <br> **connection** |**ConnectionStringSetting** <br> or <br> **Connection**|The name of the app setting containing your Azure Cosmos DB connection string. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `Connection`. The value is the name of an app setting that either contains the connection string or contains a configuration section or prefix which defines the connection. See [Connections](./functions-reference.md#connections). |
|**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
-|**useMultipleWriteLocations**| **UseMultipleWriteLocations**| (Optional) When set to `true` along with `PreferredLocations`, it can leverage [multi-region writes](../cosmos-db/how-to-manage-database-account.md#configure-multiple-write-regions) in the Azure Cosmos DB service. |
+|**useMultipleWriteLocations**| **UseMultipleWriteLocations**| (Optional) When set to `true` along with `PreferredLocations`, it can leverage [multi-region writes](../cosmos-db/how-to-manage-database-account.md#configure-multiple-write-regions) in the Azure Cosmos DB service. <br><br> This property is not available in [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher). |
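
To make the renamed properties concrete, the following is a minimal *function.json* sketch for an output binding using the version 4.x names from the table above; the database, container, and app-setting names are placeholders:

```json
{
  "type": "cosmosDB",
  "direction": "out",
  "name": "outputDocument",
  "databaseName": "ToDoItems",
  "containerName": "Items",
  "createIfNotExists": false,
  "connection": "CosmosDBConnection"
}
```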
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
By default, when you write to the output parameter in your function, a document
## host.json settings
-This section describes the global configuration settings available for this binding in version 2.x. For more information about global configuration settings in version 2.x, see [host.json reference for Azure Functions version 2.x](functions-host-json.md).
+This section describes the global configuration settings available for this binding in Azure Functions version 2.x. For more information about global configuration settings in Azure Functions version 2.x, see [host.json reference for Azure Functions version 2.x](functions-host-json.md).
```json {
This section describes the global configuration settings available for this bind
} ```
-|Property |Default | Description |
-||||
+|Property |Default |Description |
+|-|--||
|connectionMode|Gateway|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`.|
-|Protocol|Https|The connection protocol used by the function when connection to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking)|
-|leasePrefix|n/a|Lease prefix to use across all functions in an app.|
+|Protocol|Https|The connection protocol used by the function when connecting to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). <br><br> This setting is not available in [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher). |
+|leasePrefix|n/a|Lease prefix to use across all functions in an app. <br><br> This setting is not available in [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher).|
## Next steps
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher
description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Previously updated : 02/24/2020 Last updated : 09/01/2021
namespace CosmosDBSamplesV2
} ```
+Apps using Cosmos DB [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) or higher have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+```cs
+using System.Collections.Generic;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
+using Microsoft.Extensions.Logging;
+
+namespace CosmosDBSamplesV2
+{
+ public static class CosmosTrigger
+ {
+ [FunctionName("CosmosTrigger")]
+ public static void Run([CosmosDBTrigger(
+ databaseName: "databaseName",
+ containerName: "containerName",
+ Connection = "CosmosDBConnectionSetting",
+ LeaseContainerName = "leases",
+ CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log)
+ {
+ if (input != null && input.Count > 0)
+ {
+ log.LogInformation("Documents modified " + input.Count);
+ log.LogInformation("First document Id " + input[0].Id);
+ }
+ }
+ }
+}
+```
+ # [C# Script](#tab/csharp-script) The following example shows a Cosmos DB trigger binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function writes log messages when Cosmos DB records are added or modified.
The attribute's constructor takes the database name and collection name. For inf
} ```
-For a complete example, see [Trigger](#example).
+In [extension version 4.x](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher), some settings and properties have been removed or renamed. For detailed information about the changes, see [Trigger - configuration](#configuration). Here's a `CosmosDBTrigger` attribute example in a method signature that refers to a simple `ToDoItem` type:
+
+```cs
+namespace CosmosDBSamplesV2
+{
+ public class ToDoItem
+ {
+ public string Id { get; set; }
+ public string Description { get; set; }
+ }
+}
+```
+
+```csharp
+ [FunctionName("DocumentUpdates")]
+ public static void Run([CosmosDBTrigger("database", "container", Connection = "CosmosDBConnectionSetting")]
+ IReadOnlyList<ToDoItem> documents,
+ ILogger log)
+ {
+ ...
+ }
+```
+
+For a complete example of either extension version, see [Trigger](#example).
# [C# Script](#tab/csharp-script)
The following table explains the binding configuration properties that you set i
|**type** | n/a | Must be set to `cosmosDBTrigger`. | |**direction** | n/a | Must be set to `in`. This parameter is set automatically when you create the trigger in the Azure portal. | |**name** | n/a | The variable name used in function code that represents the list of documents with changes. |
-|**connectionStringSetting**|**ConnectionStringSetting** | The name of an app setting that contains the connection string used to connect to the Azure Cosmos DB account being monitored. |
+|**connectionStringSetting** <br> or <br> **connection**|**ConnectionStringSetting** <br> or <br> **Connection**| The name of an app setting that contains the connection string used to connect to the Azure Cosmos DB account being monitored. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `Connection`. The value is the name of an app setting that either contains the connection string used to connect to the Azure Cosmos DB account being monitored or contains a configuration section or prefix which defines the connection. See [Connections](./functions-reference.md#connections). |
|**databaseName**|**DatabaseName** | The name of the Azure Cosmos DB database with the collection being monitored. |
-|**collectionName** |**CollectionName** | The name of the collection being monitored. |
-|**leaseConnectionStringSetting** | **LeaseConnectionStringSetting** | (Optional) The name of an app setting that contains the connection string to the Azure Cosmos DB account that holds the lease collection. When not set, the `connectionStringSetting` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases collection must have write permissions.|
+|**collectionName** <br> or <br> **containerName** |**CollectionName** <br> or <br> **ContainerName** | The name of the collection being monitored. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `ContainerName`. |
+|**leaseConnectionStringSetting** <br> or <br> **leaseConnection** | **LeaseConnectionStringSetting** <br> or <br> **LeaseConnection** | (Optional) The name of an app setting that contains the connection string to the Azure Cosmos DB account that holds the lease collection. When not set, the `connectionStringSetting` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases collection must have write permissions. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `LeaseConnection`; when not set, the `Connection` value is used. The value is the name of an app setting that either contains the connection string for the Azure Cosmos DB account that holds the lease container or contains a configuration section or prefix that defines the connection. See [Connections](./functions-reference.md#connections).|
|**leaseDatabaseName** |**LeaseDatabaseName** | (Optional) The name of the database that holds the collection used to store leases. When not set, the value of the `databaseName` setting is used. This parameter is automatically set when the binding is created in the portal. |
-|**leaseCollectionName** | **LeaseCollectionName** | (Optional) The name of the collection used to store leases. When not set, the value `leases` is used. |
-|**createLeaseCollectionIfNotExists** | **CreateLeaseCollectionIfNotExists** | (Optional) When set to `true`, the leases collection is automatically created when it doesn't already exist. The default value is `false`. |
-|**leasesCollectionThroughput**| **LeasesCollectionThroughput**| (Optional) Defines the number of Request Units to assign when the leases collection is created. This setting is only used when `createLeaseCollectionIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal.
-|**leaseCollectionPrefix**| **LeaseCollectionPrefix**| (Optional) When set, the value is added as a prefix to the leases created in the Lease collection for this Function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes.
+|**leaseCollectionName** <br> or <br> **leaseContainerName** | **LeaseCollectionName** <br> or <br> **LeaseContainerName** | (Optional) The name of the collection used to store leases. When not set, the value `leases` is used. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `LeaseContainerName`. |
+|**createLeaseCollectionIfNotExists** <br> or <br> **createLeaseContainerIfNotExists** | **CreateLeaseCollectionIfNotExists** <br> or <br> **CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases collection is automatically created when it doesn't already exist. The default value is `false`. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `CreateLeaseContainerIfNotExists`. |
+|**leasesCollectionThroughput** <br> or <br> **leasesContainerThroughput**| **LeasesCollectionThroughput** <br> or <br> **LeasesContainerThroughput**| (Optional) Defines the number of Request Units to assign when the leases collection is created. This setting is only used when `createLeaseCollectionIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `LeasesContainerThroughput`. |
+|**leaseCollectionPrefix** <br> or <br> **leaseContainerPrefix**| **LeaseCollectionPrefix** <br> or <br> **LeaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease collection for this Function. Using a prefix allows two separate Azure Functions to share the same Lease collection by using different prefixes. <br><br> In [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) this property is called `LeaseContainerPrefix`. |
|**feedPollDelay**| **FeedPollDelay**| (Optional) The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds. |**leaseAcquireInterval**| **LeaseAcquireInterval**| (Optional) When set, it defines, in milliseconds, the interval to kick off a task to compute if partitions are distributed evenly among known host instances. Default is 13000 (13 seconds). |**leaseExpirationInterval**| **LeaseExpirationInterval**| (Optional) When set, it defines, in milliseconds, the interval for which the lease is taken on a lease representing a partition. If the lease is not renewed within this interval, it will cause it to expire and ownership of the partition will move to another instance. Default is 60000 (60 seconds). |**leaseRenewInterval**| **LeaseRenewInterval**| (Optional) When set, it defines, in milliseconds, the renew interval for all leases for partitions currently held by an instance. Default is 17000 (17 seconds).
-|**checkpointFrequency**| **CheckpointFrequency**| (Optional) When set, it defines, in milliseconds, the interval between lease checkpoints. Default is always after each Function call.
-|**maxItemsPerInvocation**| **MaxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored collection are performed through stored procedures, [transaction scope](../cosmos-db/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the Change Feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.
+|**checkpointInterval**| **CheckpointInterval**| (Optional) When set, it defines, in milliseconds, the interval between lease checkpoints. Default is always after each Function call. <br><br> This property is not available in [version 4.x of the extension](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher). |
+|**maxItemsPerInvocation**| **MaxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored collection are performed through stored procedures, [transaction scope](../cosmos-db/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch.
|**startFromBeginning**| **StartFromBeginning**| (Optional) This option tells the Trigger to read changes from the beginning of the collection's change history instead of starting at the current time. Reading from the beginning only works the first time the Trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to `true` when there are leases already created has no effect. | |**preferredLocations**| **PreferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
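
As an illustrative sketch (not taken from the article excerpt above), a *function.json* trigger definition using the version 4.x property names from this table might look like the following; the database, container, and connection names are placeholders:

```json
{
  "type": "cosmosDBTrigger",
  "direction": "in",
  "name": "documents",
  "databaseName": "ToDoItems",
  "containerName": "Items",
  "connection": "CosmosDBConnection",
  "leaseContainerName": "leases",
  "createLeaseContainerIfNotExists": true
}
```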
azure-functions Functions Bindings Cosmosdb V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-cosmosdb-v2.md
Title: Azure Cosmos DB bindings for Functions 2.xd and higher
+ Title: Azure Cosmos DB bindings for Functions 2.x and higher
description: Understand how to use Azure Cosmos DB triggers and bindings in Azure Functions. Previously updated : 02/24/2017 Last updated : 09/01/2021
Working with the trigger and bindings requires that you reference the appropriat
[Update your extensions]: ./functions-bindings-register.md [Azure Tools extension]: https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack
+### Cosmos DB extension 4.x and higher
+
+A new version of the Cosmos DB bindings extension is available as a [preview NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB/4.0.0-preview1). This preview introduces the ability to [connect using an identity instead of a secret](./functions-reference.md#configure-an-identity-based-connection). For .NET applications, it also changes the types that you can bind to, replacing the types from the v2 SDK `Microsoft.Azure.DocumentDB` with newer types from the v3 SDK [Microsoft.Azure.Cosmos](../cosmos-db/sql/sql-api-sdk-dotnet-standard.md). Learn more about how these new types are different and how to migrate to them from the [SDK migration guide](../cosmos-db/sql/migrate-dotnet-v3.md), [trigger](./functions-bindings-cosmosdb-v2-trigger.md), [input binding](./functions-bindings-cosmosdb-v2-input.md), and [output binding](./functions-bindings-cosmosdb-v2-output.md) examples.
+
+> [!NOTE]
+> Currently, authentication with an identity instead of a secret using the 4.x preview extension is only available for Elastic Premium plans.
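
For a .NET class library app, opting into the preview extension is a package reference update. A sketch of the command, using the package name and preview version from the NuGet link above, is:

```bash
# Adds the 4.x preview of the Cosmos DB extension to the function project
dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB --version 4.0.0-preview1
```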
+ ### Functions 1.x Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs) NuGet package, version 2.x.
+## Exceptions and return codes
+
+| Binding | Reference |
+|||
+| CosmosDB | [CosmosDB Error Codes](/rest/api/cosmos-db/http-status-codes-for-cosmosdb) |
+
+<a name="host-json"></a>
+
+## host.json settings
+
+This section describes the global configuration settings available for this binding in Azure Functions version 2.x. For more information about global configuration settings in Azure Functions version 2.x, see [host.json reference for Azure Functions version 2.x](functions-host-json.md).
+
+```json
+{
+ "version": "2.0",
+ "extensions": {
+ "cosmosDB": {
+ "connectionMode": "Gateway",
+ "protocol": "Https",
+ "leaseOptions": {
+ "leasePrefix": "prefix1"
+ }
+ }
+ }
+}
+```
+
+|Property |Default |Description |
+|-|--||
+|connectionMode|Gateway|The connection mode used by the function when connecting to the Azure Cosmos DB service. Options are `Direct` and `Gateway`.|
+|Protocol|Https|The connection protocol used by the function when connecting to the Azure Cosmos DB service. Read [here for an explanation of both modes](../cosmos-db/performance-tips.md#networking). <br><br> This setting is not available in [version 4.x of the extension](#cosmos-db-extension-4x-and-higher). |
+|leasePrefix|n/a|Lease prefix to use across all functions in an app. <br><br> This setting is not available in [version 4.x of the extension](#cosmos-db-extension-4x-and-higher).|
+ ## Next steps - [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md)
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 10/12/2017 Last updated : 9/02/2021 # Azure Functions developer guide
For example, the `connection` property for an Azure Blob trigger definition might
Some connections in Azure Functions are configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you are connecting supports identity-based connections.
-Identity-based connections are supported by the following trigger and binding extensions in all plans:
+Identity-based connections are supported by the following trigger and binding extensions:
> [!NOTE] > Identity-based connections are not supported with Durable Functions.
-| Extension name | Extension version |
-|-|-|
-| Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) |
-| Azure Queue | [Version 5.0.0-beta1 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) |
-| Azure Event Hubs | [Version 5.0.0-beta1 or later](./functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher) |
-| Azure Service Bus | [Version 5.0.0-beta2 or later](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher) |
+| Extension name | Extension version | Plans supported |
+|-|-||
+| Azure Blob | [Version 5.0.0-beta1 or later](./functions-bindings-storage-blob.md#storage-extension-5x-and-higher) | All |
+| Azure Queue | [Version 5.0.0-beta1 or later](./functions-bindings-storage-queue.md#storage-extension-5x-and-higher) | All |
+| Azure Event Hubs | [Version 5.0.0-beta1 or later](./functions-bindings-event-hubs.md#event-hubs-extension-5x-and-higher) | All |
+| Azure Service Bus | [Version 5.0.0-beta2 or later](./functions-bindings-service-bus.md#service-bus-extension-5x-and-higher) | All |
+| Azure Cosmos DB | [Version 4.0.0-preview1 or later](./functions-bindings-cosmosdb-v2.md#cosmos-db-extension-4x-and-higher) | Elastic Premium |
The storage connections used by the Functions runtime (`AzureWebJobsStorage`) may also be configured using an identity-based connection. See [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity) below.
The following roles cover the primary permissions needed for each extension in n
| Azure Queues | [Storage Queue Data Reader](../role-based-access-control/built-in-roles.md#storage-queue-data-reader), [Storage Queue Data Message Processor](../role-based-access-control/built-in-roles.md#storage-queue-data-message-processor), [Storage Queue Data Message Sender](../role-based-access-control/built-in-roles.md#storage-queue-data-message-sender), [Storage Queue Data Contributor](../role-based-access-control/built-in-roles.md#storage-queue-data-contributor) | | Event Hubs | [Azure Event Hubs Data Receiver](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-receiver), [Azure Event Hubs Data Sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender), [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) | | Service Bus | [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver), [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender), [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) |
+| Azure Cosmos DB | [Cosmos DB Built-in Data Reader](../cosmos-db/how-to-setup-rbac.md#built-in-role-definitions), [Cosmos DB Built-in Data Contributor](../cosmos-db/how-to-setup-rbac.md#built-in-role-definitions) |
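
As a hedged sketch only (verify the exact command and role-definition IDs against the linked Cosmos DB RBAC article), granting a function app's identity the built-in data contributor role on an account might look like the following; the account name, resource group, principal ID, and role-definition GUID are assumptions:

```azurecli
az cosmosdb sql role assignment create \
  --account-name <cosmos-account-name> \
  --resource-group <resource-group> \
  --scope "/" \
  --principal-id <function-app-identity-object-id> \
  --role-definition-id 00000000-0000-0000-0000-000000000002
```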
#### Connection properties
-An identity-based connection for an Azure service accepts the following properties:
+An identity-based connection for an Azure service accepts the following properties, where `<CONNECTION_NAME_PREFIX>` is the value of your `connection` property in the trigger or binding definition:
| Property | Required for Extensions | Environment variable | Description | ||||| | Service URI | Azure Blob<sup>1</sup>, Azure Queue | `<CONNECTION_NAME_PREFIX>__serviceUri` | The data plane URI of the service to which you are connecting. | | Fully Qualified Namespace | Event Hubs, Service Bus | `<CONNECTION_NAME_PREFIX>__fullyQualifiedNamespace` | The fully qualified Event Hubs and Service Bus namespace. |
+| Account Endpoint | Azure Cosmos DB | `<CONNECTION_NAME_PREFIX>__accountEndpoint` | The Azure Cosmos DB account endpoint URI. |
| Token Credential | (Optional) | `<CONNECTION_NAME_PREFIX>__credential` | Defines how a token should be obtained for the connection. Recommended only when specifying a user-assigned identity, when it should be set to "managedidentity". This is only valid when hosted in the Azure Functions service. | | Client ID | (Optional) | `<CONNECTION_NAME_PREFIX>__clientId` | When `credential` is set to "managedidentity", this property specifies the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. If not specified, the system-assigned identity will be used. This property is used differently in [local development scenarios](#local-development-with-identity-based-connections), when `credential` should not be set. |
Example of `local.settings.json` properties required for identity-based connecti
} ```
+Example of `local.settings.json` properties required for identity-based connection with Azure Cosmos DB:
+
+```json
+{
+ "IsEncrypted": false,
+ "Values": {
+ "<CONNECTION_NAME_PREFIX>__accountEndpoint": "<accountEndpoint>",
+ "<CONNECTION_NAME_PREFIX>__tenantId": "<tenantId>",
+ "<CONNECTION_NAME_PREFIX>__clientId": "<clientId>",
+ "<CONNECTION_NAME_PREFIX>__clientSecret": "<clientSecret>"
+ }
+}
+```
+ #### Connecting to host storage with an identity Azure Functions by default uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This can be configured to leverage an identity as well.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-csp-list.md
cloud: gov Previously updated : 06/11/2021 Last updated : 09/11/2021 + # Azure Government authorized reseller list
-Since the launch of the [Azure Government in the Cloud Solution Provider Program (CSP)](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), work has been done with the Partner Community to bring them the benefits of this channel, enable them to resell Azure Government, and help them grow their business while providing the cloud services their customers need.
+Since the launch of [Azure Government services in the Cloud Solution Provider (CSP) program](https://azure.microsoft.com/blog/announcing-microsoft-azure-government-services-in-the-cloud-solution-provider-program/), we have worked with the partner community to bring them the benefits of this channel, enable them to resell Azure Government, and help them grow their business while providing the cloud services their customers need.
-Below you can find a list of all the authorized Cloud Solution Providers, AOS-G (Agreement for Online Services for Government), and Licensing Solution Providers (LSP) which can transact Azure Government. This list includes all approved Partners as of **June 2021**. Updates to this list will be made as new partners are onboarded.
+Below you can find a list of all the authorized Cloud Solution Providers (CSPs), Agreement for Online Services for Government (AOS-G) partners, and Licensing Solution Providers (LSPs) that can transact Azure Government. Updates to this list will be made as new partners are onboarded.
## Approved direct CSPs
-|Partner Name|
+|Partner name|
|-| |[10th Magnitude](https://www.10thmagnitude.com)| |[12:34 MicroTechnolgies Inc.](https://www.1234micro.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Airnet Group](https://www.airnetgroup.com/)| |[AIS Network](https://www.aisn.net/)| |[Alcala Consulting Inc.](https://www.alcalaconsulting.com/)|
-|Alexan Consulting Enterprise Services, LLC (ACES)|
|[Alliance Enterprises, Inc.](https://www.allianceenterprises.com)| |[Alvarez Technology Group](https://www.alvareztg.com/)| |[Amalgama Technologies Inc](http://amalgamatetech.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Army of Quants](https://www.armyofquants.com/)| |[Ascent Innovations LLC](https://www.ascent365.com/)| |[ASM Research LLC](https://www.asmr.com)|
-|ATLGaming|
+|[ATLGaming](https://www.atlgaming.net/)|
|[Arraya Solutions](https://www.arrayasolutions.com)| |[Atmosera, Inc.](https://www.atmosera.com)| |[Atos IT Solutions and Services](https://atos.net)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[CBTS](https://www.cbts.com/)| |[CDO Technologies Inc.](https://www.cdotech.com/contact/)| |[CDW-G, LLC](https://www.cdwg.com)|
-|CENTRALE LLC|
|[Centurylink](https://www.centurylink.com/public-sector/federal-government.html)| |[cFocus Software Incorporated](https://cfocussoftware.com)| |[CGI Federal, Inc.](https://www.cgi.com/en/us-federal)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Cybercore Solutions LLC](https://cybercoresolutions.com/)| |[Dalecheck Technology Group](https://www.dalechek.com/)| |[Dasher Technologies, Inc.](https://www.dasher.com)|
-|Data:Architect|
|[Data Center Services Inc](https://www.d8acenter.com)| |[Datapipe (RackSpace Company)](https://www.rackspace.com)| |[Dataprise, Inc.](https://www.dataprise.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Diffeo, Inc.](https://diffeo.com)| |[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)| |[DominionTech Inc.](https://www.dominiontech.com)|
-|Domino Systems Inc.|
|[DOT Personable Inc](http://solutions.personable.com/)| |[Doublehorn, LLC](https://doublehorn.com/)| |[DXC Technology Services LLC](https://www.dxc.technology/services)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Dynamics Intelligence Inc.](https://www.dynamicsintelligence.us)| |[DynTek](https://www.dyntek.com)| |[ECS Federal, LLC](https://ecstech.com/)|
-|eFibernet Inc.|
|[eMazzanti Technologies](https://www.emazzanti.net/)| |[Enabling Technologies Corp.](https://www.enablingtechcorp.com/)| |[Enlighten IT Consulting](https://www.eitccorp.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Envistacom](https://www.envistacom.com)| |[Epic Systems Inc](http://epicinfotech.com/)| |[EpochConcepts](https://epochconcepts.com)|
-|[Equilibrium IT Solutions, Inc.](https://eqinc.com/)|
+|[Equilibrium IT Solutions, Inc. (Ntiva)](https://www.ntiva.com/)|
|[Evertec](http://www.evertecinc.com)| |[eWay Corp](https://www.ewaycorp.com)| |[Exbabylon IT Solutions](https://www.exbabylon.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[HumanTouch LLC](https://www.humantouchllc.com/)| |[Hyertek Inc.](https://www.hyertek.com)| |[I10 Inc](http://i10agile.com/)|
-|I2, Inc|
+|[I2, Inc. (IBM)](https://www.ibm.com/security/intelligence-analysis/i2)|
|[i3 Business Solutions, LLC](https://www.i3businesssolutions.com/)| |[i3 LLC](http://i3llc.net/)| |[IBM Corporation](https://www.ibm.com/industries/federal)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Medsphere](https://www.medsphere.com)| |[Menlo Technologies](https://www.menlo-technologies.com)| |[MetroStar Systems Inc.](https://www.metrostarsystems.com)|
-|Mibura Inc.|
+|[Mibura Inc.](https://www.mibura.com/)|
|[Microtechnologies, LLC](https://www.microtech.net/)| |[Miken Technologies](https://www.miken.net)| |[mindSHIFT Technologies, Inc.](https://www.mindshift.com/)| |[MIS Sciences Corp](https://www.mis-sciences.com/)| |[Mission Cyber LLC](https://missioncyber.com/b/)| |[Mobomo, LLC](https://www.mobomo.com)|
-|MSCloud Express, LLC|
|[Nanavati Consulting, Inc.](https://www.nanavaticonsulting.com)| |[Navisite LLC](https://www.navisite.com/)| |[NCI](https://www.nciinc.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Nihilent Inc](https://nihilent.com)| |[Nimbus Logic LLC](https://www.nimbus-logic.com)| |[Norseman, Inc](https://www.norseman.com)|
-|[Northern Sky Technologies, Inc]|
|[Northrop Grumman](https://www.northropgrumman.com)| |[NTS Cloud](http://ntscloud.com/ )| |[NTT America, Inc.](https://www.us.ntt.net)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Om Group, Inc.](http://www.omgroupinc.us/)| |[OneNeck IT Solutions](https://www.oneneck.com)| |[Onyx Point, Inc.](https://www.onyxpoint.com)|
-|Open Analyze Technologies, Inc.|
|[Opsgility](https://www.opsgility.com)| |[OpsPro](https://opspro.com/)| |[Orion Communications, Inc.](https://www.orioncom.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[People Services Inc. DBA CATCH Intelligence](https://catchintelligence.com)| |[Perrygo Consulting Group, LLC](https://perrygo.com)| |[Perspecta](https://perspecta.com/)|
-|[Phacil](https://www.phacil.com/)|
+|[Phacil (By Light)](https://www.bylight.com/phacil/)|
|[Pharicode LLC](https://pharicode.com)| |[Picis Envision](https://www.picis.com/en/)| |[Pinao Consulting LLC](https://www.pcg-msp.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Redhorse Corporation](https://www.redhorsecorp.com)| |[Regan Technologies Corporation](http://www.regantech.com/)| |Remote Support Solutions Corp DBA RemoteWorks|
-|Reperi LLC|
|[Resource Metrix](https://www.rmtrx.com)| |[Revenue Solutions, Inc](https://www.revenuesolutionsinc.com)| |[RMON Networks Inc.](https://rmonnetworks.com/)| |[rmsource, Inc.](https://www.rmsource.com)|
-|[RoboTech Science, Inc.](https://robotechscience.com)|
+|[RoboTech Science, Inc. (Cyberscend)](https://cyberscend.com)|
|[Rollout Systems LLC](http://www.rolloutsys.com/)| |[RV Global Solutions](https://rvglobalsolutions.com/)| |[Saiph Technologies Corporation](http://www.saiphtech.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Smoothlogics](https://www.smoothlogics.com)| |[Socius 1 LLC](http://www.socius1.com)| |[Softchoice Corporation](https://www.softchoice.com)|
-|[Software Services Group (dba Secant Technologies)](https://www.secantcorp.com/)|
+|[Software Services Group, dba Secant Technologies (Aunalytics)](https://www.aunalytics.com)|
|[SoftwareONE Inc.](https://www.softwareone.com/en-us)| |[Solution Systems Inc.](https://www.solsyst.com/)| |[South River Technologies](https://southrivertech.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Strongbridge LLC](https://www.sb-llc.com)| |[Summit 7 Systems, Inc.](https://www.summit7.us/)| |[Sumo Logic](https://www.sumologic.com/)|
-|[SWC Technology Partners](https://www.swc.com)|
-|[Sybatech, Inc](https://www.sybatech.com)|
+|[SWC Technology Partners (BDO USA)](https://www.bdo.com/)|
+|[Sybatech, Inc. (Codepal Toolkit)](https://www.codepaltoolkit.com)|
|[Synergy Technical, LLC](https://www.synergy-technical.com/)| |[Synoptek LLC](https://synoptek.com/)| |[Systems Engineering Inc](https://www.seisystems.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[UDRI - SSG](https://udayton.edu/udri/_resources/docs/ssg_v8.pdf)| |[Unisys Corp / Blue Bell](https://www.unisys.com)| |[United Data Technologies, Inc.](https://udtonline.com)|
-|[Universal EVoIP Transitions]|
|[VALCOM COMPUTER CENTER](https://www.vlcmtech.com/)| |[Vana Solutions LLC](https://vanasolutions.com)| |[Vazata - Horizon Data Center Solutions LLC](https://www.vazata.com/)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[ViON Corp.](https://www.vion.com/)| |[VisioLogix Corporation](https://www.visiologix.com)| |[VVL Systems & Consulting, LLC](https://www.vvlsystems.com/)|
-|Vistronix, LLC|
+|[Vistronix, LLC (ASRC Federal)](https://www.asrcfederal.com/asrc-federal-vistronix-cio-sp3/)|
|[Vology Inc.](https://www.vology.com/)|
-|vSolvIT|
+|[vSolvIT](https://www.vsolvit.com/)|
|[Warren Averett Technology Group](https://warrenaverett.com/warren-averett-technology-group/)| |[Wintellect, LLC](https://www.wintellect.com)| |[Wintellisys, Inc.](https://wintellisys.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[Zones Inc](https://www.zones.com/site/home/https://docsupdatetracker.net/index.html)| |[ZR Systems Group LLC](https://zrsystems.com)|
-## Approved indirect CSP Providers
+## Approved indirect CSPs
-|Partner Name|
+|Partner name|
|-| |[Arrow Enterprise Computing Solutions, Inc.](http://ecs.arrow.com/)| |[Crayon Software Experts LCC](https://www.crayon.com/en-US)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
## Approved LSPs
-|LSP Name|Email|Phone|
+|LSP name|Email|Phone|
|-||--| |CDW Corp.|cdwgsales@cdwg.com|800-808-4239| |Dell Corp.|Get_Azure@Dell.com|888-375-9857|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|SHI, Inc.|msftgov@shi.com|888-764-8888| |Minburn Technology Group|microsoft@minburntech.com |571-699-0705 Opt. 1|
-## Approved AOS-G Partners
+## Approved AOS-G partners
-|Partner Name|
+|Partner name|
|-| |[Accenture Federal Service](https://www.accenture.com/us-en/industries/afs-index)| |[Agile IT, Inc](https://www.agileit.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[General Dynamics Information Technology](https://www.gdit.com)| |[Hypori, Inc.](https://hypori.com/)| |[Jackpine Technologies](https://www.jackpinetech.com)|
-|Jasper Solutions|
+|[Jasper Solutions](https://www.jaspersolutions.com/)|
|[Johnson Technology Systems Inc](https://www.jtsusa.com/)| |[KAMIND IT, Inc.](https://www.kamind.com/)| |[KTL Solutions, Inc.](https://www.ktlsolutions.com)|
Below you can find a list of all the authorized Cloud Solution Providers, AOS-G
|[TechTrend, Inc](https://techtrend.us)| |[VLCM](https://www.vlcmtech.com)| |[VC3](https://www.vc3.com)|
-|Vexcel|
+|[Vexcel](https://www.vexcel.com/)|
-If you would like to learn more about the Cloud Solution Provider Program, you can do so [here](/partner-center/faq-for-us-govt-cloud). If you would like to apply to the program, you can visit [this link](./documentation-government-csp-application.md). If you are interested to deploy to our [DoD regions via CSP](https://blogs.msdn.microsoft.com/azuregov/2017/12/18/announcing-the-availability-of-dod-regions-via-government-csp-program-for-azure-government/) talk to your CSP Provider and they can enable that for you. For any additional questions, reach out to [Azure Government CSP](mailto:azgovcsp@microsoft.com).
+To learn more about the Cloud Solution Provider program, see [Frequently asked questions for Partner Center](/partner-center/faq-for-us-govt-cloud). If you would like to apply for the program, visit [Azure Government CSP application process](./documentation-government-csp-application.md). For any other questions, contact [Azure Government CSP](mailto:azgovcsp@microsoft.com).
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/documentation-government-impact-level-5.md
Previously updated : 08/27/2021 Last updated : 09/11/2021 # Isolation guidelines for Impact Level 5 workloads
For Management and governance services availability in Azure Government, see [Pr
- By default, all data and saved queries are encrypted at rest using Microsoft-managed keys. Configure encryption at rest of your data in Azure Monitor [using customer-managed keys in Azure Key Vault](../azure-monitor/logs/customer-managed-keys.md).
-> [!IMPORTANT]
-> See additional guidance for **[Log Analytics](#log-analytics)**, which is a feature of Azure Monitor.
-
-### [Azure Policy](https://azure.microsoft.com/services/azure-policy/)
-
-Azure Policy supports Impact Level 5 workloads in Azure Government with no extra configuration required.
-
-### [Azure Policy's guest configuration](../governance/policy/concepts/guest-configuration.md)
-
-Azure Policy's guest configuration supports Impact Level 5 workloads in Azure Government with no extra configuration required.
- #### [Log Analytics](../azure-monitor/logs/data-platform-logs.md)
-Log Analytics is intended to be used for monitoring the health and status of services and infrastructure. The monitoring data and logs primarily store [logs and metrics](../azure-monitor/logs/data-security.md#data-retention) that are service generated. When used in this primary capacity, Log Analytics supports Impact Level 5 workloads in Azure Government with no extra configuration required.
+Log Analytics, which is a feature of Azure Monitor, is intended to be used for monitoring the health and status of services and infrastructure. The monitoring data and logs primarily store [logs and metrics](../azure-monitor/logs/data-security.md#data-retention) that are service generated. When used in this primary capacity, Log Analytics supports Impact Level 5 workloads in Azure Government with no extra configuration required.
Log Analytics may also be used to ingest additional customer-provided logs. These logs may include data ingested as part of operating Azure Security Center or Azure Sentinel. If the ingested logs or the queries written against these logs are categorized as IL5 data, then you should configure customer-managed keys (CMK) for your Log Analytics workspaces and Application Insights components. Once configured, any data sent to your workspaces or components is encrypted with your Azure Key Vault key. For more information, see [Azure Monitor customer-managed keys](../azure-monitor/logs/customer-managed-keys.md).
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/javascript.md
cfg: { // Application Insights Configuration
```
-If any of your third-party servers that the client communicates with cannot accept the `Request-Id` and `Request-Context` headers, and you cannot update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludeDomains` configuration property. This property supports wildcards.
+If any of your third-party servers that the client communicates with cannot accept the `Request-Id` and `Request-Context` headers, and you cannot update their configuration, then you'll need to put them into an exclude list via the `correlationHeaderExcludedDomains` configuration property. This property supports wildcards.
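
For example, a minimal npm-based setup that excludes two hypothetical third-party domains from correlation headers might look like this; the domain names and instrumentation key are placeholders:

```javascript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    instrumentationKey: 'YOUR_INSTRUMENTATION_KEY',
    enableCorsCorrelation: true,
    // These domains will not receive the Request-Id and Request-Context headers.
    correlationHeaderExcludedDomains: ['*.thirdparty.example.com', 'legacy-api.example.com']
  }
});
appInsights.loadAppInsights();
```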
The server-side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server-side it is often necessary to extend the server-side list by manually adding `Request-Id` and `Request-Context`.
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/containers/container-insights-update-metrics.md
Perform the following steps to update a specific cluster in your subscription us
1. Run the following command by using the Azure CLI. Edit the values for **subscriptionId**, **resourceGroupName**, and **clusterName** using the values on the **AKS Overview** page for the AKS cluster. To get the value of **clientIdOfSPN**, it is returned when you run the command `az aks show` as shown in the example below. + ```azurecli az login az account set --subscription "<subscriptionName>"
Perform the following steps to update a specific cluster in your subscription us
az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId> --role "Monitoring Metrics Publisher" ``` + To get the value for **clientIdOfSPNOrMsi**, you can run the command `az aks show` as shown in the example below. If the **servicePrincipalProfile** object has a valid *clientid* value, you can use that. Otherwise, if it is set to *msi*, you need to pass in the clientid from `addonProfiles.omsagent.identity.clientId`. + ```azurecli az login az account set --subscription "<subscriptionName>"
Perform the following steps to update a specific cluster in your subscription us
az role assignment create --assignee <clientIdOfSPNOrMsi> --scope <clusterResourceId> --role "Monitoring Metrics Publisher" ``` ++
+>[!NOTE]
+>If you're signed in with your user account, use the `--assignee` parameter to perform the role assignment, as shown in the example. If you're signed in with a service principal (SPN), use the `--assignee-object-id` and `--assignee-principal-type` parameters instead of `--assignee`; a sketch follows.
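
A sketch of the service principal variant, using placeholder values, might look like:

```azurecli
az role assignment create \
  --assignee-object-id <objectIdOfSPNOrMsi> \
  --assignee-principal-type ServicePrincipal \
  --scope <clusterResourceId> \
  --role "Monitoring Metrics Publisher"
```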
+ ## Upgrade all clusters using Azure PowerShell Perform the following steps to update all clusters in your subscription using Azure PowerShell.
azure-monitor Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/data-explorer.md
To access Azure Data Explorer Insights directly from an Azure Data Explorer Clus
These views are also accessible by selecting the resource name of an Azure Data Explorer cluster from within the Azure Monitor insights view.
-Azure Data Explorer Insights combines both logs and metrics to provide a global monitoring solution. The inclusion of logs-based visualizations requires users to [enable diagnostic logging of their Azure Data Explorer cluster and send them to a Log Analytics workspace.](/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs). The diagnostic logs that should be enabled are: **Command**, **Query**, **TableDetails**, and **TableUsageStatistics**.
+> [!NOTE]
+> Azure Data Explorer Insights combines both logs and metrics to provide a global monitoring solution. The inclusion of logs-based visualizations requires users to [enable diagnostic logging of their Azure Data Explorer cluster and send the logs to a Log Analytics workspace](/azure/data-explorer/using-diagnostic-logs?tabs=commands-and-queries#enable-diagnostic-logs). The diagnostic logs that should be enabled are **Command**, **Query**, **SucceededIngestion**, **FailedIngestion**, **IngestionBatching**, **TableDetails**, and **TableUsageStatistics**. Enabling **SucceededIngestion** logs might be costly; only enable them if you need to monitor successful ingestions.
![Screenshot of blue button that displays the text "Enable Logs for Monitoring"](./media/data-explorer/enable-logs.png)
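
As an illustrative sketch (the resource IDs are placeholders and the command should be checked against the linked diagnostic-logs article), enabling the listed log categories with the Azure CLI might look like the following; **SucceededIngestion** is omitted here because of the cost note above:

```azurecli
az monitor diagnostic-settings create \
  --name "adx-to-log-analytics" \
  --resource <clusterResourceId> \
  --workspace <logAnalyticsWorkspaceResourceId> \
  --logs '[
    {"category": "Command", "enabled": true},
    {"category": "Query", "enabled": true},
    {"category": "FailedIngestion", "enabled": true},
    {"category": "IngestionBatching", "enabled": true},
    {"category": "TableDetails", "enabled": true},
    {"category": "TableUsageStatistics", "enabled": true}
  ]'
```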
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/service-limits.md
This article lists limits in different areas of Azure Monitor.
[!INCLUDE [monitoring-limits](../../includes/azure-monitor-limits-autoscale.md)] - ## Data collection rules [!INCLUDE [data-collection-rules](../../includes/azure-monitor-limits-data-collection-rules.md)]+
+## Diagnostic settings
+++ ## Log queries and language [!INCLUDE [monitoring-limits](../../includes/azure-monitor-limits-log-queries.md)]
This article lists limits in different areas of Azure Monitor.
- [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) - [Monitoring usage and estimated costs in Azure Monitor](./usage-estimated-costs.md)-- [Manage usage and costs for Application Insights](app/pricing.md)
+- [Manage usage and costs for Application Insights](app/pricing.md)
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/modules.md
description: Describes how to define and consume a module, and how to use module
Previously updated : 09/10/2021 Last updated : 09/14/2021 # Use Bicep modules
-Bicep enables you to break down a complex solution into modules. A Bicep module is a set of one or more resources to be deployed together. Modules abstract away complex details of the raw resource declaration, which can increase readability. You can reuse these modules, and share them with other people. Bicep modules are transpiled into a single ARM template with [nested templates](../templates/linked-templates.md#nested-template) for deployment.
+Bicep enables you to break down a complex solution into modules. A Bicep module is just a Bicep file that is deployed from another Bicep file. You can encapsulate complex details of the resource declaration in a module, which improves readability of files that use the module. You can reuse these modules, and share them with other people. Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template) for deployment.
+
+This article describes how to define and consume modules.
For a tutorial, see [Deploy Azure resources by using Bicep templates](/learn/modules/deploy-azure-resources-by-using-bicep-templates/). ## Define modules
-Every Bicep file can be consumed as a module. A module only exposes parameters and outputs as contract to other Bicep files. Both parameters and outputs are optional.
+Every Bicep file can be used as a module. A module only exposes parameters and outputs as a contract to other Bicep files. Parameters and outputs are optional.
The following Bicep file can be deployed directly to create a storage account or be used as a module. The next section shows you how to consume modules:
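
The referenced file isn't reproduced in this excerpt; a minimal sketch of what *storageAccount.bicep* could contain, consistent with the parameters and output used by the consuming file below, is:

```bicep
@minLength(3)
@maxLength(11)
param storagePrefix string

param location string = resourceGroup().location

var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'

resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = {
  name: uniqueStorageName
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}

output storageEndpoint object = stg.properties.primaryEndpoints
```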
Use the _module_ keyword to consume a module. The following Bicep file deploys t
param namePrefix string param location string = resourceGroup().location
-module stgModule './storageAccount.bicep' = {
+module stgModule 'storageAccount.bicep' = {
name: 'storageDeploy' params: { storagePrefix: namePrefix
You can deploy a module multiple times by using loops. For more information, see [Module iteration in Bicep](loop-modules.md). A sketch of such a loop follows.
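
For instance, a hedged sketch of deploying the storage module three times with a copy loop (the symbolic and deployment names are illustrative) is:

```bicep
param location string = resourceGroup().location

module stgModules 'storageAccount.bicep' = [for i in range(0, 3): {
  name: 'storageDeploy${i}'
  params: {
    storagePrefix: 'stg${i}'
    location: location
  }
}]
```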
## Configure module scopes
-When declaring a module, you can supply a _scope_ property to set the scope at which to deploy the module:
-
-```bicep
-module stgModule './storageAccount.bicep' = {
- name: 'storageDeploy'
- scope: resourceGroup('someOtherRg') // pass in a scope to a different resourceGroup
- params: {
- storagePrefix: namePrefix
- location: location
- }
-}
-```
-
-The _scope_ property can be omitted when the module's target scope and the parent's target scope are the same. When the scope property isn't provided, the module is deployed at the parent's target scope.
+When declaring a module, you can set a scope for the module that is different than the scope for the containing Bicep file. Use the `scope` property to set the scope for the module. When the scope property isn't provided, the module is deployed at the parent's target scope.
The following Bicep file shows how to create a resource group and deploy a module to the resource group:
module stgModule './storageAccount.bicep' = {
output storageEndpoint object = stgModule.outputs.storageEndpoint ```
-The scope property must be set to a valid scope object. If your Bicep file deploys a resource group, subscription, or management group, you can set the scope for a module to the symbolic name for that resource. This approach is shown in the previous example where a resource group is created and used for a module's scope.
+The next example deploys to existing resource groups.
+
+```bicep
+targetScope = 'subscription'
+
+resource firstRG 'Microsoft.Resources/resourceGroups@2021-04-01' existing = {
+ name: 'demogroup1'
+}
+
+resource secondRG 'Microsoft.Resources/resourceGroups@2021-04-01' existing = {
+ name: 'demogroup2'
+}
+
+module storage1 'storageAccount.bicep' = {
+ name: 'westusdeploy'
+ scope: firstRG
+ params: {
+ storagePrefix: 'stg1'
+ location: 'westus'
+ }
+}
+
+module storage2 'storageAccount.bicep' = {
+ name: 'eastusdeploy'
+ scope: secondRG
+ params: {
+ storagePrefix: 'stg2'
+ location: 'eastus'
+ }
+}
+```
+
+The scope property must be set to a valid scope object. If your Bicep file deploys a resource group, subscription, or management group, you can set the scope for a module to the symbolic name for that resource. Or, you can use the scope functions to get a valid scope.
-Or, you can use the scope functions to get a valid scope. Those functions are:
+Those functions are:
- [resourceGroup](bicep-functions-scope.md#resourcegroup) - [subscription](bicep-functions-scope.md#subscription) - [managementGroup](bicep-functions-scope.md#managementgroup) - [tenant](bicep-functions-scope.md#tenant)
+The following example uses the `managementGroup` function to set the scope.
+
+```bicep
+param managementGroupName string
+
+module mgDeploy 'module.bicep' = {
+ name: 'deployToMG'
+ scope: managementGroup(managementGroupName)
+}
+```
+ ## Next steps -- To go through a tutorial, see [Deploy Azure resources by using Bicep templates](/learn/modules/deploy-azure-resources-by-using-bicep-templates/).
+- To pass a sensitive value to a module, use the [getSecret](bicep-functions-resource.md#getsecret) function.
- You can deploy a module multiple times by using loops. For more information, see [Module iteration in Bicep](loop-modules.md).
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 08/05/2021 Last updated : 09/14/2021
The resources providers that are marked with **- registered** are registered by
| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | | Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) | | Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) |
-| Microsoft.HealthcareApis | [Azure API for FHIR](../../healthcare-apis/fhir/index.yml) |
+| Microsoft.HealthcareApis (Azure API for FHIR) | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) |
+| Microsoft.HealthcareApis (Healthcare APIs) | [Healthcare APIs](../../healthcare-apis/index.yml) |
| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) | | Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | | Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) |
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-management-group.md
Title: Deploy resources to management group description: Describes how to deploy resources at the management group scope in an Azure Resource Manager template. Previously updated : 03/18/2021 Last updated : 09/14/2021
For templates, use:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ ...
} ```
The schema for a parameter file is the same for all deployment scopes. For param
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ ...
} ```
The next example creates a new management group in the management group specifie
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "mgName": {
- "type": "string",
- "defaultValue": "[concat('mg-', uniqueString(newGuid()))]"
- },
- "parentMG": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "mgName": {
+ "type": "string",
+ "defaultValue": "[concat('mg-', uniqueString(newGuid()))]"
},
- "resources": [
- {
- "name": "[parameters('mgName')]",
- "type": "Microsoft.Management/managementGroups",
- "apiVersion": "2020-05-01",
- "scope": "/",
- "location": "eastus",
- "properties": {
- "details": {
- "parent": {
- "id": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('parentMG'))]"
- }
- }
- }
- }
- ],
- "outputs": {
- "output": {
- "type": "string",
- "value": "[parameters('mgName')]"
+ "parentMG": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "name": "[parameters('mgName')]",
+ "type": "Microsoft.Management/managementGroups",
+ "apiVersion": "2021-04-01",
+ "scope": "/",
+ "location": "eastus",
+ "properties": {
+ "details": {
+ "parent": {
+ "id": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('parentMG'))]"
+ }
}
+ }
+ }
+ ],
+ "outputs": {
+ "output": {
+ "type": "string",
+ "value": "[parameters('mgName')]"
}
+ }
} ```
The following example shows how to [define](../../governance/policy/concepts/def
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "targetMG": {
- "type": "string",
- "metadata": {
- "description": "Target Management Group"
- }
- },
- "allowedLocations": {
- "type": "array",
- "defaultValue": [
- "australiaeast",
- "australiasoutheast",
- "australiacentral"
- ],
- "metadata": {
- "description": "An array of the allowed locations, all other locations will be denied by the created policy."
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "targetMG": {
+ "type": "string",
+ "metadata": {
+ "description": "Target Management Group"
+ }
},
- "variables": {
- "mgScope": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('targetMG'))]",
- "policyDefinition": "LocationRestriction"
- },
- "resources": [
- {
- "type": "Microsoft.Authorization/policyDefinitions",
- "name": "[variables('policyDefinition')]",
- "apiVersion": "2019-09-01",
- "properties": {
- "policyType": "Custom",
- "mode": "All",
- "parameters": {
- },
- "policyRule": {
- "if": {
- "not": {
- "field": "location",
- "in": "[parameters('allowedLocations')]"
- }
- },
- "then": {
- "effect": "deny"
- }
- }
- }
+ "allowedLocations": {
+ "type": "array",
+ "defaultValue": [
+ "australiaeast",
+ "australiasoutheast",
+ "australiacentral"
+ ],
+ "metadata": {
+ "description": "An array of the allowed locations, all other locations will be denied by the created policy."
+ }
+ }
+ },
+ "variables": {
+ "mgScope": "[tenantResourceId('Microsoft.Management/managementGroups', parameters('targetMG'))]",
+ "policyDefinition": "LocationRestriction"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/policyDefinitions",
+ "name": "[variables('policyDefinition')]",
+ "apiVersion": "2020-09-01",
+ "properties": {
+ "policyType": "Custom",
+ "mode": "All",
+ "parameters": {
},
- {
- "type": "Microsoft.Authorization/policyAssignments",
- "name": "location-lock",
- "apiVersion": "2019-09-01",
- "dependsOn": [
- "[variables('policyDefinition')]"
- ],
- "properties": {
- "scope": "[variables('mgScope')]",
- "policyDefinitionId": "[extensionResourceId(variables('mgScope'), 'Microsoft.Authorization/policyDefinitions', variables('policyDefinition'))]"
+ "policyRule": {
+ "if": {
+ "not": {
+ "field": "location",
+ "in": "[parameters('allowedLocations')]"
}
+ },
+ "then": {
+ "effect": "deny"
+ }
}
- ]
+ }
+ },
+ {
+ "type": "Microsoft.Authorization/policyAssignments",
+ "name": "location-lock",
+ "apiVersion": "2020-09-01",
+ "dependsOn": [
+ "[variables('policyDefinition')]"
+ ],
+ "properties": {
+ "scope": "[variables('mgScope')]",
+ "policyDefinitionId": "[extensionResourceId(variables('mgScope'), 'Microsoft.Authorization/policyDefinitions', variables('policyDefinition'))]"
+ }
+ }
+ ]
} ```
From a management group level deployment, you can target a subscription within t
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "nestedsubId": {
- "type": "string"
- },
- "nestedRG": {
- "type": "string"
- },
- "storageAccountName": {
- "type": "string"
- },
- "nestedLocation": {
- "type": "string"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/managementGroupDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "nestedsubId": {
+ "type": "string"
+ },
+ "nestedRG": {
+ "type": "string"
},
- "resources": [
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-10-01",
- "name": "nestedSub",
- "location": "[parameters('nestedLocation')]",
- "subscriptionId": "[parameters('nestedSubId')]",
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- },
- "variables": {
- },
- "resources": [
- {
- "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-10-01",
- "name": "[parameters('nestedRG')]",
- "location": "[parameters('nestedLocation')]"
- }
- ]
- }
+ "storageAccountName": {
+ "type": "string"
+ },
+ "nestedLocation": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedSub",
+ "location": "[parameters('nestedLocation')]",
+ "subscriptionId": "[parameters('nestedSubId')]",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ },
+ "variables": {
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/resourceGroups",
+ "apiVersion": "2021-04-01",
+ "name": "[parameters('nestedRG')]",
+ "location": "[parameters('nestedLocation')]"
}
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-10-01",
- "name": "nestedRG",
- "subscriptionId": "[parameters('nestedSubId')]",
- "resourceGroup": "[parameters('nestedRG')]",
- "dependsOn": [
- "nestedSub"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-04-01",
- "name": "[parameters('storageAccountName')]",
- "location": "[parameters('nestedLocation')]",
- "kind": "StorageV2",
- "sku": {
- "name": "Standard_LRS"
- }
- }
- ]
- }
+ ]
+ }
+ }
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "nestedRG",
+ "subscriptionId": "[parameters('nestedSubId')]",
+ "resourceGroup": "[parameters('nestedRG')]",
+ "dependsOn": [
+ "nestedSub"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('nestedLocation')]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "Standard_LRS"
+ }
}
+ ]
}
- ]
+ }
+ }
+ ]
} ``` ## Next steps
-* To learn about assigning roles, see [Add Azure role assignments using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
+* To learn about assigning roles, see [Assign Azure roles using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
* For an example of deploying workspace settings for Azure Security Center, see [deployASCwithWorkspaceSettings.json](https://github.com/krnese/AzureDeploy/blob/master/ARM/deployments/deployASCwithWorkspaceSettings.json). * You can also deploy templates at [subscription level](deploy-to-subscription.md) and [tenant level](deploy-to-tenant.md).
azure-resource-manager Deploy To Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-resource-group.md
Title: Deploy resources to resource groups description: Describes how to deploy resources in an Azure Resource Manager template. It shows how to target more than one resource group. Previously updated : 01/13/2021 Last updated : 09/14/2021
For templates, use the following schema:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ ...
} ```
For parameter files, use:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ ...
} ```
From a resource group deployment, you can switch to the level of a subscription
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "storagePrefix": {
- "type": "string",
- "maxLength": 11
- },
- "newResourceGroupName": {
- "type": "string"
- },
- "nestedSubscriptionID": {
- "type": "string"
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "storagePrefix": {
+ "type": "string",
+ "maxLength": 11
},
- "variables": {
- "storageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
+ "newResourceGroupName": {
+ "type": "string"
},
- "resources": [
- {
- "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
- "name": "[variables('storageName')]",
- "location": "[parameters('location')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {
- }
- },
- {
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-06-01",
- "name": "demoSubDeployment",
- "location": "westus",
- "subscriptionId": "[parameters('nestedSubscriptionID')]",
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {},
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-10-01",
- "name": "[parameters('newResourceGroupName')]",
- "location": "[parameters('location')]",
- "properties": {}
- }
- ],
- "outputs": {}
- }
+ "nestedSubscriptionID": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "variables": {
+ "storageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-04-01",
+ "name": "[variables('storageName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "properties": {
+ }
+ },
+ {
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "demoSubDeployment",
+ "location": "westus",
+ "subscriptionId": "[parameters('nestedSubscriptionID')]",
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {},
+ "variables": {},
+ "resources": [
+ {
+ "type": "Microsoft.Resources/resourceGroups",
+ "apiVersion": "2021-04-01",
+ "name": "[parameters('newResourceGroupName')]",
+ "location": "[parameters('location')]",
+ "properties": {}
}
+ ],
+ "outputs": {}
}
- ]
+ }
+ }
+ ]
} ```
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-subscription.md
Title: Deploy resources to subscription description: Describes how to create a resource group in an Azure Resource Manager template. It also shows how to deploy resources at the Azure subscription scope. Previously updated : 01/13/2021 Last updated : 09/14/2021
For templates, use:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ ...
} ```
The schema for a parameter file is the same for all deployment scopes. For param
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ ...
} ```
For Azure CLI, use [az deployment sub create](/cli/azure/deployment/sub#az_deplo
az deployment sub create \ --name demoSubDeployment \ --location centralus \
- --template-uri "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/emptyRG.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/emptyrg.json" \
--parameters rgName=demoResourceGroup rgLocation=centralus ```
For the PowerShell deployment command, use [New-AzDeployment](/powershell/module
New-AzSubscriptionDeployment ` -Name demoSubDeployment ` -Location centralus `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/emptyRG.json" `
+ -TemplateUri "https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/azure-resource-manager/emptyrg.json" `
-rgName demoResourceGroup ` -rgLocation centralus ```
The following template creates an empty resource group.
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-10-01",
+ "apiVersion": "2021-04-01",
"name": "[parameters('rgName')]", "location": "[parameters('rgLocation')]", "properties": {}
Use the [copy element](copy-resources.md) with resource groups to create more th
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-10-01",
+ "apiVersion": "2021-04-01",
"location": "[parameters('rgLocation')]", "name": "[concat(parameters('rgNamePrefix'), copyIndex())]", "copy": {
The following example creates a resource group, and deploys a storage account to
"resources": [ { "type": "Microsoft.Resources/resourceGroups",
- "apiVersion": "2020-10-01",
+ "apiVersion": "2021-04-01",
"name": "[parameters('rgName')]", "location": "[parameters('rgLocation')]", "properties": {} }, { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2020-10-01",
+ "apiVersion": "2021-04-01",
"name": "storageDeployment", "resourceGroup": "[parameters('rgName')]", "dependsOn": [
The following example creates a resource group, and deploys a storage account to
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
+ "apiVersion": "2021-04-01",
"name": "[variables('storageName')]", "location": "[parameters('rgLocation')]", "sku": {
The following example assigns an existing policy definition to the subscription.
"resources": [ { "type": "Microsoft.Authorization/policyAssignments",
- "apiVersion": "2018-03-01",
+ "apiVersion": "2020-09-01",
"name": "[parameters('policyName')]", "properties": { "scope": "[subscription().id]",
You can [define](../../governance/policy/concepts/definition-structure.md) and a
"resources": [ { "type": "Microsoft.Authorization/policyDefinitions",
- "apiVersion": "2018-05-01",
+ "apiVersion": "2020-09-01",
"name": "locationpolicy", "properties": { "policyType": "Custom",
You can [define](../../governance/policy/concepts/definition-structure.md) and a
}, { "type": "Microsoft.Authorization/policyAssignments",
- "apiVersion": "2018-05-01",
+ "apiVersion": "2020-09-01",
"name": "location-lock", "dependsOn": [ "locationpolicy"
New-AzSubscriptionDeployment `
## Access control
-To learn about assigning roles, see [Add Azure role assignments using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
+To learn about assigning roles, see [Assign Azure roles using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
The following example creates a resource group, applies a lock to it, and assigns a role to a principal.
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-tenant.md
Title: Deploy resources to tenant description: Describes how to deploy resources at the tenant scope in an Azure Resource Manager template. Previously updated : 04/27/2021 Last updated : 09/14/2021
For templates, use:
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-08-01/tenantDeploymentTemplate.json#",
+ ...
} ```
The schema for a parameter file is the same for all deployment scopes. For param
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
- ...
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ ...
} ```
The following template assigns a role at the tenant scope.
## Next steps
-* To learn about assigning roles, see [Add Azure role assignments using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
+* To learn about assigning roles, see [Assign Azure roles using Azure Resource Manager templates](../../role-based-access-control/role-assignments-template.md).
* You can also deploy templates at [subscription level](deploy-to-subscription.md) or [management group level](deploy-to-management-group.md).
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The preceding example returns an object in the following format:
} ```
-This example specifies a resource group property. Only the resource group's name is shown in output.
-- ## resourceId `resourceId([subscriptionId], [resourceGroupName], resourceType, resourceName1, [resourceName2], ...)`
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-aad-configure.md
This authentication method allows middle-tier services to obtain [JSON Web Token
Sample connection string: ```csharp
-string ConnectionString =@"Data Source=n9lxnyuzhv.database.windows.net; Initial Catalog=testdb;"
+string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Initial Catalog=testdb;";
SqlConnection conn = new SqlConnection(ConnectionString);
-conn.AccessToken = "Your JWT token"
+conn.AccessToken = "Your JWT token";
conn.Open(); ```
Guidance on troubleshooting issues with Azure AD authentication can be found in
[11]: ./media/authentication-aad-configure/active-directory-integrated.png [12]: ./media/authentication-aad-configure/12connect-using-pw-auth2.png
-[13]: ./media/authentication-aad-configure/13connect-to-db2.png
+[13]: ./media/authentication-aad-configure/13connect-to-db2.png
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 08/30/2021 Last updated : 09/14/2021 # Use auto-failover groups to enable transparent and coordinated failover of multiple databases
To illustrate the change sequence, we will assume server A is the primary server
## Best practices for SQL Managed Instance
-The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All databases in the instance will be replicated to the secondary instance.
+The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All user databases in the instance will be replicated to the secondary instance. System databases like _master_ and _msdb_ will not be replicated.
The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed instance and auto-failover group.
Let's assume instance A is the primary instance, instance B is the existing seco
> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group or server alias with the same name, which will prevent you from using it again. To minimize the risk, don't use generic failover group names. ### Enable scenarios dependent on objects from the system databases
-System databases are not replicated to the secondary instance in a failover group. To enable scenarios that depend on objects from the system databases, on the secondary instance, make sure to create the same objects on the secondary.
+System databases are **not** replicated to the secondary instance in a failover group. To enable scenarios that depend on objects from the system databases, make sure to create the same objects on the secondary instance and keep them synchronized with the primary instance.
For example, if you plan to use the same logins on the secondary instance, make sure to create them with the identical SID. ```SQL -- Code to create login on the secondary instance CREATE LOGIN foo WITH PASSWORD = '<enterStrongPasswordHere>', SID = <login_sid>; ``` -
+### Synchronize instance properties and retention policies between primary and secondary instance
+Instances in a failover group remain separate Azure resources, and no changes made to the configuration of the primary instance will be automatically replicated to the secondary instance. Make sure to perform all relevant changes both on the primary _and_ the secondary instance. For example, if you change the backup storage redundancy or the long-term backup retention policy on the primary instance, make sure to change it on the secondary instance as well.
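For instance, a minimal Azure CLI sketch for keeping the long-term backup retention policy aligned on both instances might look like the following; the resource group, instance, and database names and the retention value are placeholders.

```azurecli
# Hypothetical names: set the policy on the database in the primary instance ...
az sql midb ltr-policy set \
    --resource-group rg-primary \
    --managed-instance sqlmi-primary \
    --name mydb \
    --weekly-retention "P4W"

# ... and apply the same policy to the geo-replicated database in the secondary instance
az sql midb ltr-policy set \
    --resource-group rg-secondary \
    --managed-instance sqlmi-secondary \
    --name mydb \
    --weekly-retention "P4W"
```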
## Failover groups and network security
azure-sql Restore Sample Database Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/restore-sample-database-quickstart.md
In SQL Server Management Studio, follow these steps to restore the Wide World Im
7. When the restore completes, view the database in Object Explorer. You can verify that database restore is completed using the [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) view. > [!NOTE]
-> A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Database will keep trying to restore database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
+> A database restore operation is asynchronous and retryable. You might get an error in SQL Server Management Studio if the connection breaks or a time-out expires. Azure SQL Managed Instance will keep trying to restore the database in the background, and you can track the progress of the restore using the [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql) and [sys.dm_operation_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-operation-status-azure-sql-database) views.
> In some phases of the restore process, you will see a unique identifier instead of the actual database name in the system views. Learn about `RESTORE` statement behavior differences [here](./transact-sql-tsql-differences-sql-server.md#restore-statement). ## Next steps
azure-video-analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/overview.md
Title: What is Azure Video Analyzer description: This topic provides an overview of Azure Video Analyzer- Last updated 06/01/2021
azure-video-analyzer Use Continuous Video Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-continuous-video-recording.md
In this tutorial, you will:
1. Clean up resources. ## Set up your development environment
-
++
+### Get the sample code
+
+1. Clone the [AVA C# samples repository](https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp).
+1. Start Visual Studio Code, and open the folder where the repo has been downloaded.
+1. In Visual Studio Code, browse to the src/cloud-to-device-console-app folder and create a file named **appsettings.json**. This file contains the settings needed to run the program.
+1. Browse to the file share in the storage account created in the setup step above, and locate the **appsettings.json** file under the "deployment-output" file share. Click on the file, and then hit the "Download" button. The contents should open in a new browser tab, which should look like:
+
+ ```
+ {
+ "IoThubConnectionString" : "HostName=xxx.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XXX",
+ "deviceId" : "avasample-iot-edge-device",
+ "moduleId" : "avaedge"
+ }
+ ```
+
+ The IoT Hub connection string lets you use Visual Studio Code to send commands to the edge modules via Azure IoT Hub. Copy the above JSON into the **src/cloud-to-device-console-app/appsettings.json** file.
+
+### Connect to the IoT Hub
+
+1. In Visual Studio Code, set the IoT Hub connection string by selecting the **More actions** icon next to the **AZURE IOT HUB** pane in the lower-left corner. Copy the string from the src/cloud-to-device-console-app/appsettings.json file.
+
+ <!-- commenting out the image for now ![Set IoT Hub connection string]()./media/quickstarts/set-iotconnection-string.png-->
+ [!INCLUDE [provide-builtin-endpoint](./includes/common-includes/provide-builtin-endpoint.md)]
+1. In about 30 seconds, refresh Azure IoT Hub in the lower-left section. You should see the edge device `avasample-iot-edge-device`, which should have the following modules deployed:
+ - Edge Hub (module name **edgeHub**)
+ - Edge Agent (module name **edgeAgent**)
+ - Video Analyzer (module name **avaedge**)
+ - RTSP simulator (module name **rtspsim**)
+
+### Prepare to monitor the modules
+
+When you run this quickstart or tutorial, events will be sent to the IoT Hub. To see these events, follow these steps:
+
+1. Open the Explorer pane in Visual Studio Code, and look for **Azure IoT Hub** in the lower-left corner.
+1. Expand the **Devices** node.
+1. Right-click on `avasample-iot-edge-device`, and select **Start Monitoring Built-in Event Endpoint**.
+
+ [!INCLUDE [provide-builtin-endpoint](./includes/common-includes/provide-builtin-endpoint.md)]
## Examine the sample files
In Visual Studio Code, browse to the src/cloud-to-device-console-app folder. Her
`"topologyName" : "CVRToVideoSink"` 1. Open the [pipeline topology](https://raw.githubusercontent.com/Azure/video-analyzer/main/pipelines/live/topologies/cvr-video-sink/topology.json) in a browser, and look at videoName - it is hard-coded to `sample-cvr-video`. This is acceptable for a tutorial. In production, you would take care to ensure that each unique RTSP camera is recorded to a video resource with a unique name. 1. Start a debugging session by selecting F5. You'll see some messages printed in the **TERMINAL** window.
-1. The operations.json file starts off with calls to `pipelineTopologyList` and `livePipelineList`. If you've cleaned up resources after previous quickstarts or tutorials, this action returns empty lists and then pauses for you to select **Enter**, as shown:
+1. The operations.json file starts off with calls to `pipelineTopologyList` and `livePipelineList`. If you've cleaned up resources after previous quickstarts or tutorials, this action returns empty lists as shown:
``` --
In Visual Studio Code, browse to the src/cloud-to-device-console-app folder. Her
"value": [] } --
- Executing operation WaitForInput
- Press Enter to continue
+ ```
-1. After you select **Enter** in the **TERMINAL** window, the next set of direct method calls is made:
+1. After that, the next set of direct method calls is made:
* A call to `pipelineTopologySet` by using the previous `topologyUrl` * A call to `livePipelineSet` by using the following body
In Visual Studio Code, browse to the src/cloud-to-device-console-app folder. Her
} } ```
- * A call to `livePipelineActivate` to start the live pipeline and to start the flow of video
- * A second call to `livePipelineList` to show that the live pipeline is in the running state
+ * A call to `livePipelineActivate` to start the live pipeline and the flow of video; the program then pauses for you to select **Enter** in the **TERMINAL** window
1. The output in the **TERMINAL** window pauses now at a **Press Enter to continue** prompt. Do not select **Enter** at this time. Scroll up to see the JSON response payloads for the direct methods you invoked.
-1. If you now switch over to the **OUTPUT** window in Visual Studio Code, you'll see messages being sent to IoT Hub by the Video Analyzer edge module.
--
- These messages are discussed in the following section.
+1. If you now switch over to the **OUTPUT** window in Visual Studio Code, you'll see messages being sent to IoT Hub by the Video Analyzer edge module. These messages are discussed in the following section.
1. The live pipeline continues to run and record the video. The RTSP simulator keeps looping the source video. To stop recording, go back to the **TERMINAL** window and select **Enter**. The next series of calls are made to clean up resources by using: * A call to `livePipelineDeactivate` to deactivate the live pipeline. * A call to `livePipelineDelete` to delete the live pipeline.
+ * A second call to `livePipelineList` to show that the live pipeline has been deleted.
* A call to `pipelineTopologyDelete` to delete the topology. * A final call to `pipelineTopologyList` to show that the list is now empty.
-## Interpret the results
+## Interpret the results
When you run the live pipeline, the Video Analyzer edge module sends certain diagnostic and operational events to the IoT Edge hub. These events are the messages you see in the **OUTPUT** window of Visual Studio Code. They contain a `body` section and an `applicationProperties` section. To understand what these sections represent, see [Create and read IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md).
azure-video-analyzer Use Line Crossing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/use-line-crossing.md
In this message, notice these details:
* The `direction` contains the direction for this event. > [!NOTE]
-> If you deployed Azure resources using the one-click deployment for this tutorial, a Standard DS1 Virtual Machine is created. However, to get accurate results from resource-intensive AI models like YOLO, you may have to increase the VM size. [Resize the VM](../../virtual-machines/windows/resize-vm.md) to increase number of vcpus and memory based on your requirement. Then, reactivate the live pipeline to view inferences.
+> If you deployed Azure resources using the one-click deployment for this tutorial, a Standard DS1 Virtual Machine is created. However, to get accurate results from resource-intensive AI models like YOLO, you may have to increase the VM size. [Resize the VM](../../virtual-machines/resize-vm.md) to increase number of vcpus and memory based on your requirement. Then, reactivate the live pipeline to view inferences.
## Customize for your own environment
azure-video-analyzer Animated Characters Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/animated-characters-recognition.md
Title: Animated character detection with Azure Video Analyzer for Media (formerly Video Indexer)- description: This topic demonstrates how to use animated character detection with Azure Video Analyzer for Media (formerly Video Indexer).-----+ Last updated 11/19/2019
azure-video-analyzer Audio Effects Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/audio-effects-detection.md
Audio Events Detection can be used in many domains. Two examples are:
|Indexing type |Standard indexing| Advanced indexing| ||||
-|**Preset Name** |**"Audio OnlyΓÇ¥** <br/>**ΓÇ£Video + AudioΓÇ¥** |**ΓÇ£Advance AudioΓÇ¥**<br/> **ΓÇ£Advance Video + AudioΓÇ¥**|
+|**Preset Name** |**"Audio Only"** <br/>**"Video + Audio"** |**"Advance Audio"**<br/> **"Advance Video + Audio"**|
|**Appear in insights pane**|| V| |Crowd Reaction |V| V| | Silence| V| V|
audioEffects: [{
start: "0:00:47.9", end: "0:00:52.5" },
- {
+ {
confidence: 0.7314, adjustedStart: "0:04:57.67", adjustedEnd: "0:05:01.57",
audioEffects: [{
## How to index Audio Effects
-In order to set the index process to include the detection of Audio Effects, the user should chose one of the Advanced presets under ΓÇ£Video + audio indexingΓÇ¥ menu as can be seen below.
+In order to set the index process to include the detection of Audio Effects, the user should choose one of the Advanced presets under the "Video + audio indexing" menu, as can be seen below.
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/audio-effects-detection/index-audio-effect.png" alt-text="Index Audio Effects image":::
When Audio Effects are retrieved in the closed caption files, they will be retri
||| |SRT |00:00:00,000 00:00:03,671<br/>[Gunshot]| |VTT |00:00:00.000 00:00:03.671<br/>[Gunshot]|
-|TTML|Confidence: 0.9047 <br/> <p begin="00:00:00.000" end="00:00:03.671">[Gunshot]</p>|
+|TTML|Confidence: 0.9047 <br/> `<p begin="00:00:00.000" end="00:00:03.671">[Gunshot]</p>`|
|TXT |[Gunshot]| |CSV |0.9047,00:00:00.000,00:00:03.671, [Gunshot]|
azure-video-analyzer Compare Video Indexer With Media Services Presets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md
Title: Comparison of Azure Video Analyzer for Media (formerly Video Indexer) and Azure Media Services v3 presets description: This article compares Azure Video Analyzer for Media (formerly Video Indexer) capabilities and Azure Media Services v3 presets.-------+ Last updated 02/24/2020
azure-video-analyzer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/concepts-overview.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) concepts - Azure description: This article gives a brief overview of Azure Video Analyzer for Media (formerly Video Indexer) terminology and concepts.-----+ Last updated 01/19/2021
azure-video-analyzer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/connect-to-azure.md
Title: Create a Azure Video Analyzer for Media (formerly Video Indexer) account connected to Azure- description: Learn how to create a Azure Video Analyzer for Media (formerly Video Indexer) account connected to Azure.-----+ Last updated 01/14/2021
azure-video-analyzer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/considerations-when-use-at-scale.md
Title: Things to consider when using Azure Video Analyzer for Media (formerly Video Indexer) at scale - Azure- description: This topic explains what things to consider when using Azure Video Analyzer for Media (formerly Video Indexer) at scale.---- - Last updated 11/13/2020
Therefore, we recommend you to verify that you get the right results for your us
## Next steps
-[Examine the Azure Video Analyzer for Media output produced by API](video-indexer-output-json-v2.md)
+[Examine the Azure Video Analyzer for Media output produced by API](video-indexer-output-json-v2.md)
azure-video-analyzer Invite Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/invite-users.md
Title: Invite users to Azure Video Analyzer for Media (former Video Analyzer for Media) - Azure - description: This article shows how to invite users to Azure Video Analyzer for Media (former Video Analyzer for Media).--- - Last updated 02/03/2021- # Quickstart: Invite users to Video Analyzer for Media
azure-video-analyzer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/language-identification-model.md
Title: Use Azure Video Analyzer for Media (formerly Video Indexer) to auto identify spoken languages - Azure- description: This article describes how the Azure Video Analyzer for Media (formerly Video Indexer) language identification model is used to automatically identifying the spoken language in a video.------+ Last updated 04/12/2020
azure-video-analyzer Live Stream Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/live-stream-analysis.md
Title: Live stream analysis using Azure Video Analyzer for Media (formerly Video Indexer)- description: This article shows how to perform a live stream analysis using Azure Video Analyzer for Media (formerly Video Indexer).-----+ Last updated 11/13/2019- # Live stream analysis with Video Analyzer for Media
azure-video-analyzer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/logic-apps-connector-tutorial.md
Title: The Azure Video Analyzer for Media (formerly Video Indexer) connectors with Logic App and Power Automate tutorial. description: This tutorial shows how to unlock new experiences and monetization opportunities Azure Video Analyzer for Media (formerly Video Indexer) connectors with Logic App and Power Automate.-- -- Last updated 09/21/2020
azure-video-analyzer Manage Account Connected To Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/manage-account-connected-to-azure.md
Title: Manage a Azure Video Analyzer for Media (formerly Video Indexer) account- description: Learn how to manage a Azure Video Analyzer for Media (formerly Video Indexer) account connected to Azure.-----+ Last updated 01/14/2021
azure-video-analyzer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/release-notes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) release notes | Microsoft Docs description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Analyzer for Media (formerly Video Indexer).---- - Last updated 08/01/2021
azure-video-analyzer Scenes Shots Keyframes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/scenes-shots-keyframes.md
Title: Azure Video Analyzer for Media (formerly Video Indexer) scenes, shots, and keyframes - description: This topic gives an overview of the Azure Video Analyzer for Media (formerly Video Indexer) scenes, shots, and keyframes.-----+ Last updated 07/05/2019
azure-video-analyzer Upload Index Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/upload-index-videos.md
Title: Upload and index videos with Azure Video Analyzer for Media (formerly Video Indexer)- description: This topic demonstrates how to use APIs to upload and index your videos with Azure Video Analyzer for Media (formerly Video Indexer).--- - Last updated 05/12/2021- # Upload and index your videos
azure-video-analyzer Video Indexer Embed Widgets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets.md
Title: Embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps- description: Learn how to embed Azure Video Analyzer for Media (formerly Video Indexer) widgets in your apps.-----+ Last updated 01/25/2021
azure-video-analyzer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-get-started.md
Title: Sign up for Azure Video Analyzer for Media (formerly Video Indexer) and upload your first video - Azure- description: Learn how to sign up and upload your first video using the Azure Video Analyzer for Media (formerly Video Indexer) portal.--- Last updated 01/25/2021
azure-video-analyzer Video Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-overview.md
Title: What is Azure Video Analyzer for Media (formerly Video Indexer)? description: This article gives an overview of the Azure Video Analyzer for Media (formerly Video Indexer) service.--- - Last updated 07/15/2021
azure-video-analyzer Video Indexer Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-search.md
Title: Search for exact moments in videos with Azure Video Analyzer for Media (formerly Video Indexer)- description: Learn how to search for exact moments in videos using Azure Video Analyzer for Media (formerly Video Indexer).-----+ Last updated 11/23/2019
azure-video-analyzer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-use-apis.md
Title: Use the Azure Video Analyzer for Media (formerly Video Indexer) API- description: This article describes how to get started with Azure Video Analyzer for Media (formerly Video Indexer) API.----- Last updated 01/07/2021-+
backup Backup Azure Vms Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-vms-troubleshoot.md
This will ensure the snapshots are taken through the host instead of the guest. Retry th
**Step 2**: Try changing the backup schedule to a time when the VM is under less load (like less CPU or IOPS)
-**Step 3**: Try [increasing the size of the VM](../virtual-machines/windows/resize-vm.md) and retry the operation
+**Step 3**: Try [increasing the size of the VM](../virtual-machines/resize-vm.md) and retry the operation
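A rough Azure CLI sketch of this step is shown below; the resource group, VM name, and target size are placeholders, and you should confirm the size is available for the VM before resizing.

```azurecli
# Check which sizes the VM can be resized to
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table

# Resize the VM to a larger size (hypothetical size)
az vm resize --resource-group myResourceGroup --name myVM --size Standard_DS3_v2
```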
### 320001, ResourceNotFound - Could not perform the operation as VM no longer exists / 400094, BCMV2VMNotFound - The virtual machine doesn't exist / An Azure virtual machine wasn't found
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 08/23/2021 Last updated : 09/14/2021
The resource health check functions in following conditions:
| | | | | | | **Supported Resources** | Recovery Services vault |
-| **Supported Regions** | East US 2, Central US, North Europe, France Central, East Asia, Japan East, Japan West, Australia East, South Africa North. |
+| **Supported Regions** | East US, East US 2, Central US, South Central US, North Central US, West Central US, North Europe, West Europe, France Central, East Asia, South East Asia, Japan East, Japan West, Australia East, Australia Central, Australia Central 2, South Africa North, UAE Central, Brazil South, Switzerland North, Norway East, Germany West Central, West India. |
| **For unsupported regions** | The resource health status is shown as "Unknown". |
cloudfoundry How Cloud Foundry Integrates With Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloudfoundry/how-cloud-foundry-integrates-with-azure.md
Managed disks support smaller disk sizes, for example P4(32 GB) and P6(64 GB) fo
#### Use Azure First Party Taking advantage of Azure's first party service will lower the long-term administration cost, in addition to HA and reliability mentioned in the above sections.
-Pivotal has launched a [Small Footprint ERT](https://docs.pivotal.io/pivotalcf/2-0/customizing/small-footprint.html) for PCF customers, the components are co-located into just 4 VMs, running up to 2500 application instances. The trial version is now available through [Azure Market place](https://azuremarketplace.microsoft.com/marketplace/apps/pivotal.pivotal-cloud-foundry).
+Pivotal has launched a [Small Footprint ERT](https://docs.pivotal.io/pivotalcf/2-0/customizing/small-footprint.html) for PCF customers, the components are co-located into just 4 VMs, running up to 2500 application instances. The trial version is now available through Azure Market place.
## Next Steps Azure integration features are first available with [Open Source Cloud Foundry](https://github.com/cloudfoundry-incubator/bosh-azure-cpi-release/tree/master/docs/advanced/), before it's available on Pivotal Cloud Foundry. Features marked with * are still not available through PCF. Cloud Foundry integration with Azure Stack isn't covered in this document either.
-For PCF support on the features marked with *, or Cloud Foundry integration with Azure Stack, contact your Pivotal and Microsoft account manager for latest status.
+For PCF support on the features marked with *, or Cloud Foundry integration with Azure Stack, contact your Pivotal and Microsoft account manager for latest status.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Overview.md
Modern enterprises and apps can use the Face identification and Face verific
### Identification
-Face identification can be thought of as "one-to-many" matching. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building access to a certain group of people or verifying the user of a device.
+Face identification can address "one-to-many" matching of one face in an image to a set of faces in a secure repository. Match candidates are returned based on how closely their face data matches the query face. This scenario is used in granting building access to a certain group of people or verifying the user of a device.
The following image shows an example of a database named `"myfriends"`. Each group can contain up to 1 million different person objects. Each person object can have up to 248 faces registered.
After you create and train a group, you can do identification against the group
### Verification
-The verification operation answers the question, "Do these two faces belong to the same person?". Verification is also called "one-to-one" matching because the probe face data is compared to only a single enrolled face. Verification is used in the identification scenario to double-check that a given match is accurate.
+The verification operation answers the question, "Do these two faces belong to the same person?".
+
+Verification is also "one-to-one" matching of one face in an image to one face in a secure repository or photo.
+
+Verification can be used in identity verification or access control scenarios to verify that a picture matches a previously captured image (such as a photo from a government-issued ID card).
For more information about identity verification, see the [Facial recognition](concepts/face-recognition.md) concepts guide or the [Identify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239) and [Verify](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523a) API reference documentation.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Neural voices can be used to make interactions with chatbots and voice assistant
> Neural voices are created from samples that use a 24 khz sample rate. > All voices can upsample or downsample to other sample rates when synthesizing. - | Language | Locale | Gender | Voice name | Style support | |||||| | Arabic (Egypt) | `ar-EG` | Female | `ar-EG-SalmaNeural` | General |
Neural voices can be used to make interactions with chatbots and voice assistant
| English (South Africa) | `en-ZA` | Female | `en-ZA-LeahNeural` <sup>New</sup> | General | | English (South Africa) | `en-ZA` | Male | `en-ZA-LukeNeural` <sup>New</sup> | General | | English (United Kingdom) | `en-GB` | Female | `en-GB-LibbyNeural` | General |
-| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-SoniaNeural` <sup>New</sup> | General |
+| English (United Kingdom) | `en-GB` | Female | `en-GB-MiaNeural` <sup>Retires Oct 15, see below</sup> | General |
| English (United Kingdom) | `en-GB` | Male | `en-GB-RyanNeural` | General | | English (United States) | `en-US` | Female | `en-US-AriaNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) | | English (United States) | `en-US` | Female | `en-US-JennyNeural` | General, multiple voice styles available [using SSML](speech-synthesis-markup.md#adjust-speaking-styles) |
Neural voices can be used to make interactions with chatbots and voice assistant
| Welsh (United Kingdom) | `cy-GB` | Female | `cy-GB-NiaNeural` | General | | Welsh (United Kingdom) | `cy-GB` | Male | `cy-GB-AledNeural` | General |
+> [!IMPORTANT]
+> The English (United Kingdom) voice `en-GB-MiaNeural` will be retired on **October 15th, 2021**. All service requests to `en-GB-MiaNeural` will be re-directed to `en-GB-SoniaNeural` automatically after **October 15th, 2021**.
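If your application references `en-GB-MiaNeural` explicitly in SSML, a minimal sketch of the updated request body might look like the following; the sample text is a placeholder and the surrounding REST or SDK call is unchanged.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-GB">
  <!-- Replace the retiring en-GB-MiaNeural voice name with en-GB-SoniaNeural -->
  <voice name="en-GB-SoniaNeural">
    Thank you for calling. How can I help you today?
  </voice>
</speak>
```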
+ #### Neural voices in preview Below neural voices are in public preview.
To learn how you can configure and adjust neural voices, such as Speaking Styles
More than 75 standard voices are available in over 45 languages and locales, which allow you to convert text into synthesized speech. For more information about regional availability, see [regions](regions.md#neural-and-standard-voices).
+> [!IMPORTANT]
+> We are retiring the standard voices on **31st August 2024** and they will no longer be supported after that date. We announced this in emails sent to all existing Speech subscriptions created before **31st August 2021**. During the retiring period (**31st August 2021** - **31st August 2024**), existing standard voice users can continue to use standard voices, but all new users/new speech resources must choose neural voices.
+ > [!NOTE] > With two exceptions, standard voices are created from samples that use a 16 khz sample rate. > **The en-US-AriaRUS** and **en-US-GuyRUS** voices are also created from samples that use a 24 khz sample rate.
cognitive-services Speech Studio Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-studio-role-based-access-control.md
+
+ Title: Role-based access control in Speech Studio - Speech service
+
+description: Learn how to assign access roles to the Speech service through Speech Studio.
++++++ Last updated : 09/07/2021+++
+# Azure role-based access control in Speech Studio
+
+Speech Studio supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you can assign different levels of permissions for your Speech Studio operations to different team members. For more information on Azure RBAC, see the [Azure RBAC documentation](/azure/role-based-access-control/overview).
+
+## Prerequisites
+
+* You must be signed into Speech Studio with your Azure account and Speech resource. See the [Speech Studio overview](speech-studio-overview.md).
+
+## Manage role assignments for Speech resources
+
+To grant access to an Azure speech resource, you add a role assignment through the Azure RBAC tool in the Azure portal.
+
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal?tabs=current).
+
+## Supported built-in roles in Speech Studio
+
+A role definition is a collection of permissions. Use the following recommended built-in roles if you don't have any unique custom requirements for permissions:
+
+| **Built-in role** | **Permission to list resource keys** | **Permission for Custom Speech operations** | **Permission for Custom Voice operations**| **Permission for other capabilities** |
+| | | | | --|
+|**Owner** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
+|**Contributor** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
+|**Cognitive Service Contributors** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
+|**Cognitive Service Users** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
+|**Cognitive Service Speech Contributor** |No |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
+|**Cognitive Service Speech User** |No |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Full access |
+|**Cognitive Services Data Reader (preview)** |No |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Full access |
+
+Alternatively, you can [create your own custom roles](/azure/role-based-access-control/custom-roles). For example, you could create a custom role with the permission to upload custom speech datasets, but without the ability to deploy a custom speech model to an endpoint.
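The following is a hedged sketch of defining a custom role with the Azure CLI. The role name, description, and the two action strings shown are illustrative examples only; the exact Cognitive Services operations that correspond to dataset upload or model deployment aren't listed in this article, so look them up before using a definition like this.

```azurecli
# Create a custom role from an inline JSON definition (sketch only).
# The "Actions" entries are generic examples; substitute the operations your scenario needs.
az role definition create --role-definition '{
  "Name": "Speech Studio custom role (example)",
  "Description": "Example custom role for Speech Studio users.",
  "Actions": [
    "Microsoft.CognitiveServices/accounts/read",
    "Microsoft.CognitiveServices/accounts/listKeys/action"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```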
+
+> [!NOTE]
+> Speech Studio supports key-based authentication. Roles that have permission to list resource keys (`Microsoft.CognitiveServices/accounts/listKeys/action`) are authenticated with a resource key first and have full access to Speech Studio operations, as long as key authentication is enabled in the Azure portal. If the service admin disables key authentication, those roles lose all access to Speech Studio.
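The same `listKeys` permission is what lets you retrieve the resource keys outside the portal. A minimal sketch with the Azure CLI, assuming placeholder resource and resource group names:

```azurecli
# List the keys of a Speech (Cognitive Services) resource.
# Succeeds only if your role grants Microsoft.CognitiveServices/accounts/listKeys/action.
az cognitiveservices account keys list \
  --name "<speech-resource-name>" \
  --resource-group "<resource-group>"
```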
+
+> [!NOTE]
+> A resource can have multiple roles assigned to it or inherited. Your final level of access to the resource is the combination of the permissions that all of your roles grant at the operation level.
+
+## Next steps
+
+Learn more about [Speech service encryption of data at rest](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest).
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/manage-teams-identity.md
dotnet add package Azure.Communication.Identity
dotnet add package Microsoft.Identity.Client ```
+> [!NOTE]
+> Packages for private preview aren't available in official package repositories such as NPM or NuGet.org. You can find the SDKs in the following package repositories: [.NET](https://dev.azure.com/azure-sdk/public/_packaging?_a=package&feed=azure-sdk-for-net&package=Azure.Communication.Identity&protocolType=NuGet&version=1.1.0-alpha.20210531.2) and [JavaScript](https://www.npmjs.com/package/@azure/communication-identity/v/1.1.0-alpha.20210531.1).
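To restore the .NET preview package, you typically need to register the dev feed as a NuGet source first. This is a sketch under the assumption that the package lives on the public azure-sdk-for-net dev feed linked above; confirm the feed URL and the exact preview version on that page before running it.

```bash
# Register the azure-sdk dev feed as a NuGet source (assumed URL), then add the preview package.
dotnet nuget add source \
  "https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json" \
  --name azure-sdk-for-net
dotnet add package Azure.Communication.Identity --version 1.1.0-alpha.20210531.2
```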
+ #### Set up the app framework From the project directory, do the following:
In this quickstart, you learned how to:
Learn about the following concepts: - [Custom Teams endpoint](../concepts/teams-endpoint.md)-- [Teams interoperability](../concepts/teams-interop.md)
+- [Teams interoperability](../concepts/teams-interop.md)
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/apis-list.md
ms.suite: integration Previously updated : 07/01/2021 Last updated : 09/13/2021
An *action* is an operation that follows the trigger and performs some kind of t
## Connector categories
-In Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
+In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a multi-tenant logic app or a single-tenant logic app, which is currently available only in [single-tenant Azure Logic Apps](../logic-apps/single-tenant-overview-compare.md).
[Built-in triggers and actions](built-in.md) run natively on the Logic Apps runtime, don't require creating connections, and perform these kinds of tasks:
In Logic Apps, most triggers and actions are available in either a *built-in* ve
- [Integration account connectors](managed.md#integration-account-connectors) that support business-to-business (B2B) communication scenarios. - [Integration service environment (ISE) connectors](managed.md#ise-connectors) that are a small group of [managed connectors available only for ISEs](#ise-and-connectors).
+<a name="connection-configuration"></a>
+ ## Connection configuration
-Most connectors require that you first create a *connection* to the target service or system before you can use its triggers or actions in your workflow. To create a connection, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account.
+To create or manage logic app resources and connections, you need certain permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has these specific roles:
+
+* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
+
+* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
+
+* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+
+ For example, suppose you have to work with a logic app that you didn't create and need to authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
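If you're not sure which of these roles you already hold, you can check your assignments before troubleshooting a connection. A minimal sketch with the Azure CLI; the signed-in user and resource group name are placeholders:

```azurecli
# List the role assignments you hold on the resource group that contains the logic app.
az role assignment list \
  --assignee "user@contoso.com" \
  --resource-group "<resource-group>" \
  --output table
```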
+
+Before you can use a connector's triggers or actions in your workflow, most connectors require that you first create a *connection* to the target service or system. To create a connection from within a logic app workflow, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For a small number of built-in operations and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
+
+<a name="connection-security-encyrption"></a>
### Connection security and encryption
-For connectors that use Azure Active Directory (Azure AD) OAuth, such as Office 365, Salesforce, or GitHub, you must sign into the service where your access token is [encrypted](../security/fundamentals/encryption-overview.md) and securely stored in an Azure secret. Other connectors, such as FTP and SQL, require a connection that has configuration details, such as the server address, username, and password. These connection configuration details are also [encrypted and securely stored in Azure](../security/fundamentals/encryption-overview.md).
+Connection configuration details, such as the server address, username, password, credentials, and secrets, are [encrypted and stored in the secured Azure environment](../security/fundamentals/encryption-overview.md). This information can be used only in logic app resources and by clients who have permissions for the connection resource, which is enforced using linked access checks. Connections that use Azure Active Directory Open Authentication (Azure AD OAuth), such as Office 365, Salesforce, and GitHub, require that you sign in, but Azure Logic Apps stores only access and refresh tokens as secrets, not sign-in credentials.
Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, the Logic Apps service refreshes access tokens indefinitely. Other services might have limits on how long Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens.
Although you create connections from within a workflow, connections are separate
> [!TIP] > If your organization doesn't permit you to access specific resources through Logic Apps connectors, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md).
+For more information about securing logic apps and connections, review [Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md).
+ <a name="firewall-access"></a> ### Firewall access for connections
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-native-http.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 09/13/2021 tags: connectors
For example, suppose you have a logic app that sends an HTTP POST request to a w
## Asynchronous request-response behavior
-By default, all HTTP-based actions in Azure Logic Apps follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). This pattern specifies that after an HTTP action calls or sends a request to an endpoint, service, system, or API, the receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URL and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
+For *stateful* workflows in both multi-tenant and single-tenant Azure Logic Apps, all HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply) as the default behavior. This pattern specifies that after an HTTP action calls or sends a request to an endpoint, service, system, or API, the receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
+
+For *stateless* workflows in single-tenant Azure Logic Apps, HTTP-based actions don't use the asynchronous operation pattern. Instead, they only run synchronously, return the ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response as-is, and proceed to the next step in the workflow execution. If the response includes a `location` header, a stateless workflow won't poll the specified URI to check the status. To follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply), use a stateful workflow instead.
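As a rough illustration of the asynchronous pattern that stateful workflows follow, the sketch below sends a request, reads the `location` header from the 202 response, and polls that URI until a non-202 status comes back. The endpoint URL, payload, and polling interval are placeholders, not values from this article.

```bash
# Send the initial request and capture the "location" header from the 202 ACCEPTED response.
STATUS_URL=$(curl -s -D - -o /dev/null -X POST "https://example.com/api/long-running-operation" \
  -H "Content-Type: application/json" -d '{"input":"sample"}' \
  | grep -i '^location:' | tr -d '\r' | awk '{print $2}')

# Poll the status URI until the receiver returns something other than 202.
while :; do
  CODE=$(curl -s -o /dev/null -w '%{http_code}' "$STATUS_URL")
  [ "$CODE" != "202" ] && break
  sleep 10   # wait before checking again
done
echo "Final status code: $CODE"
```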
* In the Logic App Designer, the HTTP action, but not trigger, has an **Asynchronous Pattern** setting, which is enabled by default. This setting specifies that the caller doesn't wait for processing to finish and can move on to the next action but continues checking the status until processing stops. If disabled, this setting specifies that the caller waits for processing to finish before moving on to the next action.
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md
Title: Zone-redundant registry for high availability description: Learn about enabling zone redundancy in Azure Container Registry. Create a container registry or replication in an Azure availability zone. Zone redundancy is a feature of the Premium service tier. Previously updated : 02/23/2021 Last updated : 09/13/2021
Zone redundancy is a **preview** feature of the Premium container registry servi
## Preview limitations
-* Currently supported in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Japan East.
+* Currently supported in the following regions:
+
+ |Americas |Europe |Africa |Asia Pacific |
+ |--|--|--|--|
+ |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>West Europe<br/>UK South |South Africa North<br/> |Australia East<br/>Central India<br/>Japan East<br/>Korea Central<br/> |
+ * Region conversions to availability zones aren't currently supported. To enable availability zone support in a region, the registry must either be created in the desired region, with availability zone support enabled, or a replicated region must be added with availability zone support enabled. * Zone redundancy can't be disabled in a region. * [ACR Tasks](container-registry-tasks-overview.md) doesn't yet support availability zones.
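For reference, a new registry with zone redundancy enabled can be created with the Azure CLI. This is a minimal sketch, assuming the preview `--zone-redundancy` parameter of `az acr create`; the registry name, resource group, and region are placeholders.

```azurecli
# Create a Premium registry with zone redundancy enabled in a supported region.
az acr create \
  --resource-group "<resource-group>" \
  --name "<registry-name>" \
  --location eastus \
  --sku Premium \
  --zone-redundancy Enabled
```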
cosmos-db Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/compliance.md
Title: Azure Cosmos DB compliance
-description: This article describes certification coverage for Azure Cosmos DB compliance offerings.
+description: This article describes compliance coverage for Azure Cosmos DB.
Previously updated : 03/18/2020 Last updated : 09/11/2021 - # Compliance in Azure Cosmos DB -
-Azure Cosmos DB is available in all Azure regions. Microsoft makes five distinct Azure cloud environments available to customers:
-* **Azure public** cloud, which is available globally.
-
-* **Azure China 21Vianet** is available through a unique partnership between Microsoft and 21Vianet, one of the country/region's largest internet providers.
-* **Azure Germany** provides services under a data trustee model, which ensures that customer data remains in Germany under the control of T-Systems International GmbH, a subsidiary of Deutsche Telekom, acting as the German data trustee.
+Azure Cosmos DB is available in all Azure regions. Microsoft makes the following Azure cloud environments available to customers:
-* **Azure Government** is available in four regions in the United States to US government agencies and their partners.
+- **Azure** is available globally. It is sometimes referred to as Azure commercial, Azure public, or Azure global.
+- **Azure China** is available through a unique partnership between Microsoft and 21Vianet, one of the country's largest Internet providers.
+- **Azure Government** is available from five regions in the United States to US government agencies and their partners. Two regions (US DoD Central and US DoD East) are reserved for exclusive use by the US Department of Defense.
+- **Azure Government Secret** is available from three regions exclusively for the needs of US Government and designed to accommodate classified Secret workloads and native connectivity to classified networks.
+- **Azure Government Top Secret** serves the national security mission and empowers leaders across the Intelligence Community (IC), Department of Defense (DoD), and Federal Civilian agencies to process national security workloads classified at the US Top Secret level.
-* **Azure Government for Department of Defense(DoD)** is available in two regions in the United States to the US Department of Defense.
+To help you meet your own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of [compliance offerings](../compliance/index.yml)) and depth (number of [customer-facing services](https://azure.microsoft.com/services/) in assessment scope). For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
-To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry in terms of both breadth (total number of offerings) and depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments - globally applicable, US Government, industry specific, and region or country/region specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft.
+Azure compliance offerings are grouped into four segments - globally applicable, US Government, industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft.
## Azure Cosmos DB certifications
To find out the latest compliance certifications for Azure Cosmos DB, see [Micro
## Next steps
-To learn more about Azure compliance certifications, see the following articles:
-
-* To find out the latest compliance certifications for Azure Cosmos DB, see [Microsoft Azure Compliance offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/), Appendix A & B.
-
-* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos database security](database-security.md).
-
-* For recommendations to improve the security posture of your Azure Cosmos DB deployment, see the [Azure Cosmos DB Security Baseline](security-baseline.md).
-
-* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
-
-* For FedRAMP compliance information, see [Azure services by FedRAMP and DoD CC SRG audit scope](../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
+To learn more about Azure compliance coverage, see the following articles:
-* For DoD compliance information, see [DoD Compliance Offerings](/microsoft-365/compliance/offering-dod-disa-l2-l4-l5).
+- To find out the latest compliance certifications for Azure Cosmos DB, see [Microsoft Azure Compliance offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/), Appendix A & B.
+- For an overview of Azure Cosmos DB security and the latest improvements, see [Security in Azure Cosmos DB](database-security.md).
+- For recommendations to improve the security posture of your Azure Cosmos DB deployment, see [Azure Cosmos DB security baseline](security-baseline.md).
+- For more information about Azure certifications, see [Azure compliance documentation](../compliance/index.yml).
+- For FedRAMP and DoD compliance audit scope, see [Cloud services by audit scope](../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
+- For DoD compliance information, see [DoD IL4](/azure/compliance/offerings/offering-dod-il4) and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) compliance coverage.
cosmos-db How To Migrate From Change Feed Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/how-to-migrate-from-change-feed-library.md
Previously updated : 08/26/2021 Last updated : 09/13/2021
The .NET V3 SDK has several breaking changes, the following are the key steps to
1. Customizations that use `WithProcessorOptions` should be updated to use `WithLeaseConfiguration` and `WithPollInterval` for intervals, `WithStartTime` [for start time](./change-feed-processor.md#starting-time), and `WithMaxItems` to define the maximum item count. 1. Set the `processorName` on `GetChangeFeedProcessorBuilder` to match the value configured on `ChangeFeedProcessorOptions.LeasePrefix`, or use `string.Empty` otherwise. 1. The changes are no longer delivered as a `IReadOnlyList<Document>`, instead, it's a `IReadOnlyCollection<T>` where `T` is a type you need to define, there is no base item class anymore.
-1. To handle the changes, you no longer need an implementation, instead you need to [define a delegate](change-feed-processor.md#implementing-the-change-feed-processor). The delegate can be a static Function or, if you need to maintain state across executions, you can create your own class and pass an instance method as delegate.
+1. To handle the changes, you no longer need an implementation of `IChangeFeedObserver`, instead you need to [define a delegate](change-feed-processor.md#implementing-the-change-feed-processor). The delegate can be a static Function or, if you need to maintain state across executions, you can create your own class and pass an instance method as delegate.
For example, if the original code to build the change feed processor looks as follows:
The migrated code will look like:
[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=ChangeFeedProcessorMigrated)]
-And the delegate, can be a static method:
+For the delegate, you can have a static method to receive the events. If you were consuming information from the `IChangeFeedObserverContext`, you can migrate to use the `ChangeFeedProcessorContext`:
+
+* `ChangeFeedProcessorContext.LeaseToken` can be used instead of `IChangeFeedObserverContext.PartitionKeyRangeId`
+* `ChangeFeedProcessorContext.Headers` can be used instead of `IChangeFeedObserverContext.FeedResponse`
+* `ChangeFeedProcessorContext.Diagnostics` contains detailed information about request latency for troubleshooting
[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=Delegate)]
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-sdk-java-spark-v3.md
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-**Azure Cosmos DB Spark 3 OLTP connector** provides Apache Spark v3 support for Azure Cosmos DB using
-the SQL API.
-[Azure Cosmos DB](../introduction.md) is a globally-distributed database service which allows
-developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
-
-## Documentation
--- [Getting started](https://github.com/Azure/azure-sdk-for-jav)-- [Catalog API](https://github.com/Azure/azure-sdk-for-jav)-- [Configuration Parameter Reference](https://github.com/Azure/azure-sdk-for-jav)--
-## Version compatibility
-
-| Connector | Spark | Minimum Java version | Supported Scala versions |
-| - | - | -- | -- |
-| 4.0.0 | 3.1.1 | 8 | 2.12 |
-
-## Download
-
-You can use the maven coordinate of the jar to auto install the Spark Connector to your Databricks Runtime 8 from Maven:
-`com.azure.cosmos.spark:azure-cosmos-spark_3-1_2-12:4.1.0`
-
-You can also integrate against Cosmos DB Spark Connector in your SBT project:
-```scala
-libraryDependencies += "com.azure.cosmos.spark" % "azure-cosmos-spark_3-1_2-12" % "4.1.0"
-```
-
-Cosmos DB Spark Connector is available on [Maven Central Repo](https://search.maven.org/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-1_2-12/).
-
-### General
-
-If you encounter any bug, please file an issue [here](https://github.com/Azure/azure-sdk-for-java/issues/new).
-
-To suggest a new feature or changes that could be made, file an issue the same way you would for a bug.
--
-## Next steps
-
-Review our [quickstart guide for working with Azure Cosmos DB Spark 3 OLTP connector](create-sql-api-spark.md).
cosmos-db Sql Query Ltrim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-ltrim.md
Previously updated : 09/13/2019 Last updated : 09/14/2021 # LTRIM (Azure Cosmos DB) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns a string expression after it removes leading blanks.
+ Returns a string expression after it removes leading whitespace or specified characters.
## Syntax ```sql
-LTRIM(<str_expr>)
+LTRIM(<str_expr1>[, <str_expr2>])
``` ## Arguments
-*str_expr*
- Is a string expression.
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is an optional string expression to be trimmed from *str_expr1*. If not set, the default is whitespace.
## Return types
LTRIM(<str_expr>)
The following example shows how to use `LTRIM` inside a query. ```sql
-SELECT LTRIM(" abc") AS l1, LTRIM("abc") AS l2, LTRIM("abc ") AS l3
+SELECT LTRIM(" abc") AS t1,
+LTRIM(" abc ") AS t2,
+LTRIM("abc ") AS t3,
+LTRIM("abc") AS t4,
+LTRIM("abc", "ab") AS t5,
+LTRIM("abc", "abc") AS t6
``` Here is the result set. ```json
-[{"l1": "abc", "l2": "abc", "l3": "abc "}]
-```
+[
+ {
+ "t1": "abc",
+ "t2": "abc ",
+ "t3": "abc ",
+ "t4": "abc",
+ "t5": "c",
+ "t6": ""
+ }
+]
+```
## Remarks
cosmos-db Sql Query Rtrim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-rtrim.md
Previously updated : 03/03/2020 Last updated : 09/14/2021 # RTRIM (Azure Cosmos DB) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns a string expression after it removes trailing blanks.
+ Returns a string expression after it removes trailing whitespace or specified characters.
## Syntax ```sql
-RTRIM(<str_expr>)
+RTRIM(<str_expr1>[, <str_expr2>])
``` ## Arguments
-*str_expr*
- Is any valid string expression.
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is an optional string expression to be trimmed from *str_expr1*. If not set, the default is whitespace.
## Return types
RTRIM(<str_expr>)
The following example shows how to use `RTRIM` inside a query. ```sql
-SELECT RTRIM(" abc") AS r1, RTRIM("abc") AS r2, RTRIM("abc ") AS r3
+SELECT RTRIM(" abc") AS t1,
+RTRIM(" abc ") AS t2,
+RTRIM("abc ") AS t3,
+RTRIM("abc") AS t4,
+RTRIM("abc", "bc") AS t5,
+RTRIM("abc", "abc") AS t6
``` Here is the result set. ```json
-[{"r1": " abc", "r2": "abc", "r3": "abc"}]
-```
+[
+ {
+ "t1": " abc",
+ "t2": " abc",
+ "t3": "abc",
+ "t4": "abc",
+ "t5": "a",
+ "t6": ""
+ }
+]
+```
## Remarks
cosmos-db Sql Query Trim https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-trim.md
Previously updated : 03/04/2020 Last updated : 09/14/2021 # TRIM (Azure Cosmos DB) [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
- Returns a string expression after it removes leading and trailing blanks.
+Returns a string expression after it removes leading and trailing whitespace or specified characters.
## Syntax ```sql
-TRIM(<str_expr>)
+TRIM(<str_expr1>[, <str_expr2>])
``` ## Arguments
-*str_expr*
- Is a string expression.
-
+*str_expr1*
+ Is a string expression.
+
+*str_expr2*
+ Is an optional string expression to be trimmed from *str_expr1*. If not set, the default is whitespace.
+ ## Return types Returns a string expression.
TRIM(<str_expr>)
The following example shows how to use `TRIM` inside a query. ```sql
-SELECT TRIM(" abc") AS t1, TRIM(" abc ") AS t2, TRIM("abc ") AS t3, TRIM("abc") AS t4
+SELECT TRIM(" abc") AS t1,
+TRIM(" abc ") AS t2,
+TRIM("abc ") AS t3,
+TRIM("abc") AS t4,
+TRIM("abc", "ab") AS t5,
+TRIM("abc", "abc") AS t6
``` Here is the result set. ```json
-[{"t1": "abc", "t2": "abc", "t3": "abc", "t4": "abc"}]
+[
+ {
+ "t1": "abc",
+ "t2": "abc",
+ "t3": "abc",
+ "t4": "abc",
+ "t5": "c",
+ "t6": ""
+ }
+]
``` ## Remarks
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
When you have access to your customer's resources, use the Azure portal, PowerSh
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
-1. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your partner profile.
+1. Enter the [Microsoft Partner Network](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated MPN ID** shown on your Partner Center profile. This ID is typically known as your [Partner Location Account MPN ID](/partner-center/account-structure).
:::image type="content" source="./media/link-partner-id-power-apps-accounts/link-partner-id.png" alt-text="Screenshot showing the Link to a partner ID window." lightbox="./media/link-partner-id-power-apps-accounts/link-partner-id.png" ::: 1. To link your partner ID to another customer, switch the directory. Under **Switch directory**, select the appropriate directory. :::image type="content" source="./media/link-partner-id-power-apps-accounts/switch-directory.png" alt-text="Screenshot showing the Directory + subscription window where can you switch your directory." lightbox="./media/link-partner-id-power-apps-accounts/switch-directory.png" :::
cost-management-billing View Purchase Refunds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/view-purchase-refunds.md
# View reservation purchase and refund transactions
-There are a few different ways to view reservation purchase and refund transactions. You can use the Azure portal, Power BI, and REST APIs.
+There are a few different ways to view reservation purchase and refund transactions. You can use the Azure portal, Power BI, and REST APIs. An exchanged reservation appears as a refund and a purchase in the transactions.
## View reservation purchases in the Azure portal
If you have questions or need help, [create a support request](https://portal.az
- To learn how to manage a reservation, see [Manage Azure Reservations](manage-reserved-vm-instance.md). - To learn more about Azure Reservations, see the following articles: - [What are Azure Reservations?](save-compute-costs-reservations.md)
- - [Manage Reservations in Azure](manage-reserved-vm-instance.md)
+ - [Manage Reservations in Azure](manage-reserved-vm-instance.md)
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-private-link.md
Finally, you must create the private endpoint in your data factory.
> Disabling public network access is applicable only to the self-hosted integration runtime, not to Azure Integration Runtime and SQL Server Integration Services (SSIS) Integration Runtime. > [!NOTE]
-> You can still access the Azure Data Factory portal through a public network after you create private endpoint for portal.
+> You can still access the Azure Data Factory portal through a public network after you create a private endpoint for the portal.
## Next steps
databox-online Azure Stack Edge Gpu Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-connect-powershell-interface.md
You want to perform this configuration before you configure compute from the Azu
`Set-HcsKubeClusterNetworkInfo -PodSubnet <subnet details> -ServiceSubnet <subnet details>`
- Replace the <subnet details> with the subnet range that you want to use.
+ Replace the \<subnet details\> with the subnet range that you want to use.
1. Once you have run this command, you can use the `Get-HcsKubeClusterNetworkInfo` command to verify that the pod and service subnets have changed.
To change the memory or processor limits for Kubernetes worker node, do the foll
1. To change the values of memory and processors for the worker node, run the following command:
- Set-AzureDataBoxEdgeRoleCompute -Name <Name value from the output of Get-AzureDataBoxEdgeRole> -Memory <Value in Bytes> -ProcessorCount <No. of cores>
+ ```powershell
+ Set-AzureDataBoxEdgeRoleCompute -Name <Name value from the output of Get-AzureDataBoxEdgeRole> -Memory <Value in Bytes> -ProcessorCount <No. of cores>
+ ```
- Here is a sample output.
+ Here is a sample output.
```powershell [10.100.10.10]: PS>Set-AzureDataBoxEdgeRoleCompute -Name IotRole -MemoryInBytes 32GB -ProcessorCount 16
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-deploy-copy-data.md
Previously updated : 09/03/2019 Last updated : 09/14/2019 ms.localizationpriority: high
If you did not use the Split Copy tool to copy data, you will need to validate y
> [!TIP] > - Reset the tool between two runs.
- > - Use option 1 if dealing with large data set containing small files (~ KBs). This option only validates the files, as checksum generation may take a very long time and the performance could be very slow.
 > - The checksum process may take more time if you have a large data set containing small files (~KBs). If you use option 1 and skip checksum creation, you need to independently verify the integrity of the data uploaded to Azure, preferably via checksums, before you delete any copies of the data in your possession.
3. If using multiple disks, run the command for each disk.
Take the following steps to verify your data.
For more information on data validation, see [Validate data](#validate-data). If you experience errors during validation, see [troubleshoot validation errors](data-box-disk-troubleshoot.md).
defender-for-iot How To Deploy Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-deploy-certificates.md
This section describes how to convert existing certificates files to supported f
|**Description** | **CLI command** | |--|--|
-| Convert .crt file to .pem file | openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt |
-| Convert .pem file to .crt file | openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt |
-| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to .pem | openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes. You can add -nocerts to only output the private key, or add -nokeys to only output the certificates. |
+| Convert .crt file to .pem file | `openssl x509 -inform DER -in <full path>/<crt-file-name>.crt -out <full path>/<pem-file-name>.pem -outform PEM` |
+| Convert .pem file to .crt file | `openssl x509 -inform PEM -in <full path>/<pem-file-name>.pem -out <fullpath>/<crt-file-name>.crt` |
+| Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to .pem | `openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes`. You can add -nocerts to only output the private key, or add -nokeys to only output the certificates. |
## Troubleshooting
defender-for-iot References Work With Defender For Iot Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-apis.md
Array of JSON objects that represent devices.
| Type | APIs | Example | |--|--|--|
-| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:<span>//127<span>.0.0.1/api/v1/devices?authorized=true |
+| GET | `curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/api/v1/devices?authorized=true` |
### Retrieve device connection information - /api/v1/devices/connections
Array of JSON objects that represent device connections.
> [!div class="mx-tdBreakAll"] > | Type | APIs | Example | > |--|--|--|
-> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/connections | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/connections |
-> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/devices/<deviceId>/connections?lastActiveInMinutes=&discoveredBefore=&discoveredAfter=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/api/v1/devices/2/connections?lastActiveInMinutes=20&discoveredBefore=1594550986000&discoveredAfter=1594550986000' |
+> | GET | `curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/connections` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/api/v1/devices/connections` |
+> | GET | `curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/devices/<deviceId>/connections?lastActiveInMinutes=&discoveredBefore=&discoveredAfter='` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://127.0.0.1/api/v1/devices/2/connections?lastActiveInMinutes=20&discoveredBefore=1594550986000&discoveredAfter=1594550986000'` |
### Retrieve information on CVEs - /api/v1/devices/cves
Array of JSON objects that represent CVEs identified on IP addresses.
| Type | APIs | Example | |--|--|--|
-| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/cves | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/cves |
-| GET | curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/<deviceIpAddress>/cves?top= | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/api/v1/devices/10.10.10.15/cves?top=50 |
+| GET | `curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/cves` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/api/v1/devices/cves` |
+| GET | `curl -k -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/api/v1/devices/<deviceIpAddress>/cves?top=` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/api/v1/devices/10.10.10.15/cves?top=50` |
### Retrieve alert information - /api/v1/alerts
Note that /api/v2/ is needed for the following information:
> [!div class="mx-tdBreakAll"] > | Type | APIs | Example | > |--|--|--|
-> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/alerts?state=&fromTime=&toTime=&type=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/api/v1/alerts?state=unhandled&fromTime=1594550986000&toTime=1594550986001&type=disconnections' |
+> | GET | `curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/api/v1/alerts?state=&fromTime=&toTime=&type='` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://127.0.0.1/api/v1/alerts?state=unhandled&fromTime=1594550986000&toTime=1594550986001&type=disconnections'` |
### Retrieve timeline events - /api/v1/events
Note that /api/v2/ is needed for the following information:
```rest
[
    {
        "engine": "Operational",
        "handled": false,
        "title": "Traffic Detected on sensor Interface",
        "additionalInformation": null,
        "sourceDevice": 0,
        "zoneId": 1,
        "siteId": 1,
        "time": 1594808245000,
        "sensorId": 1,
        "message": "The sensor resumed detecting network traffic on ens224.",
        "destinationDevice": 0,
        "id": 1,
        "severity": "Warning"
    },
    {
        "engine": "Anomaly",
        "handled": false,
        "title": "Address Scan Detected",
        "additionalInformation": null,
        "sourceDevice": 4,
        "zoneId": 1,
        "siteId": 1,
        "time": 1594808260000,
        "sensorId": 1,
        "message": "Address scan detected.\nScanning address: 10.10.10.22\nScanned subnet: 10.11.0.0/16\nScanned addresses: 10.11.1.1, 10.11.20.1, 10.11.20.10, 10.11.20.100, 10.11.20.2, 10.11.20.3, 10.11.20.4, 10.11.20.5, 10.11.20.6, 10.11.20.7...\nIt is recommended to notify the security officer of the incident.",
        "destinationDevice": 0,
        "id": 2,
        "severity": "Critical"
    },
    {
        "engine": "Operational",
        "handled": false,
        "title": "Suspicion of Unresponsive MODBUS Device",
        "additionalInformation": null,
        "sourceDevice": 194,
        "zoneId": 1,
        "siteId": 1,
        "time": 1594808285000,
        "sensorId": 1,
        "message": "Outstation device 10.13.10.1 (Protocol Address 53) seems to be unresponsive to MODBUS requests.",
        "destinationDevice": 0,
        "id": 3,
        "severity": "Minor"
    }
]
```
Note that /api/v2/ is needed for the following information:
> [!div class="mx-tdBreakAll"] > | Type | APIs | Example | > |--|--|--|
-> | GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<>IP_ADDRESS>/external/v1/alerts?state=&zoneId=&fromTime=&toTime=&siteId=&sensor=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/external/v1/alerts?state=unhandled&zoneId=1&fromTime=0&toTime=1594551777000&siteId=1&sensor=1' |
+> | GET | `curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/alerts?state=&zoneId=&fromTime=&toTime=&siteId=&sensor='` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://127.0.0.1/external/v1/alerts?state=unhandled&zoneId=1&fromTime=0&toTime=1594551777000&siteId=1&sensor=1'` |
### QRadar alerts
Array of JSON objects that represent devices.
| Type | APIs | Example | |--|--|--|
-| PUT | curl -k -X PUT -d '{"action": "<ACTION>"}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/alerts/<UUID> | curl -k -X PUT -d '{"action": "handle"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/alerts/1-1594550943000 |
+| PUT | `curl -k -X PUT -d '{"action": "<ACTION>"}' -H "Authorization: <AUTH_TOKEN>" https://<IP_ADDRESS>/external/v1/alerts/<UUID>` | `curl -k -X PUT -d '{"action": "handle"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/external/v1/alerts/1-1594550943000` |
### Alert exclusions (maintenance window) - /external/v1/maintenanceWindow
The APIs that you define here appear in the on-premises management console's **A
- **engines**: Defines from which security engine to suppress alerts during the maintenance process:
  - ANOMALY
  - MALWARE
  - OPERATIONAL
  - POLICY_VIOLATION
  - PROTOCOL_VIOLATION
- **sensorIds**: Defines from which Defender for IoT sensor to suppress alerts during the maintenance process. It's the same ID retrieved from /api/v1/appliances (GET).
The APIs that you define here appear in the on-premises management console's **A
- **400 (Bad Request)**: Appears in the following cases:
  - The **ttl** parameter is not numeric or not positive.
  - The **subnets** parameter was defined using a wrong format.
  - The **ticketId** parameter is missing.
  - The **engine** parameter does not match the existing security engines.
- **404 (Not Found)**: One of the sensors doesn't exist.
This method is useful when you want to set a longer duration than the currently
- **400 (Bad Request)**: Appears in the following cases:
  - The **ttl** parameter is not numeric or not positive.
  - The **ticketId** parameter is missing.
  - The **ttl** parameter is missing.
- **404 (Not Found)**: The ticket ID is not linked to an open maintenance window.
Array of JSON objects that represent maintenance window operations.
| Type | APIs | Example | |--|--|--|
-| POST | curl -k -X POST -d '{"ticketId": "<TICKET_ID>",ttl": <TIME_TO_LIVE>,"engines": [<ENGINE1, ENGINE2...ENGINEn>],"sensorIds": [<SENSOR_ID1, SENSOR_ID2...SENSOR_IDn>],"subnets": [<SUBNET1, SUBNET2....SUBNETn>]}' -H "Authorization: <AUTH_TOKEN>" https:/<span>/127.0.0.1/external/v1/maintenanceWindow | curl -k -X POST -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20","engines": ["ANOMALY"],"sensorIds": ["5","3"],"subnets": ["10.0.0.3"]}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
-| PUT | curl -k -X PUT -d '{"ticketId": "<TICKET_ID>",ttl": "<TIME_TO_LIVE>"}' -H "Authorization: <AUTH_TOKEN>" https:/<span>/127.0.0.1/external/v1/maintenanceWindow | curl -k -X PUT -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
-| DELETE | curl -k -X DELETE -d '{"ticketId": "<TICKET_ID>"}' -H "Authorization: <AUTH_TOKEN>" https:/<span>/127.0.0.1/external/v1/maintenanceWindow | curl -k -X DELETE -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https:/<span>/127.0.0.1/external/v1/maintenanceWindow |
-| GET | curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/maintenanceWindow?fromDate=&toDate=&ticketId=&tokenName=' | curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https:/<span>/127.0.0.1/external/v1/maintenanceWindow?fromDate=2020-01-01&toDate=2020-07-14&ticketId=a5fe99c-d914-4bda-9332-307384fe40bf&tokenName=a' |
+| POST | `curl -k -X POST -d '{"ticketId": "<TICKET_ID>","ttl": <TIME_TO_LIVE>,"engines": [<ENGINE1, ENGINE2...ENGINEn>],"sensorIds": [<SENSOR_ID1, SENSOR_ID2...SENSOR_IDn>],"subnets": [<SUBNET1, SUBNET2....SUBNETn>]}' -H "Authorization: <AUTH_TOKEN>" https://127.0.0.1/external/v1/maintenanceWindow` | `curl -k -X POST -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20","engines": ["ANOMALY"],"sensorIds": ["5","3"],"subnets": ["10.0.0.3"]}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/external/v1/maintenanceWindow` |
+| PUT | `curl -k -X PUT -d '{"ticketId": "<TICKET_ID>","ttl": "<TIME_TO_LIVE>"}' -H "Authorization: <AUTH_TOKEN>" https://127.0.0.1/external/v1/maintenanceWindow` | `curl -k -X PUT -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf","ttl": "20"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/external/v1/maintenanceWindow` |
+| DELETE | `curl -k -X DELETE -d '{"ticketId": "<TICKET_ID>"}' -H "Authorization: <AUTH_TOKEN>" https://127.0.0.1/external/v1/maintenanceWindow` | `curl -k -X DELETE -d '{"ticketId": "a5fe99c-d914-4bda-9332-307384fe40bf"}' -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" https://127.0.0.1/external/v1/maintenanceWindow` |
+| GET | `curl -k -H "Authorization: <AUTH_TOKEN>" 'https://<IP_ADDRESS>/external/v1/maintenanceWindow?fromDate=&toDate=&ticketId=&tokenName='` | `curl -k -H "Authorization: 1234b734a9244d54ab8d40aedddcabcd" 'https://127.0.0.1/external/v1/maintenanceWindow?fromDate=2020-01-01&toDate=2020-07-14&ticketId=a5fe99c-d914-4bda-9332-307384fe40bf&tokenName=a'` |
### Authenticate user credentials - /external/authentication/validation
response:
| Type | APIs | Example | |--|--|--|
-| POST | curl -k -d '{"username":"<USER_NAME>","password":"PASSWORD"}' 'https://<IP_ADDRESS>/external/authentication/validation' | curl -k -d '{"username":"myUser","password":"1234@abcd"}' 'https:/<span>/127.0.0.1/external/authentication/validation' |
+| POST | `curl -k -d '{"username":"<USER_NAME>","password":"PASSWORD"}' 'https://<IP_ADDRESS>/external/authentication/validation'` | `curl -k -d '{"username":"myUser","password":"1234@abcd"}' 'https://127.0.0.1/external/authentication/validation'` |
### Change password - /external/authentication/set_password
response:
| Type | APIs | Example | |--|--|--|
-| POST | curl -k -d '{"username": "<USER_NAME>","password": "<CURRENT_PASSWORD>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password | curl -k -d '{"username": "myUser","password": "1234@abcd","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/external/authentication/set_password |
+| POST | `curl -k -d '{"username": "<USER_NAME>","password": "<CURRENT_PASSWORD>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password` | `curl -k -d '{"username": "myUser","password": "1234@abcd","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https://127.0.0.1/external/authentication/set_password` |
### User password update by system admin - /external/authentication/set_password_by_admin
response:
> [!div class="mx-tdBreakAll"] > | Type | APIs | Example | > |--|--|--|
-> | POST | curl -k -d '{"admin_username":"<ADMIN_USERNAME>","admin_password":"<ADMIN_PASSWORD>","username": "<USER_NAME>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password_by_admin | curl -k -d '{"admin_user":"adminUser","admin_password": "1234@abcd","username": "myUser","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https:/<span>/127.0.0.1/external/authentication/set_password_by_admin |
+> | POST | `curl -k -d '{"admin_username":"<ADMIN_USERNAME>","admin_password":"<ADMIN_PASSWORD>","username": "<USER_NAME>","new_password": "<NEW_PASSWORD>"}' -H 'Content-Type: application/json' https://<IP_ADDRESS>/external/authentication/set_password_by_admin` | `curl -k -d '{"admin_user":"adminUser","admin_password": "1234@abcd","username": "myUser","new_password": "abcd@1234"}' -H 'Content-Type: application/json' https://127.0.0.1/external/authentication/set_password_by_admin` |
## Next steps
devtest-labs Activity Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/activity-logs.md
Title: Activity logs in Azure DevTest Labs | Microsoft Docs
+ Title: Activity logs
description: This article provides steps to view activity logs for Azure DevTest Labs. Last updated 07/10/2020
For more information about activity logs, see [Azure Activity Log](../azure-moni
- To learn about setting **alerts** on activity logs, see [Create alerts](create-alerts.md). - To learn more about activity logs, see [Azure Activity Log](../azure-monitor/essentials/activity-log.md).-
devtest-labs Add Artifact Repository https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-artifact-repository.md
Title: Add an artifact repository to your lab in Azure DevTest Labs | Microsoft Docs
+ Title: Add an artifact repository to your lab
description: Learn how to specify your own artifact repository for your lab in Azure DevTest Labs to store tools unavailable in the public artifact repository.-+ Last updated 06/26/2020
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-artifact-vm.md
Title: Add an artifact to a VM in Azure DevTest Labs | Microsoft Docs
+ Title: Add an artifact to a VM
description: Learn how to add an artifact to a virtual machine in a lab in Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Add Vm Use Shared Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/add-vm-use-shared-image.md
Title: Add a VM using a shared image in Azure DevTest Labs | Microsoft Docs
+ Title: Add a VM using a shared image
description: Learn how to add a virtual machine (VM) using an image from the attached shared image gallery in Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/automate-add-lab-user.md
Title: Automate adding a lab user in Azure DevTest Labs | Microsoft Docs
+ Title: Automate adding a lab user
description: This article shows you how to automate adding a user to a lab in Azure DevTest Labs using Azure Resource Manager templates, PowerShell, and CLI. -+ Last updated 06/26/2020
devtest-labs Best Practices Distributive Collaborative Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/best-practices-distributive-collaborative-development-environment.md
Title: Distributed collaborative development of Azure DevTest Labs resources description: Provides best practices for setting up a distributed and collaborative development environment to develop DevTest Labs resources. -+ Last updated 06/26/2020
devtest-labs Configure Lab Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/configure-lab-identity.md
Title: Configure a lab identity in Azure DevTest Labs
+ Title: Configure a lab identity
description: Learn how to configure a lab identity in Azure DevTest.-+ Last updated 08/20/2020
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Title: Configure a lab to use Remote Desktop Gateway in Azure DevTest Labs
+ Title: Configure a lab to use Remote Desktop Gateway
description: Learn how to configure a lab in Azure DevTest Labs with a remote desktop gateway to ensure secure access to the lab VMs without having to expose the RDP port. -+ Last updated 06/26/2020
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/configure-shared-image-gallery.md
Title: Configure a shared image gallery in Azure DevTest Labs | Microsoft Docs
+ Title: Configure a shared image gallery
description: Learn how to configure a shared image gallery in Azure DevTest Labs, which enables users to access images from a shared location while creating lab resources.-+ Last updated 06/26/2020
devtest-labs Connect Environment Lab Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/connect-environment-lab-virtual-network.md
Title: Connect environments to a lab's vnet in Azure DevTest Labs | Microsoft Docs
+ Title: Connect environments to a lab's vnet
description: Learn how to connect an environment (like Service Fabric cluster) to your lab's virtual network in Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Connect Linux Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/connect-linux-virtual-machine.md
Title: Connect to your Linux virtual machines in Azure DevTest Labs
+ Title: Connect to your Linux virtual machines
description: Learn how to connect to your Linux virtual machine in a lab (Azure DevTest Labs) Last updated 07/17/2020
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/connect-virtual-machine-through-browser.md
Title: Connect to your virtual machines through a browser - Azure | Microsoft Docs
+ Title: Connect to your virtual machines through a browser
description: Learn how to connect to your virtual machines through a browser.-+ Last updated 06/26/2020
devtest-labs Connect Windows Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/connect-windows-virtual-machine.md
Title: Connect to your Windows virtual machines in Azure DevTest Labs
+ Title: Connect to your Windows virtual machines
description: Learn how to connect to your Windows virtual machine in a lab (Azure DevTest Labs) Last updated 07/17/2020
devtest-labs Create Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/create-alerts.md
Title: Create activity log alerts for labs in Azure DevTest Labs
+ Title: Create activity log alerts for labs
description: This article provides steps to create activity log alerts for lab in Azure DevTest Labs. Last updated 07/10/2020
In this example, you create an alert for all administrative operations on a lab
- To learn more about creating action groups using different action types, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md). - To learn more about activity logs, see [Azure Activity Log](../azure-monitor/essentials/activity-log.md). - To learn about setting alerts on activity logs, see [Alerts on activity log](../azure-monitor/alerts/activity-log-alerts.md).-
devtest-labs Create Application Centric Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/create-application-centric-environment.md
Title: Create an application-centric environment - Azure
+ Title: Create an application-centric environment
description: This article demonstrates how to create an application-centric environment with Cloud Shell Colony and Microsoft Azure. Last updated 11/26/2020
devtest-labs Create Environment Service Fabric Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/create-environment-service-fabric-cluster.md
Title: Create a Service Fabric cluster environment in Azure DevTest Labs
+ Title: Create a Service Fabric cluster environment
description: Learn how to create an environment with a self-contained Service Fabric cluster, and start and stop the cluster using schedules. -+ Last updated 06/26/2020
devtest-labs Deliver Proof Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/deliver-proof-concept.md
Title: Deliver a proof of concept - Azure DevTest Labs | Microsoft Docs
+ Title: Deliver a proof of concept
description: Learn how to deliver a proof of concept so Azure DevTest Labs can be successfully incorporated into an enterprise environment.-+ Last updated 06/2/2020
devtest-labs Deploy Nested Template Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/deploy-nested-template-environments.md
Title: Deploy nested template environments in Azure DevTest Labs
+ Title: Deploy nested template environments
description: Learn how to deploy nested Azure Resource Manager templates to provide environments with Azure DevTest Labs. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Add Artifact Repo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-artifact-repo.md
Title: Add a Git repository to a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Add a Git repository to a lab
description: Learn how to add a GitHub or Azure DevOps Services Git repository for your custom artifacts source in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Add Claimable Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-claimable-vm.md
Title: Create and manage claimable VMs in Azure DevTest Labs | Microsoft Docs
+ Title: Create and manage claimable VMs
description: Learn how to use the Azure portal to add a claimable virtual machine in Azure DevTest Labs and see the process to follow to claim/unclaim a virtual machine.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Add Devtest User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-devtest-user.md
Title: Add owners and users in Azure DevTest Labs| Microsoft Docs
+ Title: Add owners and users
description: Add owners and users in Azure DevTest Labs using either the Azure portal or PowerShell-+ Last updated 06/26/2020
devtest-labs Devtest Lab Add Tag https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-tag.md
Title: Add tags to a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Add tags to a lab
description: Learn how to create custom tags in Azure DevTest Labs and use tags to categorize resources. You can see all the resources in your subscription that have a tag.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Add Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-add-vm.md
Title: Add a VM to a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Add a VM to a lab
description: Learn how to use the Azure portal to add a virtual machine to a lab in Azure DevTest Labs. You can choose a base that is either a custom image or a formula.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Announcements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-announcements.md
Title: Post an announcement to a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Post an announcement to a lab
description: Learn how to post a custom announcement in an existing lab to notify users about recent changes or additions to the lab in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Artifact Author https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-artifact-author.md
Title: Create custom artifacts for your DevTest Labs virtual machine | Microsoft Docs
+ Title: Create custom artifacts for your Azure DevTest Labs virtual machine
description: Learn how to create artifacts to use with Azure DevTest Labs to deploy and set up applications after you provision a virtual machine.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
Title: Attach or detach a data disk to a virtual machine in Azure DevTest Labs
+ Title: Attach or detach a data disk to a virtual machine
description: Learn how to attach or detach a data disk to a virtual machine in Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Auto Shutdown https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-auto-shutdown.md
Title: Manage autoshutdown policies in Azure DevTest Labs and Compute VMs | Microsoft Docs
+ Title: Manage autoshutdown policies in Azure DevTest Labs and Compute VMs
description: Learn how to set autoshutdown policy for a lab so that virtual machines are automatically shut down when they aren't in use. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-auto-startup-vm.md
Title: Configure autostart settings for a VM in Azure DevTest Labs | Microsoft Docs
+ Title: Configure autostart settings for a VM
description: Learn how to configure autostart settings for VMs in a lab. This setting allows VMs in the lab to be automatically started on a schedule. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Comparing Vm Base Image Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-comparing-vm-base-image-types.md
Title: Comparing custom images and formulas in DevTest Labs | Microsoft Docs
+ Title: Comparing custom images and formulas
description: Learn about the differences between custom images and formulas as VM bases so you can decide which one best suits your environment. Last updated 08/26/2021
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-concepts.md
Title: DevTest Labs concepts | Microsoft Docs
+ Title: Azure DevTest Labs concepts
description: Learn the basic concepts of DevTest Labs, and how it can make it easy to create, manage, and monitor Azure virtual machines-+ Last updated 05/13/2021
devtest-labs Devtest Lab Configure Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-configure-cost-management.md
Title: View the monthly estimated lab cost trend in Azure DevTest Labs
+ Title: View the monthly estimated lab cost trend
description: This article provides information on how to track the cost of your lab (monthly estimated cost trend chart) in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Configure Marketplace Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-configure-marketplace-images.md
Title: Configure Azure Marketplace image settings in Azure DevTest Labs
+ Title: Configure Azure Marketplace image settings
description: Configure which Azure Marketplace images can be used when creating a VM in Azure DevTest Labs-+ Last updated 06/26/2020
If you aren't able to find a specific image to enable for the lab, follow these
## Next steps Once you've configured how Azure Marketplace images are allowed when creating a VM, the next step is to [add a VM to your lab](devtest-lab-add-vm.md).-
devtest-labs Devtest Lab Configure Use Public Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-configure-use-public-environments.md
Title: Configure and use public environments in Azure DevTest Labs | Microsoft Docs
+ Title: Configure and use public environments
description: This article describes how to configure and use public environments (Azure Resource Manager templates in a Git repo) in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Configure Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-configure-vnet.md
Title: Configure a virtual network in Azure DevTest Labs | Microsoft Docs
+ Title: Configure a virtual network
description: Learn how to configure an existing virtual network and subnet, and use them in a VM with Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Create Custom Image From Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-custom-image-from-vhd-using-powershell.md
Title: Create a custom image from VHD file using Azure PowerShell description: Automate creation of a custom image in Azure DevTest Labs from a VHD file using PowerShell-+ Last updated 06/26/2020
devtest-labs Devtest Lab Create Custom Image From Vm Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-custom-image-from-vm-using-portal.md
Title: Create an Azure DevTest Labs custom image from a VM | Microsoft Docs
+ Title: Create an Azure DevTest Labs custom image from a VM
description: Learn how to create a custom image in Azure DevTest Labs from a provisioned VM using the Azure portal-+ Last updated 06/26/2020
devtest-labs Devtest Lab Create Environment From Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-environment-from-arm.md
Title: Create multi-VM environments and PaaS resources with templates description: Learn how to create multi-VM environments and PaaS resources in Azure DevTest Labs from an Azure Resource Manager template-+ Last updated 08/12/2020
devtest-labs Devtest Lab Create Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-lab.md
Title: Create a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Create a lab
description: This article walks you through the process of creating a lab using the Azure portal and Azure DevTest Labs. -+ Last updated 10/12/2020
Once you've created your lab, here are some next steps to consider:
* [Create a lab template](devtest-lab-create-template.md) * [Create custom artifacts for your VMs](devtest-lab-artifact-author.md) * [Add a VM to a lab](devtest-lab-add-vm.md)-
devtest-labs Devtest Lab Create Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-create-template.md
Title: Create an Azure DevTest Labs custom image from a VHD file | Microsoft Docs
+ Title: Create an Azure DevTest Labs custom image from a VHD file
description: Learn how to create a custom image in Azure DevTest Labs from a VHD file using the Azure portal-+ Last updated 06/26/2020
devtest-labs Devtest Lab Delete Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-delete-lab-vm.md
Title: Delete a lab or VM in a lab in Azure DevTest Labs
+ Title: Delete a lab or VM in a lab
description: This article shows you how to delete a lab or delete a VM in a lab using the Azure portal (Azure DevTest Labs). -+ Last updated 01/24/2020
devtest-labs Devtest Lab Dev Ops https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-dev-ops.md
Title: Integration of Azure DevTest Labs and DevOps | Microsoft Docs
+ Title: Integration of Azure DevTest Labs and DevOps
description: Learn how to use labs in Azure DevTest Labs within continuous integration (CI)/continuous delivery (CD) pipelines in an enterprise environment. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Developer Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-developer-lab.md
Title: Use Azure DevTest Labs for developers | Microsoft Docs
+ Title: Use Azure DevTest Labs for developers
description: Learn about Azure DevTest Labs features that can be used to meet developer requirements and the detailed steps that you can follow to set up a lab.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Enable Licensed Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-enable-licensed-images.md
Title: Enable a licensed image in your lab in Azure DevTest Labs | Microsoft Docs
+ Title: Enable a licensed image in your lab
description: Learn how to enable a licensed image in Azure DevTest Labs using the Azure portal-+ Last updated 06/26/2020
devtest-labs Devtest Lab Grant User Permissions To Specific Lab Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-grant-user-permissions-to-specific-lab-policies.md
Title: Grant user permissions to specific lab policies | Microsoft Docs
+ Title: Grant user permissions to specific lab policies
description: Learn how to grant user permissions to specific lab policies in DevTest Labs based on each user's needs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-get-started.md
Title: Popular scenarios for using Azure DevTest Labs description: This article provides the primary scenarios for using Azure DevTest Labs and two general paths to start using the service in your organization. -+ Last updated 06/20/2020
devtest-labs Devtest Lab Guidance Governance Application Migration Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-application-migration-integration.md
Title: Application migration and integration in Azure DevTest Labs
+ Title: Application migration and integration
description: This article provides guidance for governance of Azure DevTest Labs infrastructure in the context of application migration and integration. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Cost Ownership https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-cost-ownership.md
Title: Manage cost and ownership in Azure DevTest Labs
+ Title: Manage cost and ownership
description: This article provides information that helps you optimize for cost and align ownership across your environment.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Policy Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-policy-compliance.md
Title: Company policy and compliance in Azure DevTest Labs
+ Title: Company policy and compliance
description: This article provides guidance on governing company policy and compliance for Azure DevTest Labs infrastructure. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Governance Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-governance-resources.md
Title: Governance of Azure DevTest Labs infrastructure - Resource
+ Title: Governance of Azure DevTest Labs infrastructure
description: This article addresses the alignment and management of resources for Azure DevTest Labs within your organization. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Orchestrate Implementation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-orchestrate-implementation.md
Title: Orchestrate implementation of Azure DevTest Labs
+ Title: Orchestrate implementation
description: This article provides guidance for orchestrating implementation of Azure DevTest Labs in your organization. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Prescriptive Adoption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-prescriptive-adoption.md
Title: Adopt Azure DevTest Labs for your enterprise description: This article provides prescriptive guidance for using Azure DevTest Labs in your enterprise. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Guidance Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-guidance-scale.md
Title: Scale up your Azure DevTest Labs infrastructure description: This article provides guidance for scaling up your Azure DevTest Labs infrastructure. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
Title: Integrate Azure DevTest Labs into your Azure Pipelines description: Learn how to integrate Azure DevTest Labs into your Azure Pipelines continuous integration and delivery pipeline-+ Last updated 06/26/2020
devtest-labs Devtest Lab Internal Support Message https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-internal-support-message.md
Title: Add an internal support statement to a lab in Azure DevTest Labs
+ Title: Add an internal support statement to a lab
description: Learn how to post an internal support statement to a lab in Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Manage Formulas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-manage-formulas.md
Title: Manage formulas in Azure DevTest Labs to create VMs | Microsoft Docs
+ Title: Manage formulas in Azure DevTest Labs to create VMs
description: This article illustrates how to create a formula from either a base (custom image, Marketplace image, or another formula) or an existing VM.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Mandatory Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-mandatory-artifacts.md
Title: Specify mandatory artifacts for your Azure DevTest Labs | Microsoft Docs
+ Title: Specify mandatory artifacts
description: Learn how to specify mandatory artifacts that need to be installed prior to installing any user-selected artifacts on virtual machines (VMs) in the lab. -+ Last updated 06/26/2020
Now, as a lab user you can view the list of mandatory artifacts while creating a
## Next steps * Learn how to [add a Git artifact repository to a lab](devtest-lab-add-artifact-repo.md).-
devtest-labs Devtest Lab Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-overview.md
Title: About Azure DevTest Labs | Microsoft Docs
+ Title: About Azure DevTest Labs
description: Learn how DevTest Labs can make it easy to create, manage, and monitor Azure virtual machines-+ Last updated 08/20/2021
devtest-labs Devtest Lab Redeploy Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-redeploy-vm.md
Title: Redeploy a VM in a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Redeploy a VM in a lab
description: Learn how to redeploy a virtual machine (move from one Azure node to another) in Azure DevTest Labs. -+ Last updated 06/26/2020
To redeploy a VM in a lab in Azure DevTest Labs, take the following steps:
## Next steps Learn how to resize a VM in Azure DevTest Labs, see [Resize a VM](devtest-lab-resize-vm.md).--
devtest-labs Devtest Lab Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-reference-architecture.md
Title: Enterprise reference architecture for Azure DevTest Labs
+ Title: Enterprise reference architecture
description: This article provides reference architecture guidance for Azure DevTest Labs in an enterprise. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-resize-vm.md
Title: Resize a VM in a lab in Azure DevTest Labs
+ Title: Resize a VM in a lab
description: Learn how to change the size of a virtual machine (VM) in Azure DevTest Labs based on your needs for CPU, network, or disk performance.-+ Last updated 06/26/2020
To resize a VM in a lab in Azure DevTest Labs, take the following steps:
## Next steps For detailed information about the resize feature supported by Azure virtual machines, see [Resize virtual machines](https://azure.microsoft.com/blog/resize-virtual-machines/).--
devtest-labs Devtest Lab Restart Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-restart-vm.md
Title: Restart a VM in a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Restart a VM in a lab
description: This article provides steps to quickly and easily restart virtual machines (VM) in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Scale Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-scale-lab.md
Title: Scale quotas and limits in your lab in Azure DevTest Labs | Microsoft Docs
+ Title: Scale quotas and limits in your lab
description: This article describes how you can scale your lab in Azure DevTest Labs. View your usage quotas and limits, and request an increase. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Set Lab Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-set-lab-policy.md
Title: Manage lab policies in Azure DevTest Labs | Microsoft Docs
+ Title: Manage lab policies
description: Learn how to define lab policies such as VM sizes, maximum VMs per user, and shutdown automation.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Shared Ip https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-shared-ip.md
Title: Understand shared IP addresses in Azure DevTest Labs | Microsoft Docs
+ Title: Understand shared IP addresses
description: Learn how Azure DevTest Labs uses shared IP addresses to minimize the public IP addresses required to access your lab VMs.-+ Last updated 06/26/2020
Whenever a VM with shared IP enabled is added to the subnet, DevTest Labs autom
* [Define lab policies in Azure DevTest Labs](devtest-lab-set-lab-policy.md) * [Configure a virtual network in Azure DevTest Labs](devtest-lab-configure-vnet.md)-----
devtest-labs Devtest Lab Store Secrets In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-store-secrets-in-key-vault.md
Title: Store secrets in a key vault in Azure DevTest Labs | Microsoft Docs
+ Title: Store secrets in a key vault
description: Learn how to store secrets in an Azure Key Vault and use them while creating a VM, formula, or an environment. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Test Env https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-test-env.md
Title: Use Azure DevTest Labs for VM and PaaS test environments | Microsoft Docs
+ Title: Use Azure DevTest Labs for VM and PaaS test environments
description: Learn how to use Azure DevTest Labs for VM and PaaS test environment scenarios.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Training Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-training-lab.md
Title: Use Azure DevTest Labs for training | Microsoft Docs
+ Title: Use Azure DevTest Labs for training
description: This article provides detailed steps that you can follow to set up a lab for training in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Troubleshoot Apply Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-troubleshoot-apply-artifacts.md
Title: Troubleshoot issues with artifacts in Azure DevTest Labs | Microsoft Docs
+ Title: Troubleshoot issues with artifacts
description: Learn how to troubleshoot issues that occur when applying artifacts in an Azure DevTest Labs virtual machine. -+ Last updated 06/26/2020
devtest-labs Devtest Lab Troubleshoot Artifact Failure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-troubleshoot-artifact-failure.md
Title: Diagnose artifact failures in an Azure DevTest Labs virtual machine description: DevTest Labs provides information that you can use to diagnose an artifact failure. This article shows you how to troubleshoot artifact failures. -+ Last updated 06/26/2020
For instructions on finding the log files on a **Linux** VM, see the following a
## Next steps * Learn how to [add a Git repository to a lab](devtest-lab-add-artifact-repo.md).-
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
Title: Upload VHD file to Azure DevTest Labs using AzCopy | Microsoft Docs
+ Title: Upload VHD file to Azure DevTest Labs using AzCopy
description: This article provides a walkthrough to use the AzCopy command-line utility to upload a VHD file to a lab's storage account in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md
Title: Upload VHD file to Azure DevTest Labs using PowerShell | Microsoft Docs
+ Title: Upload VHD file to Azure DevTest Labs using PowerShell
description: This article provides a walkthrough that shows you how to upload a VHD file to Azure DevTest Labs using PowerShell.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
Title: Upload VHD file to Azure DevTest Labs using Storage Explorer description: Upload VHD file to lab's storage account using Microsoft Azure Storage Explorer-+ Last updated 06/26/2020
devtest-labs Devtest Lab Use Arm And Powershell For Lab Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-use-arm-and-powershell-for-lab-resources.md
Title: Create or modify labs using Azure Resource Manager templates description: Learn how to use Azure Resource Manager templates with PowerShell to create or modify labs automatically in a DevTest lab-+ Last updated 06/26/2020
devtest-labs Devtest Lab Use Claim Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-use-claim-capabilities.md
Title: Use claim capabilities in Azure DevTest Labs | Microsoft Docs
+ Title: Use claim capabilities
description: Learn about different scenarios for using claim/unclaim capabilities of Azure DevTest Labs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
Title: View and use a virtual machine's Azure Resource Manager template description: Learn how to use the Azure Resource Manager template from a virtual machine to create other VMs-+ Last updated 06/26/2020
devtest-labs Devtest Lab Vm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-vm-powershell.md
Title: Create a virtual machine in DevTest Labs with Azure PowerShell
+ Title: Create a virtual machine in Azure DevTest Labs with Azure PowerShell
description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure PowerShell.-+ Last updated 06/26/2020
devtest-labs Devtest Lab Vmcli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/devtest-lab-vmcli.md
Title: Create and manage virtual machines in DevTest Labs with Azure CLI
+ Title: Create and manage virtual machines in Azure DevTest Labs with Azure CLI
description: Learn how to use Azure DevTest Labs to create and manage virtual machines with Azure CLI-+ Last updated 06/26/2020
devtest-labs Enable Browser Connection Lab Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/enable-browser-connection-lab-virtual-machines.md
Title: Enable browser connection on Azure DevTest Labs virtual machines description: DevTest Labs now integrates with Azure Bastion. As a lab owner, you can enable access to all lab virtual machines through a browser. -+ Last updated 06/26/2020
devtest-labs Enable Managed Identities Lab Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/enable-managed-identities-lab-vms.md
Title: Enable managed identities on your lab VMs in Azure DevTest Labs
+ Title: Enable managed identities on your lab VMs
description: This article shows how a lab owner can enable user-assigned managed identities on lab virtual machines. -+ Last updated 06/26/2020
To add a user assigned managed identity for lab VMs, follow these steps:
## Next steps To learn more about managed identities, see [What is managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).-------
devtest-labs Encrypt Disks Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/encrypt-disks-customer-managed-keys.md
Title: Encrypt OS disks using customer-managed keys in Azure DevTest Labs
+ Title: Encrypt OS disks using customer-managed keys
description: Learn how to encrypt operating system (OS) disks using customer-managed keys in Azure DevTest Labs. -+ Last updated 09/01/2020
devtest-labs Encrypt Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/encrypt-storage.md
Title: Encrypt an Azure storage account used by a lab in Azure DevTest Labs
+ Title: Encrypt an Azure storage account used by a lab
description: Learn how to configure encryption of an Azure storage account used by a lab in Azure DevTest Labs Last updated 07/29/2020
devtest-labs Environment Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/environment-security-alerts.md
Title: Security alerts for environments in Azure DevTest Labs
+ Title: Security alerts for environments
description: This article shows you how to view security alerts for an environment in DevTest Labs and take an appropriate action. -+ Last updated 06/26/2020
devtest-labs Extend Devtest Labs Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/extend-devtest-labs-azure-functions.md
Title: Extend Azure DevTest Labs using Azure Functions | Microsoft Docs
+ Title: Extend Azure DevTest Labs using Azure Functions
description: Learn how to extend Azure DevTest Labs using Azure Functions. -+ Last updated 06/26/2020
Azure Functions can help extend the functionality of DevTest Labs beyond what's
- [Frequently Asked Questions](devtest-lab-faq.yml) - [Scaling up DevTest Labs](devtest-lab-guidance-scale.md) - [Automating DevTest Labs with PowerShell](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/Modules/Library/Tests)--------
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-create.md
Title: Create an image factory in Azure DevTest Labs | Microsoft Docs
+ Title: Create an image factory
description: This article shows you how to set up a custom image factory by using sample scripts available in the Git repository (Azure DevTest Labs). -+ Last updated 06/26/2020
devtest-labs Image Factory Save Distribute Custom Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-save-distribute-custom-images.md
Title: Save and distribute images in Azure DevTest Labs | Microsoft Docs
+ Title: Save and distribute images
description: This article gives you the steps to save custom images from the already created virtual machines (VMs) in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Image Factory Set Retention Policy Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-set-retention-policy-cleanup.md
Title: Set up retention policy in Azure DevTest Labs | Microsoft Docs
+ Title: Set up retention policy
description: Learn how to configure a retention policy, clean up the factory, and retire old images from DevTest Labs. -+ Last updated 06/26/2020
Adding a new image to your factory is also simple. When you want to include a ne
1. [Schedule your build/release](/azure/devops/pipelines/build/triggers?tabs=designer) to run the image factory periodically. It refreshes your factory-generated images on a regular basis. 2. Make more golden images for your factory. You may also consider [creating artifacts](devtest-lab-artifact-author.md) to script additional pieces of your VM setup tasks and include the artifacts in your factory images. 4. Create a [separate build/release](/azure/devops/pipelines/overview) to run the **DistributeImages** script separately. You can run this script when you make changes to Labs.json and get images copied to target labs without having to recreate all the images again.-
devtest-labs Image Factory Set Up Devops Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-set-up-devops-lab.md
Title: Run an image factory from Azure DevOps in Azure DevTest Labs
+ Title: Run an image factory from Azure DevOps
description: This article covers all the preparations needed to run the image factory from Azure DevOps (formerly Visual Studio Team Services).-+ Last updated 06/26/2020
devtest-labs Import Virtual Machines From Another Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/import-virtual-machines-from-another-lab.md
Title: Import virtual machines from another lab in Azure DevTest Labs
+ Title: Import virtual machines from another lab
description: This article describes how to import virtual machines from another lab into the current lab in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Integrate Environments Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/integrate-environments-devops-pipeline.md
Title: Integrate environments into Azure Pipelines in Azure DevTest Labs
+ Title: Integrate environments into Azure Pipelines
description: Learn how to integrate Azure DevTest Labs environments into your Azure DevOps continuous integration (CI) and continuous delivery (CD) pipelines. -+ Last updated 06/26/2020
See the following articles:
- [Create multi-VM environments with Resource Manager templates](devtest-lab-create-environment-from-arm.md). - Quickstart Resource Manager templates for DevTest Labs automation from the [DevTest Labs GitHub repository](https://github.com/Azure/azure-quickstart-templates). - [VSTS Troubleshooting page](/azure/devops/pipelines/troubleshooting)-
devtest-labs Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/network-isolation.md
Title: Network isolation in Azure DevTest Labs
+ Title: Network isolation
description: Learn about network isolation in Azure DevTest Labs.-+ Last updated 08/25/2020
devtest-labs Personal Data Delete Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/personal-data-delete-export.md
Title: How to delete and export personal data from Azure DevTest Labs
+ Title: How to delete and export personal data
description: Learn how to delete and export personal data from the Azure DevTest Labs service to support your obligations under the General Data Protection Regulation (GDPR). -+ Last updated 06/26/2020
devtest-labs Report Usage Across Multiple Labs Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/report-usage-across-multiple-labs-subscriptions.md
Title: Azure DevTest Labs usage across multiple labs and subscriptions description: Learn how to report Azure DevTest Labs usage across multiple labs and subscriptions.-+ Last updated 06/26/2020
devtest-labs Resource Group Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/resource-group-control.md
Title: Specify resource group for VMs in Azure DevTest Labs | Microsoft Docs
+ Title: Specify resource group for VMs
description: Learn how to specify a resource group for VMs in a lab in Azure DevTest Labs. -+ Last updated 06/26/2020
devtest-labs Samples Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/samples-cli.md
Title: Azure CLI Samples for Azure Lab Services | Microsoft Docs
+ Title: Azure CLI Samples
description: This article provides a list of Azure CLI scripting samples that help you manage labs in Azure Lab Services.-+ Last updated 06/26/2020
devtest-labs Samples Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/samples-powershell.md
Title: Azure PowerShell Samples for Azure Lab Services | Microsoft Docs
+ Title: Azure PowerShell Samples
description: Azure PowerShell Samples - Scripts to help you manage labs in Azure Lab Services-+ Last updated 06/26/2020
The following table includes links to sample Azure PowerShell scripts for Azure
|[Create a custom image from a VHD](scripts/create-custom-image-from-vhd.md)| This PowerShell script creates a custom image in a lab in Azure DevTest Labs. |
|[Create a custom role in a lab](scripts/create-custom-role-in-lab.md)| This PowerShell script creates a custom role in a lab in Azure Lab Services. |
|[Set allowed VM sizes in a lab](scripts/set-allowed-vm-sizes-in-lab.md)| This PowerShell script sets allowed virtual machine (VM) sizes in a lab. |-
devtest-labs Add External User To Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/add-external-user-to-lab.md
Title: PowerShell - Add an external user to a lab in Azure DevTest Labs
+ Title: PowerShell - Add an external user to a lab
description: This article provides an Azure PowerShell script that adds an external user to a lab in Azure DevTest Labs. ms.devlang: azurecli
devtest-labs Add Marketplace Images To Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/add-marketplace-images-to-lab.md
Title: PowerShell - Add a marketplace image to a lab in Azure DevTest Labs
+ Title: PowerShell - Add a marketplace image to a lab
description: This PowerShell script adds a marketplace image to a lab in Azure DevTest Labs. ms.devlang: azurecli
devtest-labs Create Custom Image From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/create-custom-image-from-vhd.md
Title: PowerShell - Create custom image from VHD file in Azure Lab Services
+ Title: PowerShell - Create custom image from VHD file
description: This PowerShell script creates a custom image from a VHD file in Azure Lab Services. ms.devlang: azurecli
devtest-labs Create Custom Role In Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/create-custom-role-in-lab.md
Title: PowerShell - Create a custom role in a lab in Azure DevTest Labs
+ Title: PowerShell - Create a custom role in a lab
description: This article provides an Azure PowerShell script that creates a custom role in a lab in Azure DevTest Labs. ms.devlang: azurecli
devtest-labs Set Allowed Vm Sizes In Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/set-allowed-vm-sizes-in-lab.md
Title: "PowerShell script: Set allowed VM sizes in Azure Lab Services | Microsoft Docs"
+ Title: "PowerShell script: Set allowed VM sizes"
description: This article includes a sample PowerShell script that sets allowed virtual machine (VM) sizes in Azure Lab Services. ms.devlang: azurecli
devtest-labs Start Connect Virtual Machine In Lab Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/scripts/start-connect-virtual-machine-in-lab-cli.md
Title: Azure CLI Script Sample - Start a virtual machine in a lab | Microsoft Docs
+ Title: Azure CLI Script Sample - Start a virtual machine in a lab
description: This Azure CLI script starts a virtual machine in a lab in Azure DevTest Labs. ms.devlang: azurecli
devtest-labs Start Machines Use Automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/start-machines-use-automation-runbooks.md
Title: Start machines using Automation runbooks in Azure DevTest Labs
+ Title: Start machines using Automation runbooks
description: Learn how to start virtual machines in a lab in Azure DevTest Labs by using Azure Automation runbooks. -+ Last updated 06/26/2020
devtest-labs Test App Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/test-app-azure.md
Title: How to test your app in Azure | Microsoft Docs
+ Title: How to test your app in Azure
description: Learn how to create a file share in a lab and mount it on your local machine and a virtual machine in the lab, and then deploy desktop/web applications to the file share and test them. -+ Last updated 06/26/2020
devtest-labs Troubleshoot Vm Environment Creation Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/troubleshoot-vm-environment-creation-failures.md
Title: Troubleshoot VM and environment failures Azure DevTest Labs
+ Title: Troubleshoot VM and environment failures
description: Learn how to troubleshoot virtual machine (VM) and environment creation failures in Azure DevTest Labs.-+ Last updated 06/26/2020
devtest-labs Tutorial Create Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/tutorial-create-custom-lab.md
Title: Create a lab using Azure DevTest Labs | Microsoft Docs
+ Title: Create a lab tutorial
description: In this tutorial, you create a lab in Azure DevTest Labs by using the Azure portal. A lab admin sets up a lab, creates VMs in the lab, and configures policies. Last updated 06/26/2020
In this tutorial, you created a lab with a VM and gave a user access to the lab.
> [!div class="nextstepaction"] > [Tutorial: Access the lab](tutorial-use-custom-lab.md)-
devtest-labs Tutorial Use Custom Lab https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/tutorial-use-custom-lab.md
Title: Access a lab in Azure DevTest Labs | Microsoft Docs
+ Title: Access a lab
description: In this tutorial, you access the lab that's created by using Azure DevTest Labs, claim virtual machines, use them, and then unclaim them. Last updated 06/26/2020
This tutorial showed you how to access and use a lab that was created by using A
> [!div class="nextstepaction"] > [How to: Use VMs in a lab](devtest-lab-add-vm.md)-
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
Title: Use command-line tools to start and stop VMs Azure DevTest Labs
+ Title: Use command-line tools to start and stop VMs
description: Learn how to use command-line tools to start and stop virtual machines in Azure DevTest Labs. -+ Last updated 06/26/2020
devtest-labs Use Devtest Labs Build Release Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-devtest-labs-build-release-pipelines.md
Title: Use DevTest Labs in Azure Pipelines build and release pipelines description: Learn how to use Azure DevTest Labs in Azure Pipelines build and release pipelines. -+ Last updated 06/26/2020
devtest-labs Use Managed Identities Environments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-managed-identities-environments.md
Title: Use Azure managed identities to create environments in DevTest Labs | Microsoft Docs
+ Title: Use Azure managed identities to create environments
description: Learn how to use managed identities in Azure to deploy environments in a lab in Azure DevTest Labs. -+ Last updated 06/26/2020
devtest-labs Use Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/use-paas-services.md
Title: Use Platform-as-a-Service (PaaS) services in Azure DevTest Labs
+ Title: Use Platform-as-a-Service (PaaS) services
description: Learn how to use Platform-as-a-Service (PaaS) services in Azure DevTest Labs. -+ Last updated 06/26/2020
See the following articles for details about environments:
- [Connect an environment to your lab's virtual network in Azure DevTest Labs](connect-environment-lab-virtual-network.md) - [Integrate environments into your Azure DevOps CI/CD pipelines](integrate-environments-devops-pipeline.md) -----
event-grid Monitor Virtual Machine Changes Event Grid Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md
Now add an [*action*](../logic-apps/logic-apps-overview.md#logic-app-concepts) s
1. To check that your logic app is getting the specified events, update your virtual machine.
- For example, you can resize your virtual machine in the Azure portal or [resize your VM with Azure PowerShell](../virtual-machines/windows/resize-vm.md).
+ For example, you can [resize your virtual machine](../virtual-machines/resize-vm.md).
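To make that step concrete, the following Azure PowerShell sketch performs such a resize (it assumes the Az module and uses the hypothetical names `myResourceGroup` and `myVM`); the resulting write operation on the VM is what Event Grid delivers to the logic app:

```powershell
# Hypothetical resource names; replace with your own resource group and VM.
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"

# Change the size and push the update; this administrative operation
# raises the resource-change event the logic app is subscribed to.
$vm.HardwareProfile.VmSize = "Standard_DS2_v2"
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm
```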
After a few moments, you should get an email. For example:
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/faq.md
Alternative way to lock down your application to accept traffic only from your s
</configuration> ```
+* Azure Front Door also supports additional service tags, *AzureFrontDoor.Frontend* and *AzureFrontDoor.FirstParty*, to integrate internally with other Azure services. See [available service tags](../../virtual-network/service-tags-overview.md#available-service-tags) for more details on Azure Front Door service tags use cases.
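As a rough illustration of the service-tag approach (this example is not from the article; the resource group and NSG names are hypothetical, and it uses the *AzureFrontDoor.Backend* tag commonly used to restrict origin traffic), a network security group rule can reference the tag directly:

```powershell
# Sketch only; requires the Az.Network module and an existing NSG.
Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg" |
    Add-AzNetworkSecurityRuleConfig -Name "AllowFrontDoorOnly" `
        -Description "Allow HTTPS only from Azure Front Door" `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix "AzureFrontDoor.Backend" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange 443 |
    Set-AzNetworkSecurityGroup
```

Pairing a rule like this with a filter on the `X-Azure-FDID` request header, which Front Door adds to incoming requests, restricts traffic to your specific Front Door instance rather than any Front Door.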
+ ### Can the anycast IP change over the lifetime of my Front Door? The frontend anycast IP for your Front Door should typically not change and may remain static for the lifetime of the Front Door. However, this is **not guaranteed**; don't take any direct dependencies on the IP.
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/deploy.md
Title: Deploy Azure Security Benchmark Foundation blueprint sample description: Deploy steps for the Azure Security Benchmark Foundation blueprint sample including blueprint artifact parameter details. Previously updated : 03/12/2021 Last updated : 09/08/2021 # Deploy the Azure Security Benchmark Foundation blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark-foundation/index.md
Title: Azure Security Benchmark Foundation blueprint sample overview description: Overview and architecture of the Azure Security Benchmark Foundation blueprint sample. Previously updated : 03/12/2021 Last updated : 09/08/2021 # Overview of the Azure Security Benchmark Foundation blueprint sample
The Azure Security Benchmark Foundation blueprint sample provides a set of baseline infrastructure patterns to help you build a secure and compliant Azure environment. The blueprint helps you deploy a cloud-based architecture that offers solutions to scenarios that have accreditation or compliance
-requirements. This foundational blueprint sample is an extension of the [Azure Security Benchmark
-sample blueprint](../azure-security-benchmark.md). It deploys and configures network boundaries,
-monitoring, and other resources in alignment with the policies and other guardrails defined in the
+requirements. It deploys and configures network boundaries, monitoring, and other resources in
+alignment with the policies and other guardrails defined in the
[Azure Security Benchmark](../../../../security/benchmarks/index.yml). ## Architecture
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/azure-security-benchmark.md
- Title: Azure Security Benchmark blueprint sample overview
-description: Overview of the Azure Security Benchmark blueprint sample. This blueprint sample helps customers assess specific controls.
Previously updated : 04/02/2021--
-# Azure Security Benchmark blueprint sample
-
-The Azure Security Benchmark blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific
-[Azure Security Benchmark v1](../../../security/benchmarks/overview.md) controls. This blueprint
-helps customers deploy a core set of policies for any Azure-deployed architecture where they intend
-to implement Azure Security Benchmark controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/azure-security-benchmark.md) provides
-details on policy definitions included within this blueprint and how these policy definitions map to
-the **compliance domains** and **controls** in the Azure Security Benchmark. When assigned to an
-architecture, resources are evaluated by Azure Policy for non-compliance with assigned policy
-definitions. For more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints Azure Security Benchmark blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **Azure Security Benchmark v1** blueprint sample under _Other Samples_ and select the
- name to select this sample.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the Azure Security Benchmark blueprint
- sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that are included in the blueprint sample. Many of the artifacts
- have parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with Azure Security Benchmark recommendations.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful for if you make a
- modification later. Provide **Change notes** such as "First version published from the Azure
- Security Benchmark blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
- defined during the assignment of the blueprint. For a full list or artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of users excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of users that Windows VM Administrators group must *only* include|A semicolon-separated list of all the expected members of the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of regions where Network Watcher should be enabled|To see a complete list of regions use Get-AzLocation|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Virtual network where VMs should be connected|Example: /subscriptions/YourSubscriptionId/resourceGroups/YourResourceGroupName/providers/Microsoft.Network/virtualNetworks/Name|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Network gateway that virtual networks should use|Example: /subscriptions/YourSubscriptionId/resourceGroups/YourResourceGroup/providers/Microsoft.Network/virtualNetworkGateways/Name|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of workspace IDs where Log Analytics agents should connect|A semicolon-separated list of the workspace IDs that the Log Analytics agent should be connected to|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|List of resource types that should have diagnostic logs enabled|Audit diagnostic setting for selected resource types|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Latest PHP version|Latest supported PHP version for App Services|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Latest Java version|Latest supported Java version for App Services|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Latest Windows Python version|Latest supported Python version for App Services|
-|Audit Azure Security Benchmark recommendations and deploy specific supporting VM Extensions|Policy assignment|Latest Linux Python version|Latest supported Python version for App Services|
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
-- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/canada-federal-pbmm.md
Title: Canada Federal PBMM blueprint sample description: Overview of the Canada Federal PBMM blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 05/04/2021 Last updated : 09/08/2021 # Canada Federal PBMM blueprint sample
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-1-0.md
Title: CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 03/11/2021 Last updated : 09/08/2021 # CIS Microsoft Azure Foundations Benchmark v1.1.0 blueprint sample
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cis-azure-1-3-0.md
Title: CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample description: Overview of the CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 03/11/2021 Last updated : 09/08/2021 # CIS Microsoft Azure Foundations Benchmark v1.3.0 blueprint sample
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/cmmc-l3.md
Title: CMMC Level 3 blueprint sample description: Overview of the CMMC Level 3 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 03/24/2021 Last updated : 09/08/2021 # CMMC Level 3 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md
- Title: DoD Impact Level 4 blueprint sample controls
-description: Control mapping of the DoD Impact Level 4 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 04/02/2021
-# Control mapping of the DoD Impact Level 4 blueprint sample
-
-The following article details how the Azure Blueprints Department of Defense Impact Level 4 (DoD
-IL4) blueprint sample maps to the DoD Impact Level 4 controls. For more information about the
-controls, see
-[DoD Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/pdf/Cloud_Computing_SRG_v1r3.pdf).
-The Defense Information Systems Agency (DISA) is an agency of the US Department of Defense (DoD)
-that is responsible for developing and maintaining the DoD Cloud Computing Security Requirements
-Guide (SRG). The SRG defines the baseline security requirements for cloud service providers (CSPs)
-that host DoD information, systems, and applications, and for DoD's use of cloud services.
-
-The following mappings are to the **DoD Impact Level 4** controls. Use the navigation on the right to jump
-directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: DoD Impact Level 4** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/control-mapping.md).
-
-## AC-2 Account Management
-
-This blueprint helps you review accounts that may not comply with your organization's account
-management requirements. This blueprint assigns [Azure Policy](../../../policy/overview.md)
-definitions that audit external accounts with read, write, and owner permissions on a subscription
-and deprecated accounts. By reviewing the accounts audited by these policies, you can take
-appropriate action to ensure account management requirements are met.
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-
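
As an illustrative companion to these audit policies (not part of the blueprint itself), the following Azure PowerShell sketch lists role assignments held by guest accounts. It assumes the Az.Resources module and that external accounts in your tenant carry the usual '#EXT#' marker in their sign-in name.

```powershell
# Minimal sketch: review role assignments granted to external (guest) accounts.
Get-AzRoleAssignment |
    Where-Object { $_.SignInName -like '*#EXT#*' } |
    Select-Object DisplayName, SignInName, RoleDefinitionName, Scope
```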
-## AC-2 (7) Account Management | Role-Based Schemes
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
-access to Azure resources and their permissions. This blueprint also assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory
-authentication for SQL Servers and Service Fabric. Using Azure Active Directory authentication
-enables simplified permission management and centralized identity management of database users and
-other Microsoft services. Additionally, this blueprint assigns an Azure Policy definition to audit
-the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implemented can
-help you verify need and proper implementation, as custom Azure RBAC rules are error prone.
-- An Azure Active Directory administrator should be provisioned for SQL servers
-- Audit usage of custom RBAC rules
-- Service Fabric clusters should only use Azure Active Directory for client authentication
-
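
For a quick manual spot check that complements the "Audit usage of custom RBAC rules" policy, a sketch along these lines lists the custom role definitions visible in the current subscription (assumes the Az.Resources module).

```powershell
# Minimal sketch: list custom Azure RBAC role definitions for review.
Get-AzRoleDefinition |
    Where-Object { $_.IsCustom } |
    Select-Object Name, Id, Description
```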
-## AC-2 (12) Account Management | Account Monitoring / Atypical Usage
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. All JIT
-requests to access virtual machines are logged in the Activity Log allowing you to monitor for
-atypical usage. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition
-that helps you monitor virtual machines that can support just-in-time access but haven't yet been
-configured.
-- Just-In-Time network access control should be applied on virtual machines
-
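
To see which just-in-time policies already exist before acting on the audit results, one option is the Az.Security module; the sketch below is illustrative only.

```powershell
# Minimal sketch: enumerate existing just-in-time network access policies.
# Each policy object also lists the virtual machines it covers; VMs that appear
# in no policy are candidates for just-in-time configuration.
Get-AzJitNetworkAccessPolicy | Select-Object Name, Location
```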
-## AC-4 Information Flow Enforcement
-
-Cross origin resource sharing (CORS) can allow App Services resources to be requested from an
-outside domain. Microsoft recommends that you allow only required domains to interact with your API,
-function, and web applications. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition to help you monitor CORS resources access
-restrictions in Azure Security Center. Understanding CORS implementations can help you verify that
-information flow controls are implemented.
-- CORS should not allow every resource to access your Web Application
-
-## AC-5 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions that audit the number of owners
-for Azure subscriptions. This blueprint also assigns Azure Policy definitions that help you control
-membership of the Administrators group on Windows virtual machines. Managing subscription owner and
-virtual machine administrator permissions can help you implement appropriate separation of duties.
-- A maximum of 3 owners should be designated for your subscription
-- Audit Windows VMs in which the Administrators group contains any of the specified members
-- Audit Windows VMs in which the Administrators group does not contain all of the specified members
-- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the
- specified members
-- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of
- the specified members
-- There should be more than one owner assigned to your subscription
-
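
A quick way to sanity-check the owner-count recommendations outside of Azure Policy is sketched below; it assumes the Az.Resources module, and the subscription ID is a placeholder.

```powershell
# Minimal sketch: count Owner role assignments at the subscription scope.
$scope  = '/subscriptions/00000000-0000-0000-0000-000000000000'
$owners = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName 'Owner' |
    Where-Object { $_.Scope -eq $scope }   # ignore assignments inherited from management groups

# The assigned policies expect more than one owner and no more than three.
"Subscription owners: $($owners.Count)"
```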
-## AC-6 (7) Least Privilege | Review of User Privileges
-
-Azure implements [Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md)
-to help you manage who has access to resources in Azure. Using the Azure portal, you can
-review who has access to Azure resources and their permissions. This blueprint assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts that should be prioritized
-for review. Reviewing these account indicators can help you ensure least privilege controls are
-implemented.
-- A maximum of 3 owners should be designated for your subscription
-- Audit Windows VMs in which the Administrators group contains any of the specified members
-- Audit Windows VMs in which the Administrators group does not contain all of the specified members
-- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the
- specified members
-- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of
- the specified members
-- There should be more than one owner assigned to your subscription
-
-## AC-17 (1) Remote Access | Automated Monitoring / Control
-
-This blueprint helps you monitor and control remote access by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor whether remote debugging for Azure
-App Service applications is turned off, and policy definitions that audit Linux virtual machines that
-allow remote connections from accounts without passwords. This blueprint also assigns an Azure
-Policy definition that helps you monitor unrestricted access to storage accounts. Monitoring these
-indicators can help you ensure remote access methods comply with your security policy.
-- \[Preview\]: Audit Linux VMs that allow remote connections from accounts without passwords
-- \[Preview\]: Deploy requirements to audit Linux VMs that allow remote connections from accounts
- without passwords
-- Audit unrestricted network access to storage accounts
-- Remote debugging should be turned off for API App
-- Remote debugging should be turned off for Function App
-- Remote debugging should be turned off for Web Application
-
-## AC-23 Data Mining
-
-This blueprint provides policy definitions that help you ensure data security notifications are
-properly enabled. In addition, this blueprint ensures that auditing and advanced data security are
-configured on SQL Servers.
-- Advanced data security should be enabled on your SQL servers
-- Advanced data security should be enabled on your SQL managed instances
-- Advanced Threat Protection types should be set to 'All' in SQL server Advanced Data Security settings
-- Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data
- Security settings
-- Auditing should be enabled on advanced data security settings on SQL Server
-- Email notifications to admins and subscription owners should be enabled in SQL server advanced
- data security settings
-- Email notifications to admins and subscription owners should be enabled in SQL managed instance
- advanced data security settings
-- Advanced data security settings for SQL server should contain an email address to receive security
- alerts
-- Advanced data security settings for SQL managed instance should contain an email address to
- receive security alerts
-
-## AU-3 (2) Content of Audit Records | Centralized Management of Planned Audit Record Content
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-configuration and management. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- Deploy Log Analytics agent for Linux virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- Deploy Log Analytics agent for Windows virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-
-## AU-5 Response to Audit Processing Failures
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that monitor
-audit and event logging configurations. Monitoring these configurations can provide an indicator of
-an audit system failure or misconfiguration and help you take corrective action.
-- Audit diagnostic setting
-- Auditing should be enabled on advanced data security settings on SQL Server
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-
-## AU-6 (4) Audit Review, Analysis, and Reporting | Central Review and Analysis
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-reporting and analysis. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- Deploy Log Analytics agent for Linux virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- Deploy Log Analytics agent for Windows virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-
-## AU-6 (5) Audit Review, Analysis, and Reporting | Integration / Scanning and Monitoring Capabilities
-
-This blueprint provides policy definitions that audit the results of vulnerability assessments
-on virtual machines, virtual machine scale sets, SQL Database servers, and SQL Managed
-Instance servers. These policy definitions also audit configuration of diagnostic logs to provide
-insight into operations that are performed within Azure resources. These insights provide real-time
-information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
-Azure Sentinel and Azure Security Center as well.
-- \[Preview\]: Vulnerability Assessment should be enabled on Virtual Machines
-- Vulnerability assessment should be enabled on your SQL servers
-- Audit diagnostic setting
-- Vulnerability assessment should be enabled on your SQL managed instances
-- Vulnerability assessment should be enabled on your SQL servers
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-
-## AU-12 Audit Generation
-
-This blueprint provides policy definitions that audit and enforce deployment of the Log Analytics
-agent on Azure virtual machines and configuration of audit settings for other Azure resource types.
-These policy definitions also audit configuration of diagnostic logs to provide insight into
-operations that are performed within Azure resources. Additionally, auditing and Advanced Data
-Security are configured on SQL servers.
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- Deploy Log Analytics agent for Linux virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- Deploy Log Analytics agent for Windows virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-- Audit diagnostic setting
-- Auditing should be enabled on advanced data security settings on SQL Server
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy Auditing on SQL servers
-- Deploy Diagnostic Settings for Network Security Groups
-
-## AU-12 (01) Audit Generation | System-Wide / Time-Correlated Audit Trail
-
-This blueprint helps you ensure system events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-This built-in policy requires you to specify an array of resource types to check whether diagnostic
-settings are enabled or not.
-- Audit diagnostic setting
-
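
To inspect the diagnostic settings of an individual resource, something like the following can be used (Az.Monitor module; the resource ID is a placeholder).

```powershell
# Minimal sketch: an empty result means no diagnostic setting is configured,
# which is the condition the 'Audit diagnostic setting' policy reports on.
$resourceId = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/example-kv'
Get-AzDiagnosticSetting -ResourceId $resourceId
```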
-## CM-7 (2) Least Functionality | Prevent Program Execution
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control can run in an enforcement mode that prohibits non-approved
-applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowlist is recommended but has not yet been configured.
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CM-7 (5) Least Functionality | Authorized Software / Whitelisting
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control helps you create approved application lists for your virtual
-machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowlist is recommended but has not yet
-been configured.
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CM-11 User-Installed Software
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control can help you enforce and monitor compliance with software
-restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowlist is recommended but
-has not yet been configured.
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CP-7 Alternate Processing Site
-
-Azure Site Recovery replicates workloads running on virtual machines from a primary location to a
-secondary location. If an outage occurs at the primary site, the workload fails over to the secondary
-location. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-audits virtual machines without disaster recovery configured. Monitoring this indicator can help you
-ensure necessary contingency controls are in place.
-- Audit virtual machines without disaster recovery configured
-
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
-
-This blueprint assigns Azure Policy definitions that audit whether the organization's system backup
-information is transferred electronically to an alternate storage site. For physical shipment of storage metadata,
-consider using Azure Data Box.
-- Geo-redundant storage should be enabled for Storage Accounts
-- Geo-redundant backup should be enabled for Azure Database for PostgreSQL
-- Geo-redundant backup should be enabled for Azure Database for MySQL
-- Geo-redundant backup should be enabled for Azure Database for MariaDB
-- Long-term geo-redundant backup should be enabled for Azure SQL Databases
-
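
As a rough illustrative check of the storage-related policies, the sketch below lists storage accounts whose SKU name does not indicate geo-redundant replication (assumes the Az.Storage module).

```powershell
# Minimal sketch: geo-redundant SKUs contain 'GRS' or 'GZRS' in the SKU name.
Get-AzStorageAccount |
    Select-Object StorageAccountName, ResourceGroupName, @{ n = 'Sku'; e = { $_.Sku.Name } } |
    Where-Object { $_.Sku -notmatch 'GRS|GZRS' }
```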
-## IA-2 (1) Identification and Authentication (Organizational Users) | Network Access to Privileged Accounts
-
-This blueprint helps you restrict and control privileged access by assigning
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts with owner and/or write
-permissions that don't have multi-factor authentication enabled. Multi-factor authentication helps
-keep accounts secure even if one piece of authentication information is compromised. By monitoring
-accounts without multi-factor authentication enabled, you can identify accounts that may be more
-likely to be compromised.
-- MFA should be enabled on accounts with owner permissions on your subscription
-- MFA should be enabled on accounts with write permissions on your subscription
-
-## IA-2 (2) Identification and Authentication (Organizational Users) | Network Access to Non-Privileged Accounts
-
-This blueprint helps you restrict and control access by assigning an
-[Azure Policy](../../../policy/overview.md) definition to audit accounts with read permissions that
-don't have multi-factor authentication enabled. Multi-factor authentication helps keep accounts
-secure even if one piece of authentication information is compromised. By monitoring accounts
-without multi-factor authentication enabled, you can identify accounts that may be more likely to be
-compromised.
-- MFA should be enabled on accounts with read permissions on your subscription
-
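
Compliance results for these MFA policies, and for any other assigned definition, can be pulled with the Az.PolicyInsights module; the sketch below is a generic illustration rather than a prescribed procedure.

```powershell
# Minimal sketch: list non-compliant policy records for the current subscription.
# Narrow the output further by filtering on PolicyDefinitionName if needed.
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object PolicyAssignmentName, PolicyDefinitionName, ResourceId -First 20
```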
-## IA-5 Authenticator Management
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that audit Linux
-virtual machines that allow remote connections from accounts without passwords and/or have incorrect
-permissions set on the passwd file. This blueprint also assigns policy definitions that audit the
-configuration of the password encryption type for Windows virtual machines. Monitoring these
-indicators helps you ensure that system authenticators comply with your organization's
-identification and authentication policy.
-- \[Preview\]: Audit Linux VMs that do not have the passwd file permissions set to 0644
-- \[Preview\]: Audit Linux VMs that have accounts without passwords
-- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Linux VMs that do not have the passwd file permissions
- set to 0644
-- \[Preview\]: Deploy requirements to audit Linux VMs that have accounts without passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible
- encryption
-
-## IA-5 (1) Authenticator Management | Password-Based Authentication
-
-This blueprint helps you enforce strong passwords by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit Windows virtual machines that
-don't enforce minimum strength and other password requirements. Awareness of virtual machines in
-violation of the password strength policy helps you take corrective actions to ensure passwords for
-all virtual machine user accounts comply with your organization's password policy.
-- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Audit Windows VMs that do not have a minimum password age of 1 day
-- \[Preview\]: Audit Windows VMs that do not have the password complexity setting enabled
-- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24
- passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of
- 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a minimum password age of 1
- day
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have the password complexity
- setting enabled
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password
- length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible
- encryption
-
-## IR-6 (2) Incident Reporting | Vulnerabilities Related to Incidents
-
-This blueprint provides policy definitions that audit the results of vulnerability assessments
-on virtual machines, virtual machine scale sets, and SQL servers. These insights provide
-real-time information about the security state of your deployed resources and can help you
-prioritize remediation actions.
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities in container security configurations should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-
-## RA-5 Vulnerability Scanning
-
-This blueprint helps you manage information system vulnerabilities by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor operating system
-vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
-Azure Security Center provides reporting capabilities that enable you to have real-time insight into
-the security state of deployed Azure resources. This blueprint also assigns policy definitions that
-audit and enforce Advanced Data Security on SQL servers. Advanced data security includes
-vulnerability assessment and advanced threat protection capabilities to help you understand
-vulnerabilities in your deployed resources.
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your virtual machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SC-5 Denial of Service Protection
-
-Azure's distributed denial of service (DDoS) Standard tier provides additional features and
-mitigation capabilities over the basic service tier. These additional features include Azure Monitor
-integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
-enabled. Understanding the capability difference between the service tiers can help you select the
-best solution to address denial of service protections for your Azure environment.
-- DDoS Protection Standard should be enabled
-
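
To see whether a DDoS protection plan already exists in the subscription, a sketch like the following can help (Az.Network module; an empty result suggests the Standard tier is not in use).

```powershell
# Minimal sketch: enumerate DDoS protection plans and where they are deployed.
Get-AzDdosProtectionPlan | Select-Object Name, ResourceGroupName, Location
```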
-## SC-7 Boundary Protection
-
-This blueprint helps you manage and control the system boundary by assigning an
-[Azure Policy](../../../policy/overview.md) definition that monitors for network security group
-hardening recommendations in Azure Security Center. Azure Security Center analyzes traffic patterns
-of Internet facing virtual machines and provides network security group rule recommendations to
-reduce the potential attack surface. Additionally, this blueprint also assigns policy definitions
-that monitor unprotected endpoints, applications, and storage accounts. Endpoints and applications
-that aren't protected by a firewall, and storage accounts with unrestricted access can allow
-unintended access to information contained within the information system.
-- Network Security Group Rules for Internet facing virtual machines should be hardened
-- Access through Internet facing endpoint should be restricted
-- The NSGs rules for web applications on IaaS should be hardened
-- Audit unrestricted network access to storage accounts
-
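
For a spot check of a single storage account's network restrictions, related to the "Audit unrestricted network access to storage accounts" policy, a sketch such as this can be used; names are placeholders and the Az.Storage module is assumed.

```powershell
# Minimal sketch: 'Deny' means only explicitly allowed networks can reach the
# account; 'Allow' corresponds to unrestricted network access.
$rules = Get-AzStorageAccountNetworkRuleSet -ResourceGroupName 'example-rg' -Name 'examplestorage'
$rules.DefaultAction
```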
-## SC-7 (3) Boundary Protection | Access Points
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you limit the number of external connections to your resources in Azure. This
-blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that helps you monitor
-virtual machines that can support just-in-time access but haven't yet been configured.
-- Just-In-Time network access control should be applied on virtual machines
-
-## SC-7 (4) Boundary Protection | External Telecommunications Services
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you manage exceptions to your traffic flow policy by facilitating the access
-request and approval processes. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that helps you monitor virtual machines that
-can support just-in-time access but haven't yet been configured.
-- Just-In-Time network access control should be applied on virtual machines
-
-## SC-8 (1) Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-
-This blueprint helps you protect the confidentiality and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanisms implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements for protecting information from
-unauthorized disclosure and modification.
-- API App should only be accessible over HTTPS
-- Audit Windows web servers that are not using secure communication protocols
-- Deploy requirements to audit Windows web servers that are not using secure communication protocols
-- Function App should only be accessible over HTTPS
-- Only secure connections to your Redis Cache should be enabled
-- Secure transfer to storage accounts should be enabled
-- Web Application should only be accessible over HTTPS
-
-## SC-28 (1) Protection of Information at Rest | Cryptographic Protection
-
-This blueprint helps you enforce your policy on the use of cryptographic controls to protect
-information at rest by assigning [Azure Policy](../../../policy/overview.md) definitions that
-enforce specific cryptographic controls and audit use of weak cryptographic settings. Understanding
-where your Azure resources may have non-optimal cryptographic configurations can help you take
-corrective actions to ensure resources are configured in accordance with your information security
-policy. Specifically, the policy definitions assigned by this blueprint require encryption for data
-lake storage accounts; require transparent data encryption on SQL databases; and audit missing
-encryption on SQL databases, virtual machine disks, and automation account variables.
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy SQL DB transparent data encryption
-- Disk encryption should be applied on virtual machines
-- Require encryption on Data Lake Store accounts
-- Transparent Data Encryption on SQL databases should be enabled
-
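
To verify transparent data encryption on a specific database outside of the policy evaluation cycle, something like the following works (Az.Sql module; resource names are placeholders).

```powershell
# Minimal sketch: 'Enabled' is the state the assigned policies expect.
Get-AzSqlDatabaseTransparentDataEncryption `
    -ResourceGroupName 'example-rg' `
    -ServerName 'example-sqlserver' `
    -DatabaseName 'example-db' |
    Select-Object DatabaseName, State
```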
-## SI-2 Flaw Remediation
-
-This blueprint helps you manage information system flaws by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor missing system updates,
-operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure
-Security Center. Azure Security Center provides reporting capabilities that enable you to have
-real-time insight into the security state of deployed Azure resources. This blueprint also assigns a
-policy definition that ensures patching of the operating system for virtual machine scale sets.
-- Require automatic OS image patching on Virtual Machine Scale Sets
-- System updates on virtual machine scale sets should be installed
-- System updates should be installed on your virtual machines
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your virtual machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SI-02 (06) Flaw Remediation | Removal of Previous Versions of Software / Firmware
-
-This blueprint assigns policy definitions that help you ensure applications are using the latest
-version of HTTP, Java, PHP, Python, and TLS. This blueprint also assigns
-a policy definition that ensures that Kubernetes Services is upgraded to its non-vulnerable version.
-- Ensure that 'HTTP Version' is the latest, if used to run the API app
-- Ensure that 'HTTP Version' is the latest, if used to run the Function app
-- Ensure that 'HTTP Version' is the latest, if used to run the Web app
-- Ensure that 'Java version' is the latest, if used as a part of the API app
-- Ensure that 'Java version' is the latest, if used as a part of the Function app
-- Ensure that 'Java version' is the latest, if used as a part of the Web app
-- Ensure that 'PHP version' is the latest, if used as a part of the API app
-- Ensure that 'PHP version' is the latest, if used as a part of the WEB app
-- Ensure that 'Python version' is the latest, if used as a part of the API app
-- Ensure that 'Python version' is the latest, if used as a part of the Function app
-- Ensure that 'Python version' is the latest, if used as a part of the Web app
-- Latest TLS version should be used in your API App
-- Latest TLS version should be used in your Function App
-- Latest TLS version should be used in your Web App
-- Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version
-
-## SI-3 Malicious Code Protection
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing
-endpoint protection on virtual machines in Azure Security Center and enforce the Microsoft
-antimalware solution on Windows virtual machines.
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-
-## SI-3 (1) Malicious Code Protection | Central Management
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing
-endpoint protection on virtual machines in Azure Security Center. Azure Security Center provides
-centralized management and reporting capabilities that enable you to have real-time insight into the
-security state of deployed Azure resources.
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-
-## SI-4 Information System Monitoring
-
-This blueprint helps you monitor your system by auditing and enforcing logging and data security
-across Azure resources. Specifically, the policies assigned audit and enforce deployment of the Log
-Analytics agent, and enhanced security settings for SQL databases, storage accounts and network
-resources. These capabilities can help you detect anomalous behavior and indicators of attacks so
-you can take appropriate action.
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- Deploy Log Analytics agent for Linux virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- Deploy Log Analytics agent for Windows virtual machine scale sets
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy Advanced Threat Protection on Storage Accounts
-- Deploy Auditing on SQL servers
-- Deploy network watcher when virtual networks are created
-- Deploy Threat Detection on SQL servers
-- Allowed locations
-- Allowed locations for resource groups
-
-## SI-4 (12) Information System Monitoring | Automated Alerts
-
-This blueprint provides policy definitions that help you ensure data security notifications are
-properly enabled. In addition, this blueprint ensures that the Standard pricing tier is enabled
-for Azure Security Center. Note that the Standard pricing tier enables threat detection for networks
-and virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in
-Azure Security Center.
-- Email notification to subscription owner for high severity alerts should be enabled
-- A security contact email address should be provided for your subscription
-- Email notifications to admins and subscription owners should be enabled in SQL managed instance
- advanced data security settings
-- Email notifications to admins and subscription owners should be enabled in SQL server advanced
- data security settings
-- A security contact phone number should be provided for your subscription
-- Advanced data security settings for SQL server should contain an email address to receive security
- alerts
-- Security Center standard pricing tier should be selected
-
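
A quick way to review the Security Center pricing tier and contact settings referenced by these policies is sketched below (Az.Security module; output property names may vary slightly by module version).

```powershell
# Minimal sketch: 'Standard' tiers enable the threat detection features above.
Get-AzSecurityPricing | Select-Object Name, PricingTier

# Contact settings drive the email and phone notification policies.
Get-AzSecurityContact | Select-Object Email, Phone, AlertNotifications
```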
-## SI-4 (18) Information System Monitoring | Analyze Traffic / Covert Exfiltration
-
-Advanced Threat Protection for Azure Storage detects unusual and potentially harmful attempts to
-access or exploit storage accounts. Protection alerts include anomalous access patterns, anomalous
-extracts/uploads, and suspicious storage activity. These indicators can help you detect covert
-exfiltration of information.
-- Deploy Advanced Threat Protection on Storage Accounts
-
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-Now that you've reviewed the control mapping of the DoD Impact Level 4 blueprint, visit the
-following articles to learn about the blueprint and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 4 blueprint - Overview](./index.md)
-> [DoD Impact Level 4 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
--- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/deploy.md
- Title: DoD Impact Level 4 blueprint sample
-description: Deploy steps for the DoD Impact Level 4 blueprint sample including blueprint artifact parameter details.
Previously updated : 04/13/2021
-# Deploy the DoD Impact Level 4 blueprint sample
-
-To deploy the Azure Blueprints Department of Defense Impact Level 4 (DoD IL4) blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure Government subscription, request a
-[trial subscription](https://azure.microsoft.com/global-infrastructure/government/request/) before
-you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **DoD Impact Level 4** blueprint sample under _Other Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the DoD Impact Level 4 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with DoD Impact Level 4 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
-  **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the DoD
- IL4 blueprint sample." Then select **Publish** at the bottom of the page.
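
The same publish step can be scripted. The sketch below assumes the Az.Blueprint module; the management group ID and blueprint name are placeholders.

```powershell
# Minimal sketch: publish version 1.0 of your copy of the blueprint sample.
$bp = Get-AzBlueprint -ManagementGroupId 'example-mg' -Name 'my-dod-il4-copy'
Publish-AzBlueprint -Blueprint $bp -Version '1.0' `
    -ChangeNote 'First version published from the DoD IL4 blueprint sample.'
```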
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list or artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
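
If you script the assignment instead of using the portal, the outline below shows the general shape (Az.Blueprint module; all identifiers are placeholders, and artifact parameters still need to be supplied for a real deployment).

```powershell
# Minimal sketch: assign the latest published version of your copy to a subscription.
$bp = Get-AzBlueprint -ManagementGroupId 'example-mg' -Name 'my-dod-il4-copy' -LatestPublished

New-AzBlueprintAssignment -Name 'assignment-dod-il4' `
    -Blueprint $bp `
    -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -Location 'usgovvirginia'
```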
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|Allowed locations|Policy Assignment|Allowed Locations|This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements.|
-|Allowed Locations for resource groups|Policy Assignment |Allowed Locations|This policy enables you to restrict the locations your organization can create resource groups in. Use to enforce your geo-compliance requirements.|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
-|Deploy Log Analytics agent for Linux virtual machine scale sets|Policy assignment|Log Analytics workspace for Linux virtual machine scale sets|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics agent for Linux virtual machine scale sets|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics agent for Windows virtual machine scale sets|Policy assignment|Log Analytics workspace for Windows virtual machine scale sets|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics agent for Windows virtual machine scale sets|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Members to be included in the Administrators local group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Members that should be excluded in the Administrators local group|A semicolon-separated list of members that should be excluded from the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Long-term geo-redundant backup should be enabled for Azure SQL Databases|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Vulnerability assessment should be enabled on your SQL managed instances|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Vulnerability assessment should be enabled on your SQL servers|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Geo-redundant storage should be enabled for Storage Accounts|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Geo-redundant backup should be enabled for Azure Database for MySQL|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Geo-redundant backup should be enabled for Azure Database for PostgreSQL|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Web Application should only be accessible over HTTPS|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Function App should only be accessible over HTTPS|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|External accounts with write permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|External accounts with read permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|External accounts with owner permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Deprecated accounts with owner permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|Deprecated accounts should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|CORS shouldn't allow every resource to access your Web Application|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|System updates on virtual machine scale sets should be installed|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|MFA should be enabled on accounts with read permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|MFA should be enabled on accounts with owner permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: DoD Impact Level 4|Policy assignment|MFA should be enabled on accounts with write permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-
-## Next steps
-
-Now that you've reviewed the steps to deploy the DoD Impact Level 4 blueprint sample, visit the following
-articles to learn about the blueprint and control mapping:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 4 blueprint - Overview](./index.md)
-> [DoD Impact Level 4 blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-4/index.md
- Title: DoD Impact Level 4 blueprint sample overview
-description: Overview of the DoD Impact Level 4 sample. This blueprint sample helps customers assess specific DoD Impact Level 4 controls.
Previously updated : 04/02/2021--
-# Overview of the DoD Impact Level 4 blueprint sample
-
-The Department of Defense Impact Level 4 (DoD IL4) blueprint sample provides governance guardrails
-using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 4
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement DoD Impact Level 4 controls. For the latest information on which Azure
-clouds and services meet DoD Impact Level 4 authorization, see
-[Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
-
-> [!NOTE]
-> This blueprint sample is available in Azure Government.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in DoD Impact Level 4. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
-
-## Next steps
-
-You've reviewed the overview of the DoD Impact Level 4 blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 4 blueprint - Control mapping](./control-mapping.md)
-> [DoD Impact Level 4 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md
- Title: DoD Impact Level 5 blueprint sample controls
-description: Control mapping of the DoD Impact Level 5 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 04/02/2021--
-# Control mapping of the DoD Impact Level 5 blueprint sample
-
-The following article details how the Azure Blueprints Department of Defense Impact Level 5 (DoD
-IL5) blueprint sample maps to the DoD Impact Level 5 controls. For more information about the
-controls, see
-[DoD Cloud Computing Security Requirements Guide (SRG)](https://dl.dod.cyber.mil/wp-content/uploads/cloud/pdf/Cloud_Computing_SRG_v1r3.pdf).
-The Defense Information Systems Agency (DISA) is an agency of the US Department of Defense (DoD)
-that is responsible for developing and maintaining the DoD Cloud Computing Security Requirements
-Guide (SRG). The SRG defines the baseline security requirements for cloud service providers (CSPs)
-that host DoD information, systems, and applications, and for DoD's use of cloud services.
-
-The following mappings are to the **DoD Impact Level 5** controls. Use the navigation on the right
-to jump directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: DoD Impact Level 5** built-in policy initiative.
-
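If you'd rather script that lookup than browse the portal, a minimal Azure PowerShell sketch along these lines can locate the built-in initiative. It assumes the Az.Resources module and a signed-in Azure Government context; the display-name filter and property paths are illustrative and may vary by Az module version.

```powershell
# Find the [Preview]: DoD Impact Level 5 built-in policy initiative (display-name filter is illustrative).
$initiative = Get-AzPolicySetDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -like '*DoD Impact Level 5*' }

# Show the initiative display name and a sample of the policy definitions it groups.
# Note: newer Az.Resources versions may expose these properties at the top level instead of under Properties.
$initiative.Properties.DisplayName
$initiative.Properties.PolicyDefinitions | Select-Object -First 10 policyDefinitionId
```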
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/control-mapping.md).
-
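As a hedged illustration of pulling that compliance data outside the portal, the Az.PolicyInsights cmdlets can summarize and list non-compliant resources. The snippet below is a sketch, not part of the blueprint itself.

```powershell
# Summarize policy compliance for the current subscription (requires the Az.PolicyInsights module).
Get-AzPolicyStateSummary | Select-Object -ExpandProperty Results

# List individual non-compliant resource records for review.
Get-AzPolicyState -Filter "ComplianceState eq 'NonCompliant'" |
    Select-Object ResourceId, PolicyDefinitionName, PolicyAssignmentName -First 20
```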
-## AC-2 Account Management
-
-This blueprint helps you review accounts that may not comply with your organization's account
-management requirements. This blueprint assigns [Azure Policy](../../../policy/overview.md)
-definitions that audit external accounts with read, write, and owner permissions on a subscription
-and deprecated accounts. By reviewing the accounts audited by these policies, you can take
-appropriate action to ensure account management requirements are met.
-
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-
-## AC-2 (7) Account Management | Role-Based Schemes
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
-access to Azure resources and their permissions. This blueprint also assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory
-authentication for SQL Servers and Service Fabric. Using Azure Active Directory authentication
-enables simplified permission management and centralized identity management of database users and
-other Microsoft services. Additionally, this blueprint assigns an Azure Policy definition to audit
-the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implemented can
-help you verify need and proper implementation, as custom Azure RBAC rules are error prone.
-
-- An Azure Active Directory administrator should be provisioned for SQL servers
-- Audit usage of custom RBAC rules
-- Service Fabric clusters should only use Azure Active Directory for client authentication
-
-## AC-2 (12) Account Management | Account Monitoring / Atypical Usage
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. All JIT
-requests to access virtual machines are logged in the Activity Log allowing you to monitor for
-atypical usage. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition
-that helps you monitor virtual machines that can support just-in-time access but haven't yet been
-configured.
-
-- Management ports of virtual machines should be protected with just-in-time network access control
-
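One hypothetical way to review those logged JIT requests is to query the Activity Log with Az.Monitor. The operation-name filter below is an assumption based on the Microsoft.Security JIT resource provider and may need adjusting for your environment or module version.

```powershell
# List Microsoft.Security operations from the last 7 days and keep only JIT access requests.
# OperationName is a localizable string in Az.Monitor output; adjust the property path if your version differs.
Get-AzActivityLog -ResourceProvider 'Microsoft.Security' -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $_.OperationName.Value -like '*jitNetworkAccessPolicies*' } |
    Select-Object EventTimestamp, Caller, ResourceId
```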
-## AC-4 Information Flow Enforcement
-
-Cross origin resource sharing (CORS) can allow App Services resources to be requested from an
-outside domain. Microsoft recommends that you allow only required domains to interact with your API,
-function, and web applications. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition to help you monitor CORS resources access
-restrictions in Azure Security Center. Understanding CORS implementations can help you verify that
-information flow controls are implemented.
-
-- CORS should not allow every resource to access your Web Applications
-
-## AC-5 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions that audit the number of owners
-for Azure subscriptions. This blueprint also assigns Azure Policy definitions that help you control
-membership of the Administrators group on Windows virtual machines. Managing subscription owner and
-virtual machine administrator permissions can help you implement appropriate separation of duties.
-
-- A maximum of 3 owners should be designated for your subscription
-- Show audit results from Windows VMs in which the Administrators group contains any of the
-  specified members
-- Show audit results from Windows VMs in which the Administrators group does not contain all of the
-  specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
-  specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group does not contain all
-  of the specified members
-- There should be more than one owner assigned to your subscription
-
-## AC-6 (7) Least Privilege | Review of User Privileges
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
-access to Azure resources and their permissions. This blueprint assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts that should be prioritized
-for review. Reviewing these account indicators can help you ensure least privilege controls are
-implemented.
-
-- A maximum of 3 owners should be designated for your subscription
-- Show audit results from Windows VMs in which the Administrators group contains any of the
-  specified members
-- Show audit results from Windows VMs in which the Administrators group does not contain all of the
-  specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the
-  specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group does not contain all
-  of the specified members
-- There should be more than one owner assigned to your subscription
-
-## AC-17 (1) Remote Access | Automated Monitoring / Control
-
-This blueprint helps you monitor and control remote access by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor whether remote debugging for Azure
-App Service applications is turned off and policy definitions that audit Linux virtual machines that
-allow remote connections from accounts without passwords. This blueprint also assigns an Azure
-Policy definition that helps you monitor unrestricted access to storage accounts. Monitoring these
-indicators can help you ensure remote access methods comply with your security policy.
-
-- Show audit results from Linux VMs that allow remote connections from accounts without passwords
-- Deploy prerequisites to audit Linux VMs that allow remote connections from accounts without
-  passwords
-- Storage accounts should restrict network access
-- Remote debugging should be turned off for API Apps
-- Remote debugging should be turned off for Function Apps
-- Remote debugging should be turned off for Web Applications
-
-## AC-23 Data Mining
-
-This blueprint ensures that auditing and advanced data security are configured on SQL Servers.
-
-- Advanced data security should be enabled on your SQL servers
-- Advanced data security should be enabled on SQL Managed Instance
-- Auditing on SQL server should be enabled
-
-## AU-3 (2) Content of Audit Records | Centralized Management of Planned Audit Record Content
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-configuration and management. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- Audit Log Analytics workspace for VM - Report Mismatch
-- The Log Analytics agent should be installed on Virtual Machine Scale Sets
-- The Log Analytics agent should be installed on virtual machines
-
-## AU-5 Response to Audit Processing Failures
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that monitor
-audit and event logging configurations. Monitoring these configurations can provide an indicator of
-an audit system failure or misconfiguration and help you take corrective action.
-
-- Audit diagnostic setting
-- Auditing on SQL server should be enabled
-- Advanced data security should be enabled on SQL Managed Instance
-- Advanced data security should be enabled on your SQL servers
-
-## AU-6 (4) Audit Review, Analysis, and Reporting | Central Review and Analysis
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-reporting and analysis. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- Audit Log Analytics workspace for VM - Report Mismatch
-
-## AU-6 (5) Audit Review, Analysis, and Reporting | Integration / Scanning and Monitoring Capabilities
-
-This blueprint provides policy definitions that audit records with analysis of vulnerability
-assessment on virtual machines, virtual machine scale sets, SQL Database servers, and SQL Managed
-Instance servers. These policy definitions also audit configuration of diagnostic logs to provide
-insight into operations that are performed within Azure resources. These insights provide real-time
-information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
-Azure Sentinel and Azure Security Center as well.
-
-- Audit diagnostic setting
-- Vulnerability assessment should be enabled on SQL Managed Instance
-- Vulnerability assessment should be enabled on your SQL servers
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-
-## AU-12 Audit Generation
-
-This blueprint provides policy definitions that audit and enforce deployment of the Log Analytics
-agent on Azure virtual machines and configuration of audit settings for other Azure resource types.
-These policy definitions also audit configuration of diagnostic logs to provide insight into
-operations that are performed within Azure resources. Additionally, auditing and Advanced Data
-Security are configured on SQL servers.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- Audit Log Analytics workspace for VM - Report Mismatch
-- Audit diagnostic setting
-- Auditing on SQL server should be enabled
-- Advanced data security should be enabled on SQL Managed Instance
-- Advanced data security should be enabled on your SQL servers
-
-## AU-12 (01) Audit Generation | System-Wide / Time-Correlated Audit Trail
-
-This blueprint helps you ensure system events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-This built-in policy requires you to specify an array of resource types to check whether diagnostic
-settings are enabled or not.
-
-- Audit diagnostic setting
-
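For illustration, the sketch below assigns that built-in policy with a sample array of resource types. The definition lookup, assignment name, scope, and resource-type values are placeholders rather than values required by the blueprint.

```powershell
# Assign the built-in 'Audit diagnostic setting' policy with an illustrative list of resource types.
# Requires Az.Resources; property paths may vary by module version.
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -eq 'Audit diagnostic setting' }

New-AzPolicyAssignment -Name 'audit-diagnostic-setting' `
    -Scope "/subscriptions/$((Get-AzContext).Subscription.Id)" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{
        # Example resource types only; supply the types your organization needs to audit.
        listOfResourceTypes = @('Microsoft.KeyVault/vaults', 'Microsoft.Sql/servers/databases')
    }
```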
-## CM-7 (2) Least Functionality | Prevent Program Execution
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control can run in an enforcement mode that prohibits non-approved
-application from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowlist is recommended but has not yet been configured.
-
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CM-7 (5) Least Functionality | Authorized Software / Whitelisting
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control helps you create approved application lists for your virtual
-machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowlist is recommended but has not yet
-been configured.
-
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CM-11 User-Installed Software
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application allowlist solution that can block or prevent specific software from running on your
-virtual machines. Application control can help you enforce and monitor compliance with software
-restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowlist is recommended but
-has not yet been configured.
-
-- Adaptive application controls for defining safe applications should be enabled on your machines
-
-## CP-7 Alternate Processing Site
-
-Azure Site Recovery replicates workloads running on virtual machines from a primary location to a
-secondary location. If an outage occurs at the primary site, the workload fails over to the secondary
-location. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-audits virtual machines without disaster recovery configured. Monitoring this indicator can help you
-ensure necessary contingency controls are in place.
-
-- Audit virtual machines without disaster recovery configured
-
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
-
-This blueprint assigns Azure Policy definitions that audit whether the organization's system backup
-information is transferred electronically to an alternate storage site. For physical shipment of storage metadata,
-consider using Azure Data Box.
-
-- Geo-redundant storage should be enabled for Storage Accounts
-- Geo-redundant backup should be enabled for Azure Database for PostgreSQL
-- Geo-redundant backup should be enabled for Azure Database for MySQL
-- Long-term geo-redundant backup should be enabled for Azure SQL Databases
-
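A quick, informal way to spot-check geo-redundancy alongside these policies is to list storage account SKUs; anything not ending in GRS or GZRS is not geo-redundant. This is a sketch that assumes the Az.Storage module.

```powershell
# List storage account replication SKUs across the current subscription.
Get-AzStorageAccount |
    Select-Object StorageAccountName, ResourceGroupName, @{ Name = 'Sku'; Expression = { $_.Sku.Name } }
```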
-## IA-2 (1) Identification and Authentication (Organizational Users) | Network Access to Privileged Accounts
-
-This blueprint helps you restrict and control privileged access by assigning
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts with owner and/or write
-permissions that don't have multi-factor authentication enabled. Multi-factor authentication helps
-keep accounts secure even if one piece of authentication information is compromised. By monitoring
-accounts without multi-factor authentication enabled, you can identify accounts that may be more
-likely to be compromised.
-
-- MFA should be enabled on accounts with owner permissions on your subscription
-- MFA should be enabled accounts with write permissions on your subscription
-
-## IA-2 (2) Identification and Authentication (Organizational Users) | Network Access to Non-Privileged Accounts
-
-This blueprint helps you restrict and control access by assigning an
-[Azure Policy](../../../policy/overview.md) definition to audit accounts with read permissions that
-don't have multi-factor authentication enabled. Multi-factor authentication helps keep accounts
-secure even if one piece of authentication information is compromised. By monitoring accounts
-without multi-factor authentication enabled, you can identify accounts that may be more likely to be
-compromised.
-
-- MFA should be enabled on accounts with read permissions on your subscription
-
-## IA-5 Authenticator Management
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that audit Linux
-virtual machines that allow remote connections from accounts without passwords and/or have incorrect
-permissions set on the passwd file. This blueprint also assigns policy definitions that audit the
-configuration of the password encryption type for Windows virtual machines. Monitoring these
-indicators helps you ensure that system authenticators comply with your organization's
-identification and authentication policy.
-
-- Show audit results from Linux VMs that do not have the passwd file permissions set to 0644
-- Show audit results from Linux VMs that have accounts without passwords
-- Show audit results from Windows VMs that do not store passwords using reversible encryption
-- Deploy prerequisites to audit Linux VMs that do not have the passwd file permissions set to 0644
-- Deploy prerequisites to audit Linux VMs that have accounts without passwords
-- Deploy prerequisites to audit Windows VMs that do not store passwords using reversible encryption
-
-## IA-5 (1) Authenticator Management | Password-Based Authentication
-
-This blueprint helps you enforce strong passwords by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit Windows virtual machines that
-don't enforce minimum strength and other password requirements. Awareness of virtual machines in
-violation of the password strength policy helps you take corrective actions to ensure passwords for
-all virtual machine user accounts comply with your organization's password policy.
-
-- Show audit results from Windows VMs that allow re-use of the previous 24 passwords
-- Show audit results from Windows VMs that do not have a maximum password age of 70 days
-- Show audit results from Windows VMs that do not have a minimum password age of 1 day
-- Show audit results from Windows VMs that do not have the password complexity setting enabled
-- Show audit results from Windows VMs that do not restrict the minimum password length to 14
-  characters
-- Show audit results from Windows VMs that do not store passwords using reversible encryption
-- Deploy prerequisites to audit Windows VMs that allow re-use of the previous 24 passwords
-- Deploy prerequisites to audit Windows VMs that do not have a maximum password age of 70 days
-- Deploy prerequisites to audit Windows VMs that do not have a minimum password age of 1 day
-- Deploy prerequisites to audit Windows VMs that do not have the password complexity setting enabled
-- Deploy prerequisites to audit Windows VMs that do not restrict the minimum password length to 14
-  characters
-- Deploy prerequisites to audit Windows VMs that do not store passwords using reversible encryption
-
-## IR-6 (2) Incident Reporting | Vulnerabilities Related to Incidents
-
-This blueprint provides policy definitions that audit records with analysis of vulnerability
-assessment on virtual machines, virtual machine scale sets, and SQL servers. These insights provide
-real-time information about the security state of your deployed resources and can help you
-prioritize remediation actions.
-
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities in container security configurations should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-
-## RA-5 Vulnerability Scanning
-
-This blueprint helps you manage information system vulnerabilities by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor operating system
-vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
-Azure Security Center provides reporting capabilities that enable you to have real-time insight into
-the security state of deployed Azure resources. This blueprint also assigns policy definitions that
-audit and enforce Advanced Data Security on SQL servers. Advanced data security includes
-vulnerability assessment and advanced threat protection capabilities to help you understand
-vulnerabilities in your deployed resources.
-
-- Advanced data security should be enabled on SQL Managed Instance
-- Advanced data security should be enabled on your SQL servers
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SC-5 Denial of Service Protection
-
-Azure's distributed denial of service (DDoS) Standard tier provides additional features and
-mitigation capabilities over the basic service tier. These additional features include Azure Monitor
-integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
-enabled. Understanding the capability difference between the service tiers can help you select the
-best solution to address denial of service protections for your Azure environment.
-
-- Azure DDoS Protection Standard should be enabled
-
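As a complementary manual check (a sketch assuming the Az.Network module), you can list which virtual networks report DDoS Protection Standard as enabled.

```powershell
# Show the DDoS Protection Standard flag for each virtual network in the subscription.
Get-AzVirtualNetwork |
    Select-Object Name, ResourceGroupName, EnableDdosProtection
```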
-## SC-7 Boundary Protection
-
-This blueprint helps you manage and control the system boundary by assigning an
-[Azure Policy](../../../policy/overview.md) definition that monitors for network security group
-hardening recommendations in Azure Security Center. Azure Security Center analyzes traffic patterns
-of Internet facing virtual machines and provides network security group rule recommendations to
-reduce the potential attack surface. Additionally, this blueprint also assigns policy definitions
-that monitor unprotected endpoints, applications, and storage accounts. Endpoints and applications
-that aren't protected by a firewall, and storage accounts with unrestricted access can allow
-unintended access to information contained within the information system.
-
-- Access through Internet facing endpoint should be restricted
-- Storage accounts should restrict network access
-
-## SC-7 (3) Boundary Protection | Access Points
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you limit the number of external connections to your resources in Azure. This
-blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that helps you monitor
-virtual machines that can support just-in-time access but haven't yet been configured.
-
-- Management ports of virtual machines should be protected with just-in-time network access control
-
-## SC-7 (4) Boundary Protection | External Telecommunications Services
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you manage exceptions to your traffic flow policy by facilitating the access
-request and approval processes. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that helps you monitor virtual machines that
-can support just-in-time access but haven't yet been configured.
-
-- Just-In-Time network access control should be applied on virtual machines
-
-## SC-8 (1) Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-
-This blueprint helps you protect the confidentiality and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanisms implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements for protecting information from
-unauthorized disclosure and modification.
-
-- API App should only be accessible over HTTPS
-- Show audit results from Windows web servers that are not using secure communication protocols
-- Deploy prerequisites to audit Windows web servers that are not using secure communication
-  protocols
-- Function App should only be accessible over HTTPS
-- Only secure connections to your Azure Cache for Redis should be enabled
-- Secure transfer to storage accounts should be enabled
-- Web Application should only be accessible over HTTPS
-
-## SC-28 (1) Protection of Information at Rest | Cryptographic Protection
-
-This blueprint helps you enforce your policy on the use of cryptographic controls to protect
-information at rest by assigning [Azure Policy](../../../policy/overview.md) definitions that
-enforce specific cryptographic controls and audit use of weak cryptographic settings. Understanding
-where your Azure resources may have non-optimal cryptographic configurations can help you take
-corrective actions to ensure resources are configured in accordance with your information security
-policy. Specifically, the policy definitions assigned by this blueprint require encryption for data
-lake storage accounts; require transparent data encryption on SQL databases; and audit missing
-encryption on SQL databases, virtual machine disks, and automation account variables.
-
-- Advanced data security should be enabled on SQL Managed Instance
-- Advanced data security should be enabled on your SQL servers
-- Disk encryption should be applied on virtual machines
-- Transparent Data Encryption on SQL databases should be enabled
-
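If you want to spot-check individual resources alongside these policies, the following sketch uses Az.Compute and Az.Sql cmdlets; the resource group, VM, server, and database names are placeholders.

```powershell
# Check Azure Disk Encryption status for a virtual machine (placeholder names).
Get-AzVMDiskEncryptionStatus -ResourceGroupName 'my-rg' -VMName 'my-vm'

# Check Transparent Data Encryption state for a SQL database (placeholder names).
Get-AzSqlDatabaseTransparentDataEncryption -ResourceGroupName 'my-rg' -ServerName 'my-sqlserver' -DatabaseName 'my-db'
```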
-## SI-2 Flaw Remediation
-
-This blueprint helps you manage information system flaws by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor missing system updates,
-operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure
-Security Center. Azure Security Center provides reporting capabilities that enable you to have
-real-time insight into the security state of deployed Azure resources. This blueprint also assigns a
-policy definition that ensures patching of the operating system for virtual machine scale sets.
-
-- System updates on virtual machine scale sets should be installed
-- System updates should be installed on your machines
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SI-02 (06) Flaw Remediation | Removal of Previous Versions of Software / Firmware
-
-This blueprint assigns policy definitions that help you ensure applications are using the latest
-version of HTTP, Java, PHP, Python, and TLS. This blueprint also assigns a policy definition that
-ensures that Kubernetes Services is upgraded to its non-vulnerable version.
-
-- Ensure that 'HTTP Version' is the latest, if used to run the API app
-- Ensure that 'HTTP Version' is the latest, if used to run the Function app
-- Ensure that 'HTTP Version' is the latest, if used to run the Web app
-- Ensure that 'Java version' is the latest, if used as a part of the API app
-- Ensure that 'Java version' is the latest, if used as a part of the Function app
-- Ensure that 'Java version' is the latest, if used as a part of the Web app
-- Ensure that 'PHP version' is the latest, if used as a part of the API app
-- Ensure that 'PHP version' is the latest, if used as a part of the WEB app
-- Ensure that 'Python version' is the latest, if used as a part of the API app
-- Ensure that 'Python version' is the latest, if used as a part of the Function app
-- Ensure that 'Python version' is the latest, if used as a part of the Web app
-- Latest TLS version should be used in your API App
-- Latest TLS version should be used in your Function App
-- Latest TLS version should be used in your Web App
-- Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version
-
-## SI-3 Malicious Code Protection
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing endpoint
-protection on virtual machines in Azure Security Center and enforce the Microsoft antimalware
-solution on Windows virtual machines.
-
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-- Microsoft IaaSAntimalware extension should be deployed on Windows servers
-
-## SI-3 (1) Malicious Code Protection | Central Management
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing endpoint
-protection on virtual machines in Azure Security Center. Azure Security Center provides centralized
-management and reporting capabilities that enable you to have real-time insight into the security
-state of deployed Azure resources.
-
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-
-## SI-4 Information System Monitoring
-
-This blueprint helps you monitor your system by auditing and enforcing logging and data security
-across Azure resources. Specifically, the policies assigned audit and enforce deployment of the Log
-Analytics agent, and enhanced security settings for SQL databases, storage accounts and network
-resources. These capabilities can help you detect anomalous behavior and indicators of attacks so
-you can take appropriate action.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted
-- Audit Log Analytics workspace for VM - Report Mismatch
-- Advanced data security should be enabled on SQL Managed Instance
-- Advanced data security should be enabled on your SQL servers
-- Network Watcher should be enabled
-
-## SI-4 (12) Information System Monitoring | Automated Alerts
-
-This blueprint provides policy definitions that help you ensure data security notifications are
-properly enabled. In addition, this blueprint ensures that the Standard pricing tier is enabled for
-Azure Security Center. Note that the Standard pricing tier enables threat detection for networks and
-virtual machines, providing threat intelligence, anomaly detection, and behavior analytics in Azure
-Security Center.
-
-- Email notification to subscription owner for high severity alerts should be enabled
-- A security contact email address should be provided for your subscription
-- A security contact phone number should be provided for your subscription
-
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-Now that you've reviewed the control mapping of the DoD Impact Level 5 blueprint, visit the following
-articles to learn about the blueprint and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 5 blueprint - Overview](./index.md)
-> [DoD Impact Level 5 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/deploy.md
- Title: DoD Impact Level 5 blueprint sample
-description: Deploy steps for the DoD Impact Level 5 blueprint sample including blueprint artifact parameter details.
Previously updated : 04/13/2021--
-# Deploy the DoD Impact Level 5 blueprint sample
-
-To deploy the Azure Blueprints Department of Defense Impact Level 5 (DoD IL5) blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure Government subscription, request a
-[trial subscription](https://azure.microsoft.com/global-infrastructure/government/request/) before
-you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **DoD Impact Level 5** blueprint sample under _Other Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the DoD Impact Level 5 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with DoD Impact Level 5 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the DoD
- IL5 blueprint sample." Then select **Publish** at the bottom of the page.
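
An equivalent scripted publish (a sketch assuming the Az.Blueprint module and placeholder names, not a required step) looks like this:

```powershell
# Retrieve the draft and publish it as version 1.0 (placeholder management group and blueprint names).
$bp = Get-AzBlueprint -ManagementGroupId 'my-mg' -Name 'DoD-IL5-copy'
Publish-AzBlueprint -Blueprint $bp -Version '1.0'
```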
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
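
For repeatable assignments, the same operation can be scripted. The sketch below assumes the Az.Blueprint module; the assignment name, subscription ID, location, lock mode, and any parameter values are placeholders you'd replace with your own.

```powershell
# Assign the latest published version of the blueprint to a subscription (all names/IDs are placeholders).
$bp = Get-AzBlueprint -ManagementGroupId 'my-mg' -Name 'DoD-IL5-copy' -LatestPublished

New-AzBlueprintAssignment -Name 'assignment-dod-il5' `
    -Blueprint $bp `
    -SubscriptionId '00000000-0000-0000-0000-000000000000' `
    -Location 'usgovvirginia' `
    -Lock AllResourcesDoNotDelete
# Add -Parameter @{ ... } for any dynamic artifact parameters your copy of the blueprint requires.

# Check the deployment status of the assignment.
Get-AzBlueprintAssignment -Name 'assignment-dod-il5' -SubscriptionId '00000000-0000-0000-0000-000000000000'
```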
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|DoD Impact Level 5|Policy Assignment|List of users that must be included in Windows VM Administrators group|A semicolon-separated list of users that should be included in the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|DoD Impact Level 5|Policy Assignment|List of users excluded from Windows VM Administrators group|A semicolon-separated list of users that should be excluded from the Administrators local group; Ex: Administrator; myUser1; myUser2|
-|DoD Impact Level 5|Policy Assignment|List of resource types that should have diagnostic logs enabled||
-|DoD Impact Level 5|Policy Assignment|Log Analytics workspace ID for VM agent reporting|ID (GUID) of the Log Analytics workspace where VMs agents should report|
-|DoD Impact Level 5|Policy Assignment|List of regions where Network Watcher should be enabled|To see a complete list of regions, use Get-AzLocation.|
-|DoD Impact Level 5|Policy Assignment|Minimum TLS version for Windows web servers|The minimum TLS protocol version that should be enabled on Windows web servers|
-|DoD Impact Level 5|Policy Assignment|Latest PHP version|Latest supported PHP version for App Services|
-|DoD Impact Level 5|Policy Assignment|Latest Java version|Latest supported Java version for App Services|
-|DoD Impact Level 5|Policy Assignment|Latest Windows Python version|Latest supported Python version for App Services|
-|DoD Impact Level 5|Policy Assignment|Latest Linux Python version|Latest supported Python version for App Services|
-|DoD Impact Level 5|Policy Assignment|Optional: List of Windows VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images; Ex: /subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage|
-|DoD Impact Level 5|Policy Assignment|Optional: List of Linux VM images that support Log Analytics agent to add to audit scope|A semicolon-separated list of images; Ex: /subscriptions/<subscriptionId>/resourceGroups/YourResourceGroup/providers/Microsoft.Compute/images/ContosoStdImage|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: There should be more than one owner assigned to your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Disk encryption should be applied on virtual machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Email notification to subscription owner for high severity alerts should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Remote debugging should be turned off for Function Apps|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that '.NET Framework' version is the latest, if used as a part of the Function App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Transparent Data Encryption on SQL databases should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL managed instances|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: An Azure Active Directory administrator should be provisioned for SQL servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Only secure connections to your Redis Cache should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Endpoint protection solution should be installed on virtual machine scale sets|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Audit unrestricted network access to storage accounts|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced data security settings for SQL managed instance should contain an email address to receive security alerts|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your virtual machine scale sets should be remediated|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Secure transfer to storage accounts should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Adaptive Application Controls should be enabled on virtual machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for PostgreSQL|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: A maximum of 3 owners should be designated for your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: A security contact email address should be provided for your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: CORS should not allow every resource to access your Web Applications|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: External accounts with write permissions should be removed from your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: External accounts with read permissions should be removed from your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Deprecated accounts should be removed from your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Function App should only be accessible over HTTPS|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the WEB app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Python version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities should be remediated by a Vulnerability Assessment solution|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Geo-redundant backup should be enabled for Azure Database for MySQL|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that '.NET Framework' version is the latest, if used as a part of the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: System updates should be installed on your machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Web app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Latest TLS version should be used in your API App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: MFA should be enabled accounts with write permissions on your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced data security settings for SQL server should contain an email address to receive security alerts|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Microsoft IaaSAntimalware extension should be deployed on Windows servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'Java version' is the latest, if used as a part of the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Access through Internet facing endpoint should be restricted|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Security Center standard pricing tier should be selected|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Audit usage of custom RBAC rules|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Web Application should only be accessible over HTTPS|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Auditing on SQL server should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on virtual machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: DDoS Protection Standard should be enabled|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: MFA should be enabled on accounts with owner permissions on your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'PHP version' is the latest, if used as a part of the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced data security should be enabled on your SQL managed instances|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Email notifications to admins and subscription owners should be enabled in SQL managed instance advanced data security settings|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Monitor missing Endpoint Protection in Azure Security Center|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Just-In-Time network access control should be applied on virtual machines|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: A security contact phone number should be provided for your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Service Fabric clusters should only use Azure Active Directory for client authentication|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: API App should only be accessible over HTTPS|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced Threat Protection types should be set to 'All' in SQL managed instance Advanced Data Security settings|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Geo-redundant storage should be enabled for Storage Accounts|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that '.NET Framework' version is the latest, if used as a part of the API app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: System updates on virtual machine scale sets should be installed|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Email notifications to admins and subscription owners should be enabled in SQL server advanced data security settings|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Remote debugging should be turned off for Web Applications|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Long-term geo-redundant backup should be enabled for Azure SQL Databases|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities in security configuration on your machines should be remediated|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Ensure that 'HTTP Version' is the latest, if used to run the Function app|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: MFA should be enabled on accounts with read permissions on your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Advanced Threat Protection types should be set to 'All' in SQL server Advanced Data Security settings|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities in container security configurations should be remediated|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Remote debugging should be turned off for API Apps|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Deprecated accounts with owner permissions should be removed from your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerability assessment should be enabled on your SQL servers|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: The Log Analytics agent should be installed on Virtual Machine Scale Sets|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Latest TLS version should be used in your Web App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: External accounts with owner permissions should be removed from your subscription|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Latest TLS version should be used in your Function App|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: [Preview]: Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-|DoD Impact Level 5|Policy Assignment|Effect for policy: Vulnerabilities on your SQL databases should be remediated|Azure Policy effect for this policy; for more information about effects, visit https://aka.ms/policyeffects|
-
-## Next steps
-
-Now that you've reviewed the steps to deploy the DoD Impact Level 5 blueprint sample, visit the following
-articles to learn about the blueprint and control mapping:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 5 blueprint - Overview](./index.md)
-> [DoD Impact Level 5 blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/dod-impact-level-5/index.md
- Title: DoD Impact Level 5 blueprint sample overview
-description: Overview of the DoD Impact Level 5 sample. This blueprint sample helps customers assess specific DoD Impact Level 5 controls.
Previously updated: 04/02/2021
-# Overview of the DoD Impact Level 5 blueprint sample
-
-The Department of Defense Impact Level 5 (DoD IL5) blueprint sample provides governance guardrails
-using [Azure Policy](../../../policy/overview.md) that help you assess specific DoD Impact Level 5
-controls. This blueprint helps customers deploy a core set of policies for any Azure-deployed
-architecture that must implement DoD Impact Level 5 controls. For the latest information on which Azure
-Clouds and Services meet DoD Impact Level 5 authorization, see
-[Azure services by FedRAMP and DoD CC SRG audit scope](../../../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
-
-> [!NOTE]
-> This blueprint sample is available in Azure Government.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in DoD Impact Level 5. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
-
-## Next steps
-
-You've reviewed the overview of the DoD Impact Level 5 blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [DoD Impact Level 5 blueprint - Control mapping](./control-mapping.md)
-> [DoD Impact Level 5 blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md
- Title: FedRAMP High blueprint sample controls
-description: Control mapping of the FedRAMP High blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated: 04/02/2021
-# Control mapping of the FedRAMP High blueprint sample
-
-The following article details how the Azure Blueprints FedRAMP High blueprint sample maps to the
-FedRAMP High controls. For more information about the controls, see
-[FedRAMP Security Controls Baseline](https://www.fedramp.gov/).
-
-The following mappings are to the **FedRAMP High** controls. Use the navigation on the right to jump
-directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit
-requirements** built-in policy initiative.
-
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/control-mapping.md).
-
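For a quick aggregate view of how the assigned policies are evaluating, you can also pull a compliance summary programmatically. The following is a minimal sketch, not part of the blueprint sample itself, that calls the Azure Policy Insights `summarize` REST operation; it assumes the `requests` and `azure-identity` Python packages and Reader access on the target subscription, and the subscription ID is a placeholder.

```python
# Minimal sketch: summarize Azure Policy compliance for a subscription.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.post(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2019-10-01"},
)
resp.raise_for_status()

# The summarize response nests totals under value[0].results.
results = resp.json()["value"][0]["results"]
print("Non-compliant resources:", results["nonCompliantResources"])
print("Non-compliant policies:", results["nonCompliantPolicies"])
```

Keep the caveat above in mind: a compliant result from these policies doesn't by itself demonstrate compliance with the corresponding control.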
-## AC-2 Account Management
-
-This blueprint helps you review accounts that may not comply with your organization's account
-management requirements. This blueprint assigns [Azure Policy](../../../policy/overview.md)
-definitions that audit external accounts with read, write, and owner permissions on a subscription
-and deprecated accounts. By reviewing the accounts audited by these policies, you can take
-appropriate action to ensure account management requirements are met.
- Deprecated accounts should be removed from your subscription
- Deprecated accounts with owner permissions should be removed from your subscription
- External accounts with owner permissions should be removed from your subscription
- External accounts with read permissions should be removed from your subscription
- External accounts with write permissions should be removed from your subscription
-## AC-2 (7) Account Management | Role-Based Schemes
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
-access to Azure resources and their permissions. This blueprint also assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory
-authentication for SQL Servers and Service Fabric. Using Azure Active Directory authentication
-enables simplified permission management and centralized identity management of database users and
-other Microsoft services. Additionally, this blueprint assigns an Azure Policy definition to audit
-the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implemented can
-help you verify need and proper implementation, as custom Azure RBAC rules are error prone.
- An Azure Active Directory administrator should be provisioned for SQL servers
- Audit usage of custom RBAC rules
- Service Fabric clusters should only use Azure Active Directory for client authentication
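To see where custom Azure RBAC rules exist in a subscription, one option is to query role definitions filtered to custom roles. A minimal sketch, assuming the `requests` and `azure-identity` Python packages and a placeholder subscription ID:

```python
# Minimal sketch: list custom Azure RBAC role definitions in a subscription.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Authorization/roleDefinitions",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-04-01", "$filter": "type eq 'CustomRole'"},
)
resp.raise_for_status()

for role in resp.json()["value"]:
    print(role["properties"]["roleName"])
```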
-## AC-2 (12) Account Management | Account Monitoring / Atypical Usage
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. All JIT
-requests to access virtual machines are logged in the Activity Log allowing you to monitor for
-atypical usage. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition
-that helps you monitor virtual machines that can support just-in-time access but haven't yet been
-configured.
- Just-In-Time network access control should be applied on virtual machines
-## AC-4 Information Flow Enforcement
-
-Cross origin resource sharing (CORS) can allow App Services resources to be requested from an
-outside domain. Microsoft recommends that you allow only required domains to interact with your API,
-function, and web applications. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition to help you monitor CORS resources access
-restrictions in Azure Security Center. Understanding CORS implementations can help you verify that
-information flow controls are implemented.
- CORS should not allow every resource to access your Web Application
-## AC-5 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions that audit the number of owners
-for Azure subscriptions. This blueprint also assigns Azure Policy definitions that help you control
-membership of the Administrators group on Windows virtual machines. Managing subscription owner and
-virtual machine administrator permissions can help you implement appropriate separation of duties.
- A maximum of 3 owners should be designated for your subscription
- Audit Windows VMs in which the Administrators group contains any of the specified members
- Audit Windows VMs in which the Administrators group does not contain all of the specified members
- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the specified members
- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of the specified members
- There should be more than one owner assigned to your subscription
-## AC-6 (7) Least Privilege | Review of User Privileges
-
-Azure implements
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md) to
-help you manage who has access to resources in Azure. Using the Azure portal, you can review who has
-access to Azure resources and their permissions. This blueprint assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts that should be prioritized
-for review. Reviewing these account indicators can help you ensure least privilege controls are
-implemented.
- A maximum of 3 owners should be designated for your subscription
- Audit Windows VMs in which the Administrators group contains any of the specified members
- Audit Windows VMs in which the Administrators group does not contain all of the specified members
- Deploy requirements to audit Windows VMs in which the Administrators group contains any of the specified members
- Deploy requirements to audit Windows VMs in which the Administrators group does not contain all of the specified members
- There should be more than one owner assigned to your subscription
-## AC-17 (1) Remote Access | Automated Monitoring / Control
-
-This blueprint helps you monitor and control remote access by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor whether remote debugging for Azure
-App Service applications is turned off, and policy definitions that audit Linux virtual machines that
-allow remote connections from accounts without passwords. This blueprint also assigns an Azure
-Policy definition that helps you monitor unrestricted access to storage accounts. Monitoring these
-indicators can help you ensure remote access methods comply with your security policy.
- \[Preview\]: Audit Linux VMs that allow remote connections from accounts without passwords
- \[Preview\]: Deploy requirements to audit Linux VMs that allow remote connections from accounts without passwords
- Audit unrestricted network access to storage accounts
- Remote debugging should be turned off for API App
- Remote debugging should be turned off for Function App
- Remote debugging should be turned off for Web Application
-## AU-3 (2) Content of Audit Records | Centralized Management of Planned Audit Record Content
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-configuration and management. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
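To spot-check agent deployment on an individual machine outside of Azure Policy, you can list the VM's extensions and look for the Log Analytics agent. A minimal sketch, assuming the `requests` and `azure-identity` Python packages; the subscription, resource group, and VM names are placeholders.

```python
# Minimal sketch: check whether a VM has the Log Analytics agent extension installed.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
vm_name = "<vm-name>"                   # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Compute"
    f"/virtualMachines/{vm_name}/extensions",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-08-01"},
)
resp.raise_for_status()

# Windows and Linux Log Analytics agent extension types.
agent_types = {"MicrosoftMonitoringAgent", "OmsAgentForLinux"}
installed = [ext["properties"]["type"] for ext in resp.json()["value"]]
print("Log Analytics agent present:", any(t in agent_types for t in installed))
```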
-## AU-5 Response to Audit Processing Failures
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that monitor audit
-and event logging configurations. Monitoring these configurations can provide an indicator of an
-audit system failure or misconfiguration and help you take corrective action.
- Audit diagnostic setting
- Auditing should be enabled on advanced data security settings on SQL Server
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
-## AU-6 (4) Audit Review, Analysis, and Reporting | Central Review and Analysis
-
-Log data collected by Azure Monitor is stored in a Log Analytics workspace enabling centralized
-reporting and analysis. This blueprint helps you ensure events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit and enforce deployment of the Log
-Analytics agent on Azure virtual machines.
- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-## AU-6 (5) Audit Review, Analysis, and Reporting | Integration / Scanning and Monitoring Capabilities
-
-This blueprint provides policy definitions that audit records with analysis of vulnerability
-assessment on virtual machines, virtual machine scale sets, SQL Database servers, and SQL Managed
-Instance servers. These policy definitions also audit configuration of diagnostic logs to provide
-insight into operations that are performed within Azure resources. These insights provide real-time
-information about the security state of your deployed resources and can help you prioritize
-remediation actions. For detailed vulnerability scanning and monitoring, we recommend you use
-Azure Sentinel and Azure Security Center as well.
- \[Preview\]: Vulnerability Assessment should be enabled on Virtual Machines
- \[Preview\]: Enable Azure Monitor for VMs
- \[Preview\]: Enable Azure Monitor for VM Scale Sets (VMSS)
- Vulnerability assessment should be enabled on your SQL servers
- Audit diagnostic setting
- Vulnerability assessment should be enabled on your SQL managed instances
- Vulnerability assessment should be enabled on your SQL servers
- Vulnerabilities in security configuration on your machines should be remediated
- Vulnerabilities on your SQL databases should be remediated
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-## AU-12 Audit Generation
-
-This blueprint provides policy definitions that audit and enforce deployment of the Log Analytics
-agent on Azure virtual machines and configuration of audit settings for other Azure resource types.
-These policy definitions also audit configuration of diagnostic logs to provide insight into
-operations that are performed within Azure resources. Additionally, auditing and Advanced Data
-Security are configured on SQL servers.
- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
- Audit diagnostic setting
- Auditing should be enabled on advanced data security settings on SQL Server
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
- Deploy Advanced Data Security on SQL servers
- Deploy Auditing on SQL servers
- Deploy Diagnostic Settings for Network Security Groups
-## AU-12 (01) Audit Generation | System-Wide / Time-Correlated Audit Trail
-
-This blueprint helps you ensure system events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-This built-in policy requires you to specify an array of resource types to check whether diagnostic
-settings are enabled or not.
- Audit diagnostic setting
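If you want to spot-check a specific resource outside of Azure Policy, its diagnostic settings can be read directly from Azure Monitor. A minimal sketch, assuming the `requests` and `azure-identity` Python packages; the resource ID is a placeholder.

```python
# Minimal sketch: list diagnostic settings configured on a single Azure resource.
import requests
from azure.identity import DefaultAzureCredential

# Placeholder: full ARM resource ID of the resource to inspect.
resource_id = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com{resource_id}/providers/Microsoft.Insights/diagnosticSettings",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2021-05-01-preview"},
)
resp.raise_for_status()

settings = resp.json().get("value", [])
if not settings:
    print("No diagnostic settings configured on this resource.")
for setting in settings:
    print(setting["name"], "->", setting["properties"].get("workspaceId"))
```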
-## CM-7 (2) Least Functionality | Prevent Program Execution
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control can run in an enforcement mode that prohibits non-approved
-application from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowlist is recommended but has not yet been configured.
- Adaptive Application Controls should be enabled on virtual machines
-## CM-7 (5) Least Functionality | Authorized Software / Whitelisting
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control helps you create approved application lists for your virtual
-machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowlist is recommended but has not yet
-been configured.
- Adaptive Application Controls should be enabled on virtual machines
-## CM-11 User-Installed Software
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control can help you enforce and monitor compliance with software
-restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowlist is recommended
-but has not yet been configured.
- Adaptive Application Controls should be enabled on virtual machines
-## CP-7 Alternate Processing Site
-
-Azure Site Recovery replicates workloads running on virtual machines from a primary location to a
-secondary location. If an outage occurs at the primary site, the workload fails over to the secondary
-location. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-audits virtual machines without disaster recovery configured. Monitoring this indicator can help you
-ensure necessary contingency controls are in place.
- Audit virtual machines without disaster recovery configured
-## CP-9 (05) Information System Backup | Transfer to Alternate Storage Site
-
-This blueprint assigns Azure Policy definitions that audit whether the organization's system backup
-information is transferred to the alternate storage site electronically. For physical shipment of storage metadata,
-consider using Azure Data Box.
- Geo-redundant storage should be enabled for Storage Accounts
- Geo-redundant backup should be enabled for Azure Database for PostgreSQL
- Geo-redundant backup should be enabled for Azure Database for MySQL
- Geo-redundant backup should be enabled for Azure Database for MariaDB
- Long-term geo-redundant backup should be enabled for Azure SQL Databases
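As a quick check outside of Azure Policy, you can list storage accounts and inspect their SKUs for geo-redundancy. A minimal sketch, assuming the `requests` and `azure-identity` Python packages and a placeholder subscription ID:

```python
# Minimal sketch: list storage accounts whose SKU is not geo-redundant.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Storage/storageAccounts",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-09-01"},
)
resp.raise_for_status()

for account in resp.json()["value"]:
    sku = account["sku"]["name"]  # e.g. Standard_LRS, Standard_GRS, Standard_RAGRS
    if "GRS" not in sku and "GZRS" not in sku:
        print("Not geo-redundant:", account["name"], sku)
```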
-## IA-2 (1) Identification and Authentication (Organizational Users) | Network Access to Privileged Accounts
-
-This blueprint helps you restrict and control privileged access by assigning
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts with owner and/or write
-permissions that don't have multi-factor authentication enabled. Multi-factor authentication helps
-keep accounts secure even if one piece of authentication information is compromised. By monitoring
-accounts without multi-factor authentication enabled, you can identify accounts that may be more
-likely to be compromised.
- MFA should be enabled on accounts with owner permissions on your subscription
- MFA should be enabled on accounts with write permissions on your subscription
-## IA-2 (2) Identification and Authentication (Organizational Users) | Network Access to Non-Privileged Accounts
-
-This blueprint helps you restrict and control access by assigning an
-[Azure Policy](../../../policy/overview.md) definition to audit accounts with read permissions that
-don't have multi-factor authentication enabled. Multi-factor authentication helps keep accounts
-secure even if one piece of authentication information is compromised. By monitoring accounts
-without multi-factor authentication enabled, you can identify accounts that may be more likely to be
-compromised.
- MFA should be enabled on accounts with read permissions on your subscription
-## IA-5 Authenticator Management
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that audit Linux
-virtual machines that allow remote connections from accounts without passwords and/or have incorrect
-permissions set on the passwd file. This blueprint also assigns policy definitions that audit the
-configuration of the password encryption type for Windows virtual machines. Monitoring these
-indicators helps you ensure that system authenticators comply with your organization's
-identification and authentication policy.
- \[Preview\]: Audit Linux VMs that do not have the passwd file permissions set to 0644
- \[Preview\]: Audit Linux VMs that have accounts without passwords
- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
- \[Preview\]: Deploy requirements to audit Linux VMs that do not have the passwd file permissions set to 0644
- \[Preview\]: Deploy requirements to audit Linux VMs that have accounts without passwords
- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
-
-## IA-5 (1) Authenticator Management | Password-Based Authentication
-
-This blueprint helps you enforce strong passwords by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit Windows virtual machines that
-don't enforce minimum strength and other password requirements. Awareness of virtual machines in
-violation of the password strength policy helps you take corrective actions to ensure passwords for
-all virtual machine user accounts comply with your organization's password policy.
- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
- \[Preview\]: Audit Windows VMs that do not have a minimum password age of 1 day
- \[Preview\]: Audit Windows VMs that do not have the password complexity setting enabled
- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24 passwords
- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of 70 days
- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a minimum password age of 1 day
- \[Preview\]: Deploy requirements to audit Windows VMs that do not have the password complexity setting enabled
- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password length to 14 characters
- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
-
-## RA-5 Vulnerability Scanning
-
-This blueprint helps you manage information system vulnerabilities by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor operating system
-vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
-Azure Security Center provides reporting capabilities that enable you to have real-time insight into
-the security state of deployed Azure resources. This blueprint also assigns policy definitions that
-audit and enforce Advanced Data Security on SQL servers. Advanced data security includes
-vulnerability assessment and advanced threat protection capabilities to help you understand
-vulnerabilities in your deployed resources.
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
- Deploy Advanced Data Security on SQL servers
- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
- Vulnerabilities in security configuration on your virtual machines should be remediated
- Vulnerabilities on your SQL databases should be remediated
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-## SC-5 Denial of Service Protection
-
-Azure's distributed denial of service (DDoS) Standard tier provides additional features and
-mitigation capabilities over the basic service tier. These additional features include Azure Monitor
-integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
-enabled. Understanding the capability difference between the service tiers can help you select the
-best solution to address denial of service protections for your Azure environment.
- DDoS Protection Standard should be enabled
-## SC-7 Boundary Protection
-
-This blueprint helps you manage and control the system boundary by assigning an
-[Azure Policy](../../../policy/overview.md) definition that monitors for network security group
-hardening recommendations in Azure Security Center. Azure Security Center analyzes traffic patterns
-of Internet facing virtual machines and provides network security group rule recommendations to
-reduce the potential attack surface. Additionally, this blueprint also assigns policy definitions
-that monitor unprotected endpoints, applications, and storage accounts. Endpoints and applications
-that aren't protected by a firewall, and storage accounts with unrestricted access can allow
-unintended access to information contained within the information system.
- Network Security Group Rules for Internet facing virtual machines should be hardened
- Access through Internet facing endpoint should be restricted
- Web ports should be restricted on Network Security Groups associated to your VM
- Audit unrestricted network access to storage accounts
-## SC-7 (3) Boundary Protection | Access Points
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you limit the number of external connections to your resources in Azure. This
-blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that helps you monitor
-virtual machines that can support just-in-time access but haven't yet been configured.
- Just-In-Time network access control should be applied on virtual machines
-## SC-7 (4) Boundary Protection | External Telecommunications Services
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you manage exceptions to your traffic flow policy by facilitating the access
-request and approval processes. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that helps you monitor virtual machines that
-can support just-in-time access but haven't yet been configured.
- Just-In-Time network access control should be applied on virtual machines
-## SC-8 (1) Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-
-This blueprint helps you protect the confidentiality and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanisms implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements for protecting information from
-unauthorized disclosure and modification.
- API App should only be accessible over HTTPS
- Audit Windows web servers that are not using secure communication protocols
- Deploy requirements to audit Windows web servers that are not using secure communication protocols
- Function App should only be accessible over HTTPS
- Only secure connections to your Redis Cache should be enabled
- Secure transfer to storage accounts should be enabled
- Web Application should only be accessible over HTTPS
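To confirm the HTTPS-only setting directly on App Service apps, you can read the `httpsOnly` property from the Web Apps resource provider. A minimal sketch, assuming the `requests` and `azure-identity` Python packages and a placeholder subscription ID:

```python
# Minimal sketch: flag App Service apps in a subscription that still allow HTTP.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Web/sites",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2022-03-01"},
)
resp.raise_for_status()

for site in resp.json()["value"]:
    if not site["properties"].get("httpsOnly", False):
        print("HTTPS not enforced:", site["name"])
```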
-## SC-28 (1) Protection of Information at Rest | Cryptographic Protection
-
-This blueprint helps you enforce your policy on the use of cryptographic controls to protect
-information at rest by assigning [Azure Policy](../../../policy/overview.md) definitions that
-enforce specific cryptographic controls and audit use of weak cryptographic settings. Understanding
-where your Azure resources may have non-optimal cryptographic configurations can help you take
-corrective actions to ensure resources are configured in accordance with your information security
-policy. Specifically, the policy definitions assigned by this blueprint require encryption for data
-lake storage accounts; require transparent data encryption on SQL databases; and audit missing
-encryption on SQL databases, virtual machine disks, and automation account variables.
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
- Deploy Advanced Data Security on SQL servers
- Deploy SQL DB transparent data encryption
- Disk encryption should be applied on virtual machines
- Require encryption on Data Lake Store accounts
- Transparent Data Encryption on SQL databases should be enabled
-## SI-2 Flaw Remediation
-
-This blueprint helps you manage information system flaws by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor missing system updates,
-operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure
-Security Center. Azure Security Center provides reporting capabilities that enable you to have
-real-time insight into the security state of deployed Azure resources. This blueprint also assigns a
-policy definition that ensures patching of the operating system for virtual machine scale sets.
- Require automatic OS image patching on Virtual Machine Scale Sets
- System updates on virtual machine scale sets should be installed
- System updates should be installed on your virtual machines
- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
- Vulnerabilities in security configuration on your virtual machines should be remediated
- Vulnerabilities on your SQL databases should be remediated
- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-## SI-3 Malicious Code Protection
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing
-endpoint protection on virtual machines in Azure Security Center and enforce the Microsoft
-antimalware solution on Windows virtual machines.
- Deploy default Microsoft IaaSAntimalware extension for Windows Server
- Endpoint protection solution should be installed on virtual machine scale sets
- Monitor missing Endpoint Protection in Azure Security Center
-## SI-3 (1) Malicious Code Protection | Central Management
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing endpoint
-protection on virtual machines in Azure Security Center. Azure Security Center provides centralized
-management and reporting capabilities that enable you to have real-time insight into the security
-state of deployed Azure resources.
- Endpoint protection solution should be installed on virtual machine scale sets
- Monitor missing Endpoint Protection in Azure Security Center
-## SI-4 Information System Monitoring
-
-This blueprint helps you monitor your system by auditing and enforcing logging and data security
-across Azure resources. Specifically, the policies assigned audit and enforce deployment of the Log
-Analytics agent, and enhanced security settings for SQL databases, storage accounts and network
-resources. These capabilities can help you detect anomalous behavior and indicators of attacks so
-you can take appropriate action.
- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
- Advanced data security should be enabled on your managed instances
- Advanced data security should be enabled on your SQL servers
- Deploy Advanced Data Security on SQL servers
- Deploy Advanced Threat Protection on Storage Accounts
- Deploy Auditing on SQL servers
- Deploy network watcher when virtual networks are created
- Deploy Threat Detection on SQL servers
- Allowed locations
- Allowed locations for resource groups
-## SI-4 (18) Information System Monitoring | Analyze Traffic / Covert Exfiltration
-
-Advanced Threat Protection for Azure Storage detects unusual and potentially harmful attempts to
-access or exploit storage accounts. Protection alerts include anomalous access patterns, anomalous
-extracts/uploads, and suspicious storage activity. These indicators can help you detect covert
-exfiltration of information.
- Deploy Advanced Threat Protection on Storage Accounts
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-Now that you've reviewed the control mapping of the FedRAMP High blueprint, visit the following
-articles to learn about the blueprint and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [FedRAMP High blueprint - Overview](./index.md)
-> [FedRAMP High blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/deploy.md
- Title: Deploy FedRAMP High blueprint sample
-description: Deploy steps for the FedRAMP High blueprint sample including blueprint artifact parameter details.
Previously updated: 04/02/2021
-# Deploy the FedRAMP High blueprint sample
-
-To deploy the Azure Blueprints FedRAMP High blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **FedRAMP High** blueprint sample under _Other Samples_ and select **Use this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the FedRAMP High blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with FedRAMP High controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- modification later. Provide **Change notes** such as "First version published from the FedRAMP
- High blueprint sample." Then select **Publish** at the bottom of the page.
-
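If you prefer to script this step instead of using the portal, publishing is also exposed through the Azure Blueprints REST API. A minimal sketch, assuming the `requests` and `azure-identity` Python packages; the management group, blueprint name, and version label are placeholders you would replace.

```python
# Minimal sketch: publish a draft blueprint definition as a new version.
import requests
from azure.identity import DefaultAzureCredential

# Placeholders: where the blueprint definition was saved, its name, and the version label.
scope = "providers/Microsoft.Management/managementGroups/<management-group-id>"
blueprint_name = "<your-fedramp-high-copy>"
version = "1.0"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.put(
    f"https://management.azure.com/{scope}/providers/Microsoft.Blueprint"
    f"/blueprints/{blueprint_name}/versions/{version}",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2018-11-01-preview"},
    json={"properties": {"changeNotes": "First version published from the FedRAMP High blueprint sample."}},
)
resp.raise_for_status()
print("Published version:", resp.json()["name"])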
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
-    they're defined during the assignment of the blueprint. For a full list of artifact parameters
-    and their descriptions, see [Artifact parameters table](#artifact-parameters-table). A sketch of
-    supplying these artifact parameter values programmatically follows these steps.
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
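The same assignment can be scripted against the Azure Blueprints REST API. The sketch below is illustrative only: the artifact parameter key is a hypothetical placeholder rather than a real parameter name from this sample (the actual names appear in the artifact parameters table below), and the scope, region, and assignment name are assumptions you would replace.

```python
# Minimal sketch: assign a published blueprint version to a subscription.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"  # placeholder
mg_scope = "providers/Microsoft.Management/managementGroups/<management-group-id>"  # placeholder
blueprint_name, version = "<your-fedramp-high-copy>", "1.0"  # placeholders

body = {
    "identity": {"type": "SystemAssigned"},
    "location": "<region-for-managed-identity>",  # placeholder region
    "properties": {
        "blueprintId": f"/{mg_scope}/providers/Microsoft.Blueprint"
                       f"/blueprints/{blueprint_name}/versions/{version}",
        # Hypothetical artifact parameter key; use the names from the table below.
        "parameters": {"<artifact-parameter-name>": {"value": "<value>"}},
        "resourceGroups": {},
    },
}

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.put(
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.Blueprint/blueprintAssignments/assign-fedramp-high",
    headers={"Authorization": f"Bearer {token}"},
    params={"api-version": "2018-11-01-preview"},
    json=body,
)
resp.raise_for_status()
print("Assignment provisioning state:", resp.json()["properties"]["provisioningState"])
```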
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Allowed locations for resources and resource groups|List of Azure locations that your organization can specify when deploying resources. This provided value is also used by the 'Allowed locations' policy within the policy initiative.|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL managed instances|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on your SQL servers|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Vulnerability assessment should be enabled on Virtual Machines|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Geo-redundant storage should be enabled for Storage Accounts|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Geo-redundant backup should be enabled for Azure Database for MariaDB|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Geo-redundant backup should be enabled for Azure Database for MySQL|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Geo-redundant backup should be enabled for Azure Database for PostgreSQL|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Network Security Group rules for internet facing virtual machines should be hardened|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Web Application should only be accessible over HTTPS|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Function App should only be accessible over HTTPS|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|External accounts with write permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|External accounts with read permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|External accounts with owner permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Deprecated accounts with owner permissions should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Deprecated accounts should be removed from your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|CORS shouldn't allow every resource to access your Web Application|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|System updates on virtual machine scale sets should be installed|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|MFA should be enabled on accounts with read permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|MFA should be enabled on accounts with owner permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|MFA should be enabled on accounts with write permissions on your subscription|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|\[Preview\]: Audit FedRAMP High controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Long-term geo-redundant backup should be enabled for Azure SQL Databases|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-
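Several workspace parameters in the table above note that, when the Log Analytics workspace is outside the assignment's scope, you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID. The Python sketch below is one hedged way to create that role assignment through the Azure Resource Manager REST API; the subscription, resource group, workspace, and principal IDs are placeholders, and the built-in role GUID and API version are assumptions to verify in your environment.

```python
# Minimal sketch: grant 'Log Analytics Contributor' on a workspace to a policy
# assignment's managed identity. All identifiers below are placeholders.
import uuid
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "00000000-0000-0000-0000-000000000000"      # placeholder
principal_id = "11111111-1111-1111-1111-111111111111"         # policy assignment's principal ID
workspace_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/example-rg"
    "/providers/Microsoft.OperationalInsights/workspaces/example-workspace"
)
# Built-in 'Log Analytics Contributor' role GUID -- verify before relying on it.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/92aaf0da-9dab-42b6-94a3-d43ce8d16293"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com{workspace_scope}/providers/Microsoft.Authorization/"
    f"roleAssignments/{uuid.uuid4()}?api-version=2022-04-01"
)
body = {"properties": {"roleDefinitionId": role_definition_id, "principalId": principal_id}}
response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["id"])
```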
-## Next steps
-
-Now that you've reviewed the steps to deploy the FedRAMP High blueprint sample, visit the following
-articles to learn about the blueprint and control mapping:
-
-> [!div class="nextstepaction"]
-> [FedRAMP High blueprint - Overview](./index.md)
-> [FedRAMP High blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-h/index.md
- Title: FedRAMP High blueprint sample overview
-description: Overview of the FedRAMP High blueprint sample. This blueprint sample helps customers assess specific FedRAMP High controls.
Previously updated : 04/02/2021--
-# Overview of the FedRAMP High blueprint sample
-
-The FedRAMP High blueprint sample provides governance guardrails using
-[Azure Policy](../../../policy/overview.md) that help you assess specific FedRAMP High controls.
-This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
-that must implement FedRAMP High controls.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in FedRAMP High. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
-
-## Next steps
-
-You've reviewed the overview of the FedRAMP High blueprint sample. Next, visit the following
-articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [FedRAMP High blueprint - Control mapping](./control-mapping.md)
-> [FedRAMP High blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md
- Title: FedRAMP Moderate blueprint sample controls
-description: Control mapping of the FedRAMP Moderate blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Previously updated : 04/02/2021--
-# Control mapping of the FedRAMP Moderate blueprint sample
-
-The following article details how the Azure Blueprints FedRAMP Moderate blueprint sample maps to the
-FedRAMP Moderate controls. For more information about the controls, see
-[FedRAMP Security Controls Baseline](https://www.fedramp.gov/).
-
-The following mappings are to the **FedRAMP Moderate** controls. Use the navigation on the right to
-jump directly to a specific control mapping. Many of the mapped controls are implemented with an
-[Azure Policy](../../../policy/overview.md) initiative. To review the complete initiative, open
-**Policy** in the Azure portal and select the **Definitions** page. Then, find and select the
-**\[Preview\]: Audit FedRAMP Moderate controls and deploy specific VM Extensions to support audit
-requirements** built-in policy initiative.
-
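If you'd rather locate the initiative programmatically than through the portal **Definitions** page, the minimal Python sketch below lists the built-in policy set definitions and filters on the display name. It assumes the `azure-identity` and `azure-mgmt-resource` packages and uses a placeholder subscription ID.

```python
# Minimal sketch: find the FedRAMP Moderate built-in policy initiative by display name.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

for initiative in client.policy_set_definitions.list_built_in():
    if "FedRAMP Moderate" in (initiative.display_name or ""):
        # The short name is what you reference when assigning the initiative.
        print(initiative.name, "-", initiative.display_name)
```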
-> [!IMPORTANT]
-> Each control below is associated with one or more [Azure Policy](../../../policy/overview.md)
-> definitions. These policies may help you
-> [assess compliance](../../../policy/how-to/get-compliance-data.md) with the control; however,
-> there often is not a one-to-one or complete match between a control and one or more policies. As
-> such, **Compliant** in Azure Policy refers only to the policies themselves; this doesn't ensure
-> you're fully compliant with all requirements of a control. In addition, the compliance standard
-> includes controls that aren't addressed by any Azure Policy definitions at this time. Therefore,
-> compliance in Azure Policy is only a partial view of your overall compliance status. The
-> associations between controls and Azure Policy definitions for this compliance blueprint sample
-> may change over time. To view the change history, see the
-> [GitHub Commit History](https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/control-mapping.md).
-
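To see the partial compliance view that Azure Policy does provide, you can query the Policy Insights `summarize` endpoint yourself. The sketch below is a minimal example, assuming the `azure-identity` and `requests` packages and a placeholder subscription ID; the summary counters shown are the ones commonly returned and should be checked against the response for your tenant.

```python
# Minimal sketch: summarize latest policy compliance results for a subscription.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
    "?api-version=2019-10-01"
)
response = requests.post(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

summary = response.json()["value"][0]["results"]
print("Non-compliant resources:", summary["nonCompliantResources"])
print("Non-compliant policies:", summary["nonCompliantPolicies"])
```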
-## AC-2 Account Management
-
-This blueprint helps you review accounts that may not comply with your organization's account
-management requirements. This blueprint assigns [Azure Policy](../../../policy/overview.md)
-definitions that audit external accounts with read, write, and owner permissions on a subscription
-and deprecated accounts. By reviewing the accounts audited by these policies, you can take
-appropriate action to ensure account management requirements are met.
-
-- Deprecated accounts should be removed from your subscription
-- Deprecated accounts with owner permissions should be removed from your subscription
-- External accounts with owner permissions should be removed from your subscription
-- External accounts with read permissions should be removed from your subscription
-- External accounts with write permissions should be removed from your subscription
-
-## AC-2 (7) Account Management | Role-Based Schemes
-
-[Azure role-based access control (Azure RBAC)](../../../../role-based-access-control/overview.md)
-helps you manage who has access to resources in Azure. Using the Azure portal, you can review who
-has access to Azure resources and their permissions. This blueprint also assigns
-[Azure Policy](../../../policy/overview.md) definitions to audit use of Azure Active Directory
-authentication for SQL Servers and Service Fabric. Using Azure Active Directory authentication
-enables simplified permission management and centralized identity management of database users and
-other Microsoft services. Additionally, this blueprint assigns an Azure Policy definition to audit
-the use of custom Azure RBAC rules. Understanding where custom Azure RBAC rules are implemented can
-help you verify the need for them and their proper implementation, because custom Azure RBAC rules are error prone.
-
-- An Azure Active Directory administrator should be provisioned for SQL servers
-- Audit usage of custom RBAC rules
-- Service Fabric clusters should only use Azure Active Directory for client authentication
-
-## AC-2 (12) Account Management | Account Monitoring / Atypical Usage
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. All JIT
-requests to access virtual machines are logged in the Activity Log allowing you to monitor for
-atypical usage. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition
-that helps you monitor virtual machines that can support just-in-time access but haven't yet been
-configured.
-
-- Just-In-Time network access control should be applied on virtual machines
-
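Because JIT access requests land in the Activity Log, you can also review them outside the portal. The sketch below is one possible approach, assuming the `azure-identity` and `azure-mgmt-monitor` packages; the subscription ID, time window, and the operation-name match are placeholders and assumptions to adapt to your environment.

```python
# Minimal sketch: review recent Activity Log entries that look like JIT access requests.
from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)
odata_filter = (
    f"eventTimestamp ge '{start:%Y-%m-%dT%H:%M:%SZ}' "
    f"and eventTimestamp le '{end:%Y-%m-%dT%H:%M:%SZ}'"
)

for event in client.activity_logs.list(filter=odata_filter):
    operation = event.operation_name.value if event.operation_name else ""
    # Matching on 'jitNetworkAccessPolicies' is an assumption; adjust to your tenant's operations.
    if "jitNetworkAccessPolicies" in operation:
        print(event.event_timestamp, event.caller, operation)
```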
-## AC-4 Information Flow Enforcement
-
-Cross origin resource sharing (CORS) can allow App Services resources to be requested from an
-outside domain. Microsoft recommends that you allow only required domains to interact with your API,
-function, and web applications. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition to help you monitor CORS resources access
-restrictions in Azure Security Center. Understanding CORS implementations can help you verify that
-information flow controls are implemented.
-
-- CORS should not allow every resource to access your Web Application
-
-## AC-5 Separation of Duties
-
-Having only one Azure subscription owner doesn't allow for administrative redundancy. Conversely,
-having too many Azure subscription owners can increase the potential for a breach via a compromised
-owner account. This blueprint helps you maintain an appropriate number of Azure subscription owners
-by assigning [Azure Policy](../../../policy/overview.md) definitions that audit the number of owners
-for Azure subscriptions. This blueprint also assigns Azure Policy definitions that help you control
-membership of the Administrators group on Windows virtual machines. Managing subscription owner and
-virtual machine administrator permissions can help you implement appropriate separation of duties.
-
-- A maximum of 3 owners should be designated for your subscription
-- Audit Windows VMs in which the Administrators group contains any of the specified members
-- Audit Windows VMs in which the Administrators group does not contain all of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group contains any of the specified members
-- Deploy prerequisites to audit Windows VMs in which the Administrators group does not contain all of the specified members
-- There should be more than one owner assigned to your subscription
-
-## AC-17 (1) Remote Access | Automated Monitoring / Control
-
-This blueprint helps you monitor and control remote access by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor whether remote debugging for Azure
-App Service applications is turned off, and policy definitions that audit Linux virtual machines that
-allow remote connections from accounts without passwords. This blueprint also assigns an Azure
-Policy definition that helps you monitor unrestricted access to storage accounts. Monitoring these
-indicators can help you ensure remote access methods comply with your security policy.
-
-- \[Preview\]: Audit Linux VMs that allow remote connections from accounts without passwords
-- \[Preview\]: Deploy requirements to audit Linux VMs that allow remote connections from accounts without passwords
-- Audit unrestricted network access to storage accounts
-- Remote debugging should be turned off for API App
-- Remote debugging should be turned off for Function App
-- Remote debugging should be turned off for Web Application
-
-## AU-5 Response to Audit Processing Failures
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that monitor
-audit and event logging configurations. Monitoring these configurations can provide an indicator of
-an audit system failure or misconfiguration and help you take corrective action.
-
-- Audit diagnostic setting
-- Auditing on SQL server should be enabled
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-
-## AU-12 Audit Generation
-
-This blueprint helps you ensure system events are logged by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit log settings on Azure resources.
-These policy definitions audit and enforce deployment of the Log Analytics agent on Azure virtual
-machines and configuration of audit settings for other Azure resource types. These policy
-definitions also audit configuration of diagnostic logs to provide insight into operations that are
-performed within Azure resources. Additionally, auditing and Advanced Data Security are configured
-on SQL servers.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-- Audit diagnostic setting
-- Auditing on SQL server should be enabled
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy Auditing on SQL servers
-- Deploy Diagnostic Settings for Network Security Groups
-
-## CM-7 (2) Least Functionality | Prevent Program Execution
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control can run in an enforcement mode that prohibits non-approved
-applications from running. This blueprint assigns an Azure Policy definition that helps you monitor
-virtual machines where an application allowlist is recommended but has not yet been configured.
-
-- Adaptive Application Controls should be enabled on virtual machines
-
-## CM-7 (5) Least Functionality | Authorized Software / Whitelisting
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control helps you create approved application lists for your virtual
-machines. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-helps you monitor virtual machines where an application allowlist is recommended but has not yet
-been configured.
-
-- Adaptive Application Controls should be enabled on virtual machines
-
-## CM-11 User-Installed Software
-
-Adaptive application control in Azure Security Center is an intelligent, automated end-to-end
-application filtering solution that can block or prevent specific software from running on your
-virtual machines. Application control can help you enforce and monitor compliance with software
-restriction policies. This blueprint assigns an [Azure Policy](../../../policy/overview.md)
-definition that helps you monitor virtual machines where an application allowlist is recommended
-but has not yet been configured.
-
-- Adaptive Application Controls should be enabled on virtual machines
-
-## CP-7 Alternate Processing Site
-
-Azure Site Recovery replicates workloads running on virtual machines from a primary location to a
-secondary location. If an outage occurs at the primary site, the workload fails over to the secondary
-location. This blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that
-audits virtual machines without disaster recovery configured. Monitoring this indicator can help you
-ensure necessary contingency controls are in place.
-
-- Audit virtual machines without disaster recovery configured
-
-## IA-2 (1) Identification and Authentication (Organizational Users) | Network Access to Privileged Accounts
-
-This blueprint helps you restrict and control privileged access by assigning
-[Azure Policy](../../../policy/overview.md) definitions to audit accounts with owner and/or write
-permissions that don't have multi-factor authentication enabled. Multi-factor authentication helps
-keep accounts secure even if one piece of authentication information is compromised. By monitoring
-accounts without multi-factor authentication enabled, you can identify accounts that may be more
-likely to be compromised.
-
-- MFA should be enabled on accounts with owner permissions on your subscription
-- MFA should be enabled on accounts with write permissions on your subscription
-
-## IA-2 (2) Identification and Authentication (Organizational Users) | Network Access to Non-Privileged Accounts
-
-This blueprint helps you restrict and control access by assigning an
-[Azure Policy](../../../policy/overview.md) definition to audit accounts with read permissions that
-don't have multi-factor authentication enabled. Multi-factor authentication helps keep accounts
-secure even if one piece of authentication information is compromised. By monitoring accounts
-without multi-factor authentication enabled, you can identify accounts that may be more likely to be
-compromised.
-
-- MFA should be enabled on accounts with read permissions on your subscription
-
-## IA-5 Authenticator Management
-
-This blueprint assigns [Azure Policy](../../../policy/overview.md) definitions that audit Linux
-virtual machines that allow remote connections from accounts without passwords and/or have incorrect
-permissions set on the passwd file. This blueprint also assigns policy definitions that audit the
-configuration of the password encryption type for Windows virtual machines. Monitoring these
-indicators helps you ensure that system authenticators comply with your organization's
-identification and authentication policy.
-
-- \[Preview\]: Audit Linux VMs that do not have the passwd file permissions set to 0644
-- \[Preview\]: Audit Linux VMs that have accounts without passwords
-- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Linux VMs that do not have the passwd file permissions set to 0644
-- \[Preview\]: Deploy requirements to audit Linux VMs that have accounts without passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
-
-## IA-5 (1) Authenticator Management | Password-Based Authentication
-
-This blueprint helps you enforce strong passwords by assigning
-[Azure Policy](../../../policy/overview.md) definitions that audit Windows virtual machines that
-don't enforce minimum strength and other password requirements. Awareness of virtual machines in
-violation of the password strength policy helps you take corrective actions to ensure passwords for
-all virtual machine user accounts comply with your organization's password policy.
-
-- \[Preview\]: Audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Audit Windows VMs that do not have a minimum password age of 1 day
-- \[Preview\]: Audit Windows VMs that do not have the password complexity setting enabled
-- \[Preview\]: Audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Audit Windows VMs that do not store passwords using reversible encryption
-- \[Preview\]: Deploy requirements to audit Windows VMs that allow re-use of the previous 24 passwords
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a maximum password age of 70 days
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have a minimum password age of 1 day
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not have the password complexity setting enabled
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not restrict the minimum password length to 14 characters
-- \[Preview\]: Deploy requirements to audit Windows VMs that do not store passwords using reversible encryption
-
-## RA-5 Vulnerability Scanning
-
-This blueprint helps you manage information system vulnerabilities by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor operating system
-vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
-Azure Security Center provides reporting capabilities that enable you to have real-time insight into
-the security state of deployed Azure resources. This blueprint also assigns policy definitions that
-audit and enforce Advanced Data Security on SQL servers. Advanced data security includes
-vulnerability assessment and advanced threat protection capabilities to help you understand
-vulnerabilities in your deployed resources.
-
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your virtual machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SC-5 Denial of Service Protection
-
-Azure's distributed denial of service (DDoS) Standard tier provides additional features and
-mitigation capabilities over the basic service tier. These additional features include Azure Monitor
-integration and the ability to review post-attack mitigation reports. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that audits if the DDoS Standard tier is
-enabled. Understanding the capability difference between the service tiers can help you select the
-best solution to address denial of service protections for your Azure environment.
-
-- DDoS Protection Standard should be enabled
-
-## SC-7 Boundary Protection
-
-This blueprint helps you manage and control the system boundary by assigning an
-[Azure Policy](../../../policy/overview.md) definition that monitors for network security group
-hardening recommendations in Azure Security Center. Azure Security Center analyzes traffic patterns
-of Internet facing virtual machines and provides network security group rule recommendations to
-reduce the potential attack surface. Additionally, this blueprint also assigns policy definitions
-that monitor unprotected endpoints, applications, and storage accounts. Endpoints and applications
-that aren't protected by a firewall, and storage accounts with unrestricted access can allow
-unintended access to information contained within the information system.
-
-- Network Security Group Rules for Internet facing virtual machines should be hardened
-- Access through Internet facing endpoint should be restricted
-- Web ports should be restricted on Network Security Groups associated to your VM
-- Audit unrestricted network access to storage accounts
-
-## SC-7 (3) Boundary Protection | Access Points
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you limit the number of external connections to your resources in Azure. This
-blueprint assigns an [Azure Policy](../../../policy/overview.md) definition that helps you monitor
-virtual machines that can support just-in-time access but haven't yet been configured.
-
-- Just-In-Time network access control should be applied on virtual machines
-
-## SC-7 (4) Boundary Protection | External Telecommunications Services
-
-Just-in-time (JIT) virtual machine access locks down inbound traffic to Azure virtual machines,
-reducing exposure to attacks while providing easy access to connect to VMs when needed. JIT virtual
-machine access helps you manage exceptions to your traffic flow policy by facilitating the access
-request and approval processes. This blueprint assigns an
-[Azure Policy](../../../policy/overview.md) definition that helps you monitor virtual machines that
-can support just-in-time access but haven't yet been configured.
-
-- Just-In-Time network access control should be applied on virtual machines
-
-## SC-8 (1) Transmission Confidentiality and Integrity | Cryptographic or Alternate Physical Protection
-
-This blueprint helps you protect the confidentiality and integrity of transmitted information by
-assigning [Azure Policy](../../../policy/overview.md) definitions that help you monitor
-cryptographic mechanisms implemented for communications protocols. Ensuring communications are
-properly encrypted can help you meet your organization's requirements for protecting information from
-unauthorized disclosure and modification.
-
-- API App should only be accessible over HTTPS
-- Audit Windows web servers that are not using secure communication protocols
-- Deploy requirements to audit Windows web servers that are not using secure communication protocols
-- Function App should only be accessible over HTTPS
-- Only secure connections to your Redis Cache should be enabled
-- Secure transfer to storage accounts should be enabled
-- Web Application should only be accessible over HTTPS
-
-## SC-28 (1) Protection of Information at Rest | Cryptographic Protection
-
-This blueprint helps you enforce your policy on the use of cryptographic controls to protect
-information at rest by assigning [Azure Policy](../../../policy/overview.md) definitions that
-enforce specific cryptographic controls and audit use of weak cryptographic settings. Understanding
-where your Azure resources may have non-optimal cryptographic configurations can help you take
-corrective actions to ensure resources are configured in accordance with your information security
-policy. Specifically, the policy definitions assigned by this blueprint require encryption for data
-lake storage accounts; require transparent data encryption on SQL databases; and audit missing
-encryption on SQL databases, virtual machine disks, and automation account variables.
-
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy SQL DB transparent data encryption
-- Disk encryption should be applied on virtual machines
-- Require encryption on Data Lake Store accounts
-- Transparent Data Encryption on SQL databases should be enabled
-
-## SI-2 Flaw Remediation
-
-This blueprint helps you manage information system flaws by assigning
-[Azure Policy](../../../policy/overview.md) definitions that monitor missing system updates,
-operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure
-Security Center. Azure Security Center provides reporting capabilities that enable you to have
-real-time insight into the security state of deployed Azure resources. This blueprint also assigns a
-policy definition that ensures patching of the operating system for virtual machine scale sets.
-
-- Require automatic OS image patching on Virtual Machine Scale Sets
-- System updates on virtual machine scale sets should be installed
-- System updates should be installed on your virtual machines
-- Vulnerabilities in security configuration on your virtual machine scale sets should be remediated
-- Vulnerabilities in security configuration on your virtual machines should be remediated
-- Vulnerabilities on your SQL databases should be remediated
-- Vulnerabilities should be remediated by a Vulnerability Assessment solution
-
-## SI-3 Malicious Code Protection
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing endpoint
-protection on virtual machines in Azure Security Center and enforce the Microsoft antimalware
-solution on Windows virtual machines.
-
-- Deploy default Microsoft IaaSAntimalware extension for Windows Server
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-
-## SI-3 (1) Malicious Code Protection | Central Management
-
-This blueprint helps you manage endpoint protection, including malicious code protection, by
-assigning [Azure Policy](../../../policy/overview.md) definitions that monitor for missing endpoint
-protection on virtual machines in Azure Security Center. Azure Security Center provides centralized
-management and reporting capabilities that enable you to have real-time insight into the security
-state of deployed Azure resources.
-
-- Endpoint protection solution should be installed on virtual machine scale sets
-- Monitor missing Endpoint Protection in Azure Security Center
-
-## SI-4 Information System Monitoring
-
-This blueprint helps you monitor your system by auditing and enforcing logging and data security
-across Azure resources. Specifically, the policies assigned audit and enforce deployment of the Log
-Analytics agent, and enhanced security settings for SQL databases, storage accounts and network
-resources. These capabilities can help you detect anomalous behavior and indicators of attacks so
-you can take appropriate action.
-
-- \[Preview\]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Agent Deployment in VMSS - VM Image (OS) unlisted
-- \[Preview\]: Audit Log Analytics Workspace for VM - Report Mismatch
-- \[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)
-- \[Preview\]: Deploy Log Analytics Agent for Linux VMs
-- \[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)
-- \[Preview\]: Deploy Log Analytics Agent for Windows VMs
-- Advanced data security should be enabled on your managed instances
-- Advanced data security should be enabled on your SQL servers
-- Deploy Advanced Data Security on SQL servers
-- Deploy Advanced Threat Protection on Storage Accounts
-- Deploy Auditing on SQL servers
-- Deploy network watcher when virtual networks are created
-- Deploy Threat Detection on SQL servers
-
-> [!NOTE]
-> Availability of specific Azure Policy definitions may vary in Azure Government and other national
-> clouds.
-
-## Next steps
-
-Now that you've reviewed the control mapping of the FedRAMP Moderate blueprint, visit the following
-articles to learn about the blueprint and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [FedRAMP Moderate blueprint - Overview](./index.md)
-> [FedRAMP Moderate blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/deploy.md
- Title: Deploy FedRAMP Moderate blueprint sample
-description: Deploy steps for the FedRAMP Moderate blueprint sample including blueprint artifact parameter details.
Previously updated : 04/02/2021--
-# Deploy the FedRAMP Moderate blueprint sample
-
-To deploy the Azure Blueprints FedRAMP Moderate blueprint sample, the following steps must be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-## Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **FedRAMP Moderate** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the FedRAMP Moderate blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-## Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with FedRAMP Moderate controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful for if you make a
- **Version** for your copy of the blueprint sample. This property is useful if you make a
- Moderate blueprint sample." Then select **Publish** at the bottom of the page.
-
-## Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are
-provided to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- **Location**: Select a region for the managed identity to be created in. Azure Blueprints uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see [blueprints resource locking](../../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../../concepts/parameters.md#dynamic-parameters) since
- they're defined during the assignment of the blueprint. For a full list of artifact parameters
- and their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
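Because deployment can take roughly an hour, you may prefer to poll the assignment rather than watch the portal. Below is a minimal sketch, assuming the `azure-identity` and `requests` packages, a placeholder subscription ID, and whatever assignment name you chose earlier; the API version and response fields shown should be verified against the Azure Blueprints REST reference.

```python
# Minimal sketch: check the provisioning state of a blueprint assignment.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
assignment_name = "assignment-fedramp-moderate"            # name used when assigning
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Blueprint/blueprintAssignments/{assignment_name}"
    "?api-version=2018-11-01-preview"
)
response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
# Expected states include values such as 'creating', 'deploying', and 'succeeded'.
print(response.json()["properties"]["provisioningState"])
```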
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/)
-> to estimate the cost of running resources deployed by this blueprint sample.
-
-## Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: Audit FedRAMP Moderate controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: Audit FedRAMP Moderate controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|\[Preview\]: Audit FedRAMP Moderate controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Audit FedRAMP Moderate controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../../policy/concepts/effects.md).|
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
-
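To make the expected value shapes in the table above concrete, the illustration below shows an empty array for the optional VM-image lists and semicolon-separated strings for the Administrators-group lists, wrapped in the `{"value": ...}` form used for assignment parameters. The parameter keys are hypothetical placeholders and do not correspond to the blueprint's actual internal parameter names.

```python
# Illustration only: value shapes for the artifact parameters described above.
# The keys are hypothetical placeholders, not the blueprint's actual parameter names.
artifact_parameter_values = {
    "logAnalyticsWorkspaceIdForVMs": {"value": "00000000-0000-0000-0000-000000000000"},
    "listOfResourceTypesWithDiagnosticLogs": {"value": ["Microsoft.KeyVault/vaults"]},
    "membersToExcludeFromAdminsGroup": {"value": "Administrator; myUser1; myUser2"},
    "membersToIncludeInAdminsGroup": {"value": "Administrator; myUser1; myUser2"},
    # An empty array means 'no optional VM images to add to scope'.
    "optionalLinuxVmImagesToScope": {"value": []},
    "optionalWindowsVmImagesToScope": {"value": []},
}
```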
-## Next steps
-
-Now that you've reviewed the steps to deploy the FedRAMP Moderate blueprint sample, visit the
-following articles to learn about the blueprint and control mapping:
-
-> [!div class="nextstepaction"]
-> [FedRAMP Moderate blueprint - Overview](./index.md)
-> [FedRAMP Moderate blueprint - Control mapping](./control-mapping.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/fedramp-m/index.md
- Title: FedRAMP Moderate blueprint sample overview
-description: Overview of the FedRAMP Moderate blueprint sample. This blueprint sample helps customers assess specific FedRAMP Moderate controls.
Previously updated : 04/02/2021--
-# Overview of the FedRAMP Moderate blueprint sample
-
-The FedRAMP Moderate blueprint sample provides governance guardrails using
-[Azure Policy](../../../policy/overview.md) that help you assess specific FedRAMP Moderate controls.
-This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
-that must implement FedRAMP Moderate controls.
-
-## Control mapping
-
-The control mapping section provides details on policies included within this blueprint and how
-these policies address various controls in FedRAMP Moderate. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policies. For more
-information, see [Azure Policy](../../../policy/overview.md).
-
-## Next steps
-
-You've reviewed the overview of the FedRAMP Moderate blueprint sample. Next, visit the
-following articles to learn about the control mapping and how to deploy this sample:
-
-> [!div class="nextstepaction"]
-> [FedRAMP Moderate blueprint - Control mapping](./control-mapping.md)
-> [FedRAMP Moderate blueprint - Deploy steps](./deploy.md)
-
-Additional articles about blueprints and how to use them:
-
-- Learn about the [blueprint lifecycle](../../concepts/lifecycle.md).
-- Understand how to use [static and dynamic parameters](../../concepts/parameters.md).
-- Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md).
-- Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).
-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/hipaa-hitrust-9-2.md
Title: HIPAA HITRUST 9.2 blueprint sample overview description: Overview of the HIPAA HITRUST 9.2 blueprint sample. This blueprint sample helps customers assess specific HIPAA HITRUST 9.2 controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # HIPAA HITRUST 9.2 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/index.md
quality and ready to deploy today to assist you in meeting your various complian
| Sample | Description |
|---|---|
| [Australian Government ISM PROTECTED](./ism-protected/index.md) | Provides guardrails for compliance to Australian Government ISM PROTECTED. |
-| [Azure Security Benchmark](./azure-security-benchmark.md) | Provides guardrails for compliance to [Azure Security Benchmark](../../../security/benchmarks/overview.md). |
| [Azure Security Benchmark Foundation](./azure-security-benchmark-foundation/index.md) | Deploys and configures Azure Security Benchmark Foundation. |
| [Canada Federal PBMM](./canada-federal-pbmm.md) | Provides guardrails for compliance to Canada Federal Protected B, Medium Integrity, Medium Availability (PBMM). |
| [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.3.0 recommendations. |
| [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md) | Provides a set of policies to help comply with CIS Microsoft Azure Foundations Benchmark v1.1.0 recommendations. |
| [CMMC Level 3](./cmmc-l3.md) | Provides guardrails for compliance with CMMC Level 3. |
-| [DoD Impact Level 4](./dod-impact-level-4/index.md) | Provides a set of policies to help comply with DoD Impact Level 4. |
-| [DoD Impact Level 5](./dod-impact-level-5/index.md) | Provides a set of policies to help comply with DoD Impact Level 5. |
-| [FedRAMP Moderate](./fedramp-m/index.md) | Provides a set of policies to help comply with FedRAMP Moderate. |
-| [FedRAMP High](./fedramp-h/index.md) | Provides a set of policies to help comply with FedRAMP High. |
| [HIPAA HITRUST 9.2](./hipaa-hitrust-9-2.md) | Provides a set of policies to help comply with HIPAA HITRUST. |
| [IRS 1075 September 2016](./irs-1075-sept2016.md) | Provides guardrails for compliance with IRS 1075. |
| [ISO 27001](./iso-27001-2013.md) | Provides guardrails for compliance with ISO 27001. |
quality and ready to deploy today to assist you in meeting your various complian
| [ISO 27001 App Service Environment/SQL Database workload](./iso27001-ase-sql-workload/index.md) | Provides more infrastructure to the [ISO 27001 Shared Services](./iso27001-shared/index.md) blueprint sample. |
| [Media](./medi) | Provides a set of policies to help comply with Media MPAA. |
| [New Zealand ISM Restricted](./new-zealand-ism.md) | Assigns policies to address specific New Zealand Information Security Manual controls. |
-| [NIST SP 800-53 R4](./nist-sp-800-53-r4.md) | Provides guardrails for compliance with NIST SP 800-53 R4. |
| [NIST SP 800-171 R2](./nist-sp-800-171-r2.md) | Provides guardrails for compliance with NIST SP 800-171 R2. |
| [PCI-DSS v3.2.1](./pci-dss-3.2.1/index.md) | Provides a set of policies to aid in PCI-DSS v3.2.1 compliance. |
| [SWIFT CSP-CSCF v2020](./swift-2020/index.md) | Aids in SWIFT CSP-CSCF v2020 compliance. |
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/irs-1075-sept2016.md
Title: IRS 1075 September 2016 blueprint sample description: Overview of the IRS 1075 September 2016 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 05/04/2021 Last updated : 09/08/2021 # IRS 1075 September 2016 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/control-mapping.md
Title: Australian Government ISM PROTECTED blueprint sample controls description: Control mapping of the Australian Government ISM PROTECTED blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Control mapping of the Australian Government ISM PROTECTED blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/deploy.md
Title: Deploy Australian Government ISM PROTECTED blueprint sample description: Deploy steps for the Australian Government ISM PROTECTED blueprint sample including blueprint artifact parameter details. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Deploy the Australian Government ISM PROTECTED blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ism-protected/index.md
Title: Australian Government ISM PROTECTED blueprint sample overview description: Overview of the Australian Government ISM PROTECTED blueprint sample. This blueprint sample helps customers assess specific ISM PROTECTED controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Overview of the Australian Government ISM PROTECTED blueprint sample
governance Iso 27001 2013 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso-27001-2013.md
Title: ISO 27001 blueprint sample overview description: Overview of the ISO 27001 blueprint sample. This blueprint sample helps customers assess specific ISO 27001 controls. Previously updated : 05/01/2021 Last updated : 09/08/2021 # ISO 27001 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md
Title: ISO 27001 ASE/SQL workload blueprint sample controls description: Control mapping of the ISO 27001 App Service Environment/SQL Database workload blueprint sample to Azure Policy and Azure RBAC. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Control mapping of the ISO 27001 ASE/SQL workload blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/deploy.md
Title: Deploy ISO 27001 ASE/SQL workload blueprint sample description: Deploy steps of the ISO 27001 App Service Environment/SQL Database workload blueprint sample including blueprint artifact parameter details. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Deploy the ISO 27001 App Service Environment/SQL Database workload blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-ase-sql-workload/index.md
Title: ISO 27001 ASE/SQL workload blueprint sample overview description: Overview and architecture of the ISO 27001 App Service Environment/SQL Database workload blueprint sample. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Overview of the ISO 27001 App Service Environment/SQL Database workload blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md
Title: ISO 27001 Shared Services blueprint sample controls description: Control mapping of the ISO 27001 Shared Services blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Control mapping of the ISO 27001 Shared Services blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/deploy.md
Title: Deploy ISO 27001 Shared Services blueprint sample description: Deploy steps for the ISO 27001 Shared Services blueprint sample including blueprint artifact parameter details. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Deploy the ISO 27001 Shared Services blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/iso27001-shared/index.md
Title: ISO 27001 Shared Services blueprint sample overview description: Overview and architecture of the ISO 27001 Shared Services blueprint sample. This blueprint sample helps customers assess specific ISO 27001 controls. Previously updated : 04/30/2021 Last updated : 09/08/2021 # Overview of the ISO 27001 Shared Services blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/control-mapping.md
Title: Media blueprint sample controls description: Control mapping of the Media blueprint samples. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/13/2021 Last updated : 09/08/2021 # Control mapping of the Media blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/deploy.md
Title: Deploy Media blueprint sample description: Deploy steps for the Media blueprint sample including blueprint artifact parameter details. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Deploy the Media blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/media/index.md
Title: Media blueprint sample overview description: Overview of the Media blueprint sample. This blueprint sample helps customers assess specific Media controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Overview of the Media blueprint sample
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/new-zealand-ism.md
Title: New Zealand ISM Restricted blueprint sample description: Overview of the New Zealand ISM Restricted blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 03/22/2021 Last updated : 09/08/2021 # New Zealand ISM Restricted blueprint sample
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-171-r2.md
Title: NIST SP 800-171 R2 blueprint sample overview description: Overview of the NIST SP 800-171 R2 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-171 R2 requirements or controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # NIST SP 800-171 R2 blueprint sample
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/nist-sp-800-53-r4.md
- Title: NIST SP 800-53 R4 blueprint sample overview
-description: Overview of the NIST SP 800-53 R4 blueprint sample. This blueprint sample helps customers assess specific NIST SP 800-53 R4 controls.
Previously updated : 04/02/2021--
-# NIST SP 800-53 R4 blueprint sample
-
-The NIST SP 800-53 R4 blueprint sample provides governance guardrails using
-[Azure Policy](../../policy/overview.md) that help you assess specific NIST SP 800-53 R4 controls.
-This blueprint helps customers deploy a core set of policies for any Azure-deployed architecture
-that must implement NIST SP 800-53 R4 controls.
-
-## Control mapping
-
-The [Azure Policy control mapping](../../policy/samples/nist-sp-800-53-r4.md) provides details on
-policy definitions included within this blueprint and how these policy definitions map to the
-**compliance domains** and **controls** in NIST SP 800-53 R4. When assigned to an architecture,
-resources are evaluated by Azure Policy for non-compliance with assigned policy definitions. For
-more information, see [Azure Policy](../../policy/overview.md).
-
-## Deploy
-
-To deploy the Azure Blueprints NIST SP 800-53 R4 blueprint sample, the following steps must
-be taken:
-
-> [!div class="checklist"]
-> - Create a new blueprint from the sample
-> - Mark your copy of the sample as **Published**
-> - Assign your copy of the blueprint to an existing subscription
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
-before you begin.
-
-### Create blueprint from sample
-
-First, implement the blueprint sample by creating a new blueprint in your environment using the
-sample as a starter.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. From the **Getting started** page on the left, select the **Create** button under _Create a
- blueprint_.
-
-1. Find the **NIST SP 800-53 R4** blueprint sample under _Other Samples_ and select **Use
- this sample**.
-
-1. Enter the _Basics_ of the blueprint sample:
-
- - **Blueprint name**: Provide a name for your copy of the NIST SP 800-53 R4 blueprint sample.
- - **Definition location**: Use the ellipsis and select the management group to save your copy of
- the sample to.
-
-1. Select the _Artifacts_ tab at the top of the page or **Next: Artifacts** at the bottom of the
- page.
-
-1. Review the list of artifacts that make up the blueprint sample. Many of the artifacts have
- parameters that we'll define later. Select **Save Draft** when you've finished reviewing the
- blueprint sample.
-
-### Publish the sample copy
-
-Your copy of the blueprint sample has now been created in your environment. It's created in
-**Draft** mode and must be **Published** before it can be assigned and deployed. The copy of the
-blueprint sample can be customized to your environment and needs, but that modification may move it
-away from alignment with NIST SP 800-53 controls.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Publish blueprint** at the top of the page. In the new page on the right, provide a
- **Version** for your copy of the blueprint sample. This property is useful for if you make a
- modification later. Provide **Change notes** such as "First version published from the NIST SP
- 800-53 R4 blueprint sample." Then select **Publish** at the bottom of the page.
-
-### Assign the sample copy
-
-Once the copy of the blueprint sample has been successfully **Published**, it can be assigned to a
-subscription within the management group it was saved to. This step is where parameters are provided
-to make each deployment of the copy of the blueprint sample unique.
-
-1. Select **All services** in the left pane. Search for and select **Blueprints**.
-
-1. Select the **Blueprint definitions** page on the left. Use the filters to find your copy of the
- blueprint sample and then select it.
-
-1. Select **Assign blueprint** at the top of the blueprint definition page.
-
-1. Provide the parameter values for the blueprint assignment:
-
- - Basics
-
- - **Subscriptions**: Select one or more of the subscriptions that are in the management group
- you saved your copy of the blueprint sample to. If you select more than one subscription, an
- assignment will be created for each using the parameters entered.
- - **Assignment name**: The name is pre-populated for you based on the name of the blueprint.
- Change as needed or leave as is.
- - **Location**: Select a region for the managed identity to be created in. Azure Blueprint uses
- this managed identity to deploy all artifacts in the assigned blueprint. To learn more, see
- [managed identities for Azure resources](../../../active-directory/managed-identities-azure-resources/overview.md).
- - **Blueprint definition version**: Pick a **Published** version of your copy of the blueprint
- sample.
-
- - Lock Assignment
-
- Select the blueprint lock setting for your environment. For more information, see
- [blueprints resource locking](../concepts/resource-locking.md).
-
- - Managed Identity
-
- Leave the default _system assigned_ managed identity option.
-
- - Artifact parameters
-
- The parameters defined in this section apply to the artifact under which it's defined. These
- parameters are [dynamic parameters](../concepts/parameters.md#dynamic-parameters) since they're
- defined during the assignment of the blueprint. For a full list or artifact parameters and
- their descriptions, see [Artifact parameters table](#artifact-parameters-table).
-
-1. Once all parameters have been entered, select **Assign** at the bottom of the page. The blueprint
- assignment is created and artifact deployment begins. Deployment takes roughly an hour. To check
- on the status of deployment, open the blueprint assignment.
-
-> [!WARNING]
-> The Azure Blueprints service and the built-in blueprint samples are **free of cost**. Azure
-> resources are [priced by product](https://azure.microsoft.com/pricing/). Use the
-> [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the cost of
-> running resources deployed by this blueprint sample.
-
-### Artifact parameters table
-
-The following table provides a list of the blueprint artifact parameters:
-
-|Artifact name|Artifact type|Parameter name|Description|
-|-|-|-|-|
-|\[Preview\]: Audit NIST SP 800-53 R4 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|Log Analytics workspace ID that VMs should be configured for|This is the ID (GUID) of the Log Analytics workspace that the VMs should be configured for.|
-|\[Preview\]: Audit NIST SP 800-53 R4 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of resource types that should have diagnostic logs enabled|List of resource types to audit if diagnostic log setting is not enabled. Acceptable values can be found at [Azure Monitor diagnostic logs schemas](../../../azure-monitor/essentials/resource-logs-schema.md#service-specific-schemas).|
-|\[Preview\]: Audit NIST SP 800-53 R4 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be excluded from Windows VM Administrators group|A semicolon-separated list of members that should be excluded in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Audit NIST SP 800-53 R4 controls and deploy specific VM Extensions to support audit requirements|Policy assignment|List of users that should be included in Windows VM Administrators group|A semicolon-separated list of members that should be included in the Administrators local group. Ex: Administrator; myUser1; myUser2|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Linux VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Log Analytics workspace for Linux VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Linux VMs|Policy assignment|Optional: List of VM images that have supported Linux OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Log Analytics workspace for Windows VM Scale Sets (VMSS)|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VM Scale Sets (VMSS)|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Log Analytics workspace for Windows VMs|If this workspace is outside of the scope of the assignment you must manually grant 'Log Analytics Contributor' permissions (or similar) to the policy assignment's principal ID.|
-|\[Preview\]: Deploy Log Analytics Agent for Windows VMs|Policy assignment|Optional: List of VM images that have supported Windows OS to add to scope|An empty array may be used to indicate no optional parameters: \[\]|
-|Deploy Advanced Threat Protection on Storage Accounts|Policy assignment|Effect|Information about policy effects can be found at [Understand Azure Policy Effects](../../policy/concepts/effects.md) |
-|Deploy Auditing on SQL servers|Policy assignment|The value in days of the retention period (0 indicates unlimited retention) |Retention days (optional, 180 days if unspecified) |
-|Deploy Auditing on SQL servers|Policy assignment|Resource group name for storage account for SQL server auditing|Auditing writes database events to an audit log in your Azure Storage account (a storage account will be created in each region where a SQL Server is created that will be shared by all servers in that region). Important - for proper operation of Auditing do not delete or rename the resource group or the storage accounts.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Storage account prefix for network security group diagnostics|This prefix will be combined with the network security group location to form the created storage account name.|
-|Deploy diagnostic settings for Network Security Groups|Policy assignment|Resource group name for storage account for network security group diagnostics (must exist) |The resource group that the storage account will be created in. This resource group must already exist.|
-
-## Next steps
-
-Additional articles about blueprints and how to use them:
--- Learn about the [blueprint lifecycle](../concepts/lifecycle.md).-- Understand how to use [static and dynamic parameters](../concepts/parameters.md).-- Learn to customize the [blueprint sequencing order](../concepts/sequencing-order.md).-- Find out how to make use of [blueprint resource locking](../concepts/resource-locking.md).-- Learn how to [update existing assignments](../how-to/update-existing-assignments.md).
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/control-mapping.md
Title: PCI-DSS v3.2.1 blueprint sample controls description: Control mapping of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample to Azure Policy and Azure RBAC. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Control mapping of the PCI-DSS v3.2.1 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/deploy.md
Title: Deploy PCI-DSS v3.2.1 blueprint sample description: Deploy steps for the Payment Card Industry Data Security Standard v3.2.1 blueprint sample including blueprint artifact parameter details. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Deploy the PCI-DSS v3.2.1 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/pci-dss-3.2.1/index.md
Title: PCI-DSS v3.2.1 blueprint sample overview description: Overview of the Payment Card Industry Data Security Standard v3.2.1 blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Overview of the PCI-DSS v3.2.1 blueprint sample
governance Control Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/control-mapping.md
Title: SWIFT CSP-CSCF v2020 blueprint sample controls description: Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Control mapping of the SWIFT CSP-CSCF v2020 blueprint sample
governance Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/deploy.md
Title: Deploy SWIFT CSP-CSCF v2020 blueprint sample description: Deploy steps for the SWIFT CSP-CSCF v2020 blueprint sample including blueprint artifact parameter details. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Deploy the SWIFT CSP-CSCF v2020 blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/swift-2020/index.md
Title: SWIFT CSP-CSCF v2020 blueprint sample overview description: Overview of the SWIFT CSP-CSCF v2020 blueprint sample. This blueprint sample helps customers assess specific SWIFT CSP-CSCF controls. Previously updated : 04/02/2021 Last updated : 09/08/2021 # Overview of the SWIFT CSP-CSCF v2020 blueprint sample
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/blueprints/samples/ukofficial-uknhs.md
Title: UK OFFICIAL and UK NHS blueprint sample description: Overview of the UK OFFICIAL and UK NHS blueprint sample. This blueprint sample helps customers assess specific controls. Previously updated : 05/04/2021 Last updated : 09/08/2021 # UK OFFICIAL and UK NHS blueprint sample
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/samples/index.md
Title: Index of policy samples description: Index of built-ins for Azure Policy. Categories Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/13/2021 Last updated : 09/08/2021 # Azure Policy Samples
The following are the [Regulatory Compliance](../concepts/regulatory-compliance.
Azure: - [Australian Government ISM PROTECTED](./australia-ism.md)-- [Azure Security Benchmark v2](./azure-security-benchmark.md)-- [Azure Security Benchmark v1](./azure-security-benchmarkv1.md)
+- [Azure Security Benchmark](./azure-security-benchmark.md)
- [Canada Federal PBMM](./canada-federal-pbmm.md) - [CIS Microsoft Azure Foundations Benchmark v1.3.0](./cis-azure-1-3-0.md) - [CIS Microsoft Azure Foundations Benchmark v1.1.0](./cis-azure-1-1-0.md)
Azure:
The following are the [Regulatory Compliance](../concepts/regulatory-compliance.md) built-ins in Azure Government: -- [Azure Security Benchmark v2](./gov-azure-security-benchmark.md)
+- [Azure Security Benchmark](./gov-azure-security-benchmark.md)
- [CIS Microsoft Azure Foundations Benchmark v1.3.0](./gov-cis-azure-1-3-0.md) - [CIS Microsoft Azure Foundations Benchmark v1.1.0](./gov-cis-azure-1-1-0.md) - [CMMC Level 3](./gov-cmmc-l3.md)
hdinsight Service Endpoint Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/service-endpoint-policies.md
Title: Configure service endpoint policies - Azure HDInsight
description: Learn how to configure service endpoint policies for your virtual network with Azure HDInsight. Previously updated : 07/15/2020 Last updated : 09/13/2021 # Configure virtual network service endpoint policies for Azure HDInsight
Use the following process to create the necessary service endpoint policies:
"/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG", "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral", "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG"
+ "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro"
], ```
Use the following process to create the necessary service endpoint policies:
az network service-endpoint policy create -g $rgName -n $sepName -l $location # Insert the list of HDInsight owned resources for the region your clusters will be created in.
+ # Be sure to get the most recent list of resource groups from the [list of service endpoint policy resources](https://github.com/Azure-Samples/hdinsight-enterprise-security/blob/main/hdinsight-service-endpoint-policy-resources.json)
[String[]]$resources = @("/subscriptions/235d341f-7fb9-435c-9bdc-034b7306c9b4/resourceGroups/Default-Storage-WestUS",` "/subscriptions/da0c4c68-9283-4f88-9c35-18f7bd72fbdd/resourceGroups/GenevaWarmPathManageRG",` "/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG",` "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral",` "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",`
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG")
+ "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
#Assign service resources to the SEP policy. az network service-endpoint policy-definition create -g $rgName --policy-name $sepName -n $sepDefName --service "Microsoft.Storage" --service-resources $resources
Use the following process to create the necessary service endpoint policies:
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet # Insert the list of HDInsight owned resources for the region your clusters will be created in.
+ # Be sure to get the most recent list of resource groups from the [list of service endpoint policy resources](https://github.com/Azure-Samples/hdinsight-enterprise-security/blob/main/hdinsight-service-endpoint-policy-resources.json)
[String[]]$resources = @("/subscriptions/235d341f-7fb9-435c-9bdc-034b7306c9b4/resourceGroups/Default-Storage-WestUS", "/subscriptions/da0c4c68-9283-4f88-9c35-18f7bd72fbdd/resourceGroups/GenevaWarmPathManageRG", "/subscriptions/6a853a41-3423-4167-8d9c-bcf37dc72818/resourceGroups/GenevaWarmPathManageRG", "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/Default-Storage-CanadaCentral", "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/cancstorage",
- "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG")
+ "/subscriptions/c8845df8-14d1-4a46-b6dd-e0c44ae400b0/resourceGroups/GenevaWarmPathManageRG",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/hdi31distrorelease",
+ "/subscriptions/fb3429ab-83d0-4bed-95e9-1a8e9455252c/resourceGroups/DistroStorageRG/providers/Microsoft.Storage/storageAccounts/bigdatadistro")
#Declare service endpoint policy definition $sepDef = New-AzServiceEndpointPolicyDefinition -Name "SEPHDICanadaCentral" -Description "Service Endpoint Policy Definition" -Service "Microsoft.Storage" -ServiceResource $resources
Use the following process to create the necessary service endpoint policies:
# Associate a subnet to the service endpoint policy just created. If there is a delay in updating it to subnet, you can use the Azure portal to associate the policy with the subnet. Set-AzVirtualNetworkSubnetConfig -Name $subnetName -VirtualNetwork $vnet -AddressPrefix $subnet.AddressPrefix -ServiceEndpointPolicy $sep ```
+> [!IMPORTANT]
+> We recommend that you retrieve the latest [list of service endpoint policy resources](https://github.com/Azure-Samples/hdinsight-enterprise-security/blob/main/hdinsight-service-endpoint-policy-resources.json)
+> on a scheduled basis, either manually or through automation. Doing so prevents CRUD issues when resource groups are added to or removed from the JSON file.
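As one possible automation sketch for the preceding note, the following Python script downloads the published resource list and reports when it has changed since the last run. The raw-content URL form, the local cache file name, and the change-handling logic are assumptions for illustration only; substitute whatever storage and alerting your own process uses.

```python
import hashlib
import pathlib
import urllib.request

# Assumed raw-content form of the service endpoint policy resources link above.
RESOURCES_URL = (
    "https://raw.githubusercontent.com/Azure-Samples/hdinsight-enterprise-security/"
    "main/hdinsight-service-endpoint-policy-resources.json"
)
CACHE_FILE = pathlib.Path("sep-resources.json")  # local copy from the previous run (placeholder path)


def fetch_resources() -> bytes:
    """Download the current service endpoint policy resources JSON."""
    with urllib.request.urlopen(RESOURCES_URL) as response:
        return response.read()


def main() -> None:
    latest = fetch_resources()
    previous = CACHE_FILE.read_bytes() if CACHE_FILE.exists() else b""

    if hashlib.sha256(latest).hexdigest() != hashlib.sha256(previous).hexdigest():
        # The list changed: save the new copy and signal that the
        # service endpoint policy definitions need to be updated.
        CACHE_FILE.write_bytes(latest)
        print("Resource list changed; update your service endpoint policy definitions.")
    else:
        print("No changes to the service endpoint policy resource list.")


if __name__ == "__main__":
    main()
```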
+ ## Next steps
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Previously updated : 08/04/2021 Last updated : 08/23/2021
> [!IMPORTANT] > Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This tutorial provides an overview of how to use the DICOMweb&trade; Standard APIs with the DICOM Services.
+This tutorial provides an overview of how to use the DICOMweb&trade; Standard APIs with the DICOM service.
-The DICOM service supports a subset of the DICOMweb&trade; Standard. This support includes the following:
+The DICOM service supports a subset of the DICOMweb&trade; Standard that includes the following:
* Store (STOW-RS) * Retrieve (WADO-RS)
Once deployment is complete, you can use the Azure portal to navigate to the new
Because the DICOM service is exposed as a REST API, you can access it using any modern development language. For language-agnostic information on working with the service, see [DICOM Conformance Statement](dicom-services-conformance-statement.md).
-To see language-specific examples, refer to the examples below. If you open the Postman Collection, you can view examples in several languages including Go, Java, JavaScript, C#, PHP, C, NodeJS, Objective-C, OCaml, PowerShell, Python, Ruby, and Swift.
+To see language-specific examples, refer to the examples below. You can view Postman collection examples in several languages including:
+
+* Go
+* Java
+* JavaScript
+* C#
+* PHP
+* C
+* NodeJS
+* Objective-C
+* OCaml
+* PowerShell
+* Python
+* Ruby
+* Swift
### C#
cURL is a common command-line tool for calling web endpoints that is available f
To learn how to use cURL with the DICOM service, see the [Using DICOMWeb&trade; Standard APIs with cURL](dicomweb-standard-apis-curl.md) tutorial.
-### Phyton
+### Python
Refer to the [Using DICOMWeb&trade; Standard APIs with Python](dicomweb-standard-apis-python.md) tutorial to learn how to use Python with the DICOM service.
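To complement that tutorial, here's a minimal sketch of a QIDO-RS study search with the Python `requests` library. The service URL and the access token are placeholders, and the `/studies` resource, the `limit` parameter, and the `application/dicom+json` media type come from the DICOMweb&trade; Standard rather than anything specific to this article.

```python
import requests

# Placeholders: replace with your DICOM service URL and a valid Azure AD access token.
dicom_service_url = "https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
access_token = "<access-token>"

# QIDO-RS: search for studies, limiting the response to the first 10 results.
response = requests.get(
    f"{dicom_service_url}/studies",
    params={"limit": 10},
    headers={
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/dicom+json",
    },
)
response.raise_for_status()

for study in response.json():
    # Tag 0020000D is the Study Instance UID in the DICOM JSON model.
    print(study.get("0020000D", {}).get("Value"))
```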
This tutorial provided an overview of the APIs supported by the DICOM service. G
### Next Steps
-For more information about DICOM service, see
+For more information, see
>[!div class="nextstepaction"] >[Overview of the DICOM service](dicom-services-overview.md)
industry Rest Api In Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industry/agriculture/rest-api-in-azure-farmbeats.md
Here are the most common request headers that you must specify when you make an
**Header** | **Description and example** |
-Content-Type | The request format (Content-Type: application/<format>). For Azure FarmBeats Datahub APIs, the format is JSON. Content-Type: application/json
-Authorization | Specifies the access token required to make an API call. Authorization: Bearer <Access-Token>
+Content-Type | The request format (Content-Type: application/\<format\>). For Azure FarmBeats Datahub APIs, the format is JSON. Content-Type: application/json
+Authorization | Specifies the access token required to make an API call. Authorization: Bearer \<Access-Token\>
Accept | The response format. For Azure FarmBeats Datahub APIs, the format is JSON. Accept: application/json ### API requests To make a REST API request, you combine the HTTP (GET, POST, PUT, or DELETE) method, the URL to the API service, the URI to a resource to query, submit data to, update, or delete, and then add one or more HTTP request headers.
-The URL to the API service is your Datahub URL, for example, https://\<yourdatahub-website-name>.azurewebsites.net.
+The URL to the API service is your Datahub URL, for example, `https://<yourdatahub-website-name>.azurewebsites.net`.
Optionally, you can include query parameters on GET calls to filter, limit the size of, and sort the data in the responses.
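For illustration, the following Python sketch combines these pieces: the Datahub URL, the request headers from the preceding table, and an optional query parameter. The `/Farms` resource path and the `$top` parameter are hypothetical placeholders for this example, not confirmed endpoint names; substitute the resource and parameters that your scenario needs.

```python
import requests

# Placeholders: your Datahub URL and a valid access token.
datahub_url = "https://<yourdatahub-website-name>.azurewebsites.net"
access_token = "<Access-Token>"

response = requests.get(
    f"{datahub_url}/Farms",      # hypothetical resource path, for illustration only
    params={"$top": 10},         # hypothetical query parameter to limit the response size
    headers={
        "Content-Type": "application/json",   # relevant for requests that send a body
        "Authorization": f"Bearer {access_token}",
        "Accept": "application/json",
    },
)
response.raise_for_status()
print(response.json())
```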
Azure FarmBeats APIs can be accessed by a user or an app registration in Azure A
- Go back to **Overview**, and select the link next to **Manage Application in local directory**. - Go to **Properties** to capture the **Object ID**.
-4. Go to your Datahub Swagger (https://<yourdatahub>.azurewebsites.net/swagger/https://docsupdatetracker.net/index.html) and do the following:
+4. Go to your Datahub Swagger (`https://<yourdatahub>.azurewebsites.net/swagger/index.html`) and do the following:
- Go to the **RoleAssignment API**. - Perform a POST to create a **RoleAssignment** object for the **Object ID** you just created.
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-organizations.md
When you start adding organizations, all existing devices, users, and experience
- You can assign users to a new organization and unassign them from the root. - You can recreate experience such as dashboards, device groups, and jobs and associate them with organizations in the hierarchy.
+## Limits
+
+The following limits apply to organizations:
+
+- The hierarchy can be no more than five levels deep.
+- The total number of organizations can't be more than 200. Each node in the hierarchy counts as an organization.
+ ## Next steps Now that you've learned how to manage Azure IoT Central organizations, the suggested next step is to learn how to [Export IoT data to cloud destinations using data export](howto-export-data.md).
iot-hub-device-update Device Update Simulator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-simulator.md
Start Device Update Agent on your new Software Devices.
Replace `<device connection string>` with your connection string ```shell
-./AducIotAgentSim-microsoft-swupdate -c '<device connection string>'
+sudo ./AducIotAgentSim-microsoft-swupdate "<device connection string>"
``` or
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Title: Control access to IoT Hub using Azure Active Directory | Microsoft Docs
-description: Developer guide - how to control access to IoT Hub for back-end apps using Azure AD and Azure RBAC.
+ Title: Control access to IoT Hub by using Azure Active Directory
+description: Developer guide. How to control access to IoT Hub for back-end apps by using Azure AD and Azure RBAC.
Last updated 08/24/2021
-# Control access to IoT Hub using Azure Active Directory
+# Control access to IoT Hub by using Azure Active Directory
-Azure IoT Hub supports using Azure Active Directory (AAD) to authenticate requests to its service APIs like create device identity or invoke direct method. Also, IoT Hub supports authorization of the same service APIs with Azure role-based access control (Azure RBAC). Together, you can grant permissions to access IoT Hub's service APIs to an AAD security principal, which could be a user, group, or application service principal.
+You can use Azure Active Directory (Azure AD) to authenticate requests to Azure IoT Hub service APIs, like create device identity and invoke direct method. You can also use Azure role-based access control (Azure RBAC) to authorize those same service APIs. By using these technologies together, you can grant permissions to access IoT Hub service APIs to an Azure AD security principal. This security principal could be a user, group, or application service principal.
-Authenticating access with Azure AD and controlling permissions with Azure RBAC provides superior security and ease of use over [security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, Microsoft recommends [using Azure AD with your IoT hub whenever possible](#azure-ad-access-and-shared-access-policies).
+Authenticating access by using Azure AD and controlling permissions by using Azure RBAC provides improved security and ease of use over [security tokens](iot-hub-dev-guide-sas.md). To minimize potential security issues inherent in security tokens, we recommend that you [use Azure AD with your IoT hub whenever possible](#azure-ad-access-and-shared-access-policies).
> [!NOTE]
-> Authenticating with Azure AD isn't supported for IoT Hub's *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](iot-hub-dev-guide-sas.md#use-a-symmetric-key-in-the-identity-registry) or [X.509](iot-hub-x509ca-overview.md) to authenticate devices to IoT hub.
+> Authentication with Azure AD isn't supported for the IoT Hub *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](iot-hub-dev-guide-sas.md#use-a-symmetric-key-in-the-identity-registry) or [X.509](iot-hub-x509ca-overview.md) to authenticate devices to IoT Hub.
## Authentication and authorization
-When an Azure AD security principal requests to access an IoT Hub service API, the principal's identity is first *authenticated*. This step require the request to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs inside an Azure resource like Azure VM, Azure Function app, or an App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+When an Azure AD security principal requests access to an IoT Hub service API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
-Once Azure AD principal has been authenticated, the second step is *authorization*. In this step, IoT Hub checks with Azure AD's role assignment service to see what permissions the principal has. If the principal's permissions match the requested resource or API, IoT Hub authorizes the request. So, this step requires one or more Azure roles to be assigned to the security principal. IoT Hub provides some built-in roles that have common groups of permissions.
+After the Azure AD principal is authenticated, the next step is *authorization*. In this step, IoT Hub uses the Azure AD role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, IoT Hub authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. IoT Hub provides some built-in roles that have common groups of permissions.
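For example, here's a minimal sketch of this flow in Python. It uses the `azure-identity` library to get a token for the `https://iothubs.azure.net` resource and then calls a service API to read a device identity. The hub name, device ID, and `api-version` value are placeholders, and the sketch assumes the signed-in principal already holds a role (like IoT Hub Data Reader) on the hub.

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an OAuth 2.0 access token for the IoT Hub service API resource.
credential = DefaultAzureCredential()
token = credential.get_token("https://iothubs.azure.net/.default")

# Placeholders: your IoT hub host name, a device ID, and an API version your hub supports.
hub_hostname = "<your-iot-hub-name>.azure-devices.net"
device_id = "<device-id>"
api_version = "2021-04-12"

# Read a device identity; this requires the Microsoft.Devices/IotHubs/devices/read permission.
response = requests.get(
    f"https://{hub_hostname}/devices/{device_id}",
    params={"api-version": api_version},
    headers={"Authorization": f"Bearer {token.token}"},
)
response.raise_for_status()
print(response.json())
```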
-## Manage access to IoT Hub using Azure RBAC role assignment
+## Manage access to IoT Hub by using Azure RBAC role assignment
-With Azure AD and RBAC, IoT Hub requires the principal requesting the API to have the appropriate level of permission for authorization. To give the principal the permission, give that principal a role assignment.
+With Azure AD and RBAC, IoT Hub requires the principal requesting the API to have the appropriate level of permission for authorization. To give the principal the permission, give it a role assignment.
-- If the principal is a user, group, or application service principal, follow [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).-- If the principal is a managed identity, follow [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
+- If the principal is a user, group, or application service principal, follow the guidance in [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+- If the principal is a managed identity, follow the guidance in [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
-To ensure least privilege, always assign appropriate role at the lowest possible [resource scope](#resource-scope), which is likely the IoT Hub scope.
+To ensure least privilege, always assign the appropriate role at the lowest possible [resource scope](#resource-scope), which is probably the IoT Hub scope.
-IoT Hub provides the following Azure built-in roles for authorizing access to IoT Hub service API using Azure AD and RBAC:
+IoT Hub provides the following Azure built-in roles for authorizing access to IoT Hub service APIs by using Azure AD and RBAC:
| Role | Description | | - | -- |
-| [IoT Hub Data Contributor](../role-based-access-control/built-in-roles.md#iot-hub-data-contributor) | Allows for full access to IoT Hub data plane operations. |
-| [IoT Hub Data Reader](../role-based-access-control/built-in-roles.md#iot-hub-data-reader) | Allows for full read access to IoT Hub data plane properties. |
-| [IoT Hub Registry Contributor](../role-based-access-control/built-in-roles.md#iot-hub-registry-contributor) | Allows for full access to IoT Hub device registry. |
-| [IoT Hub Twin Contributor](../role-based-access-control/built-in-roles.md#iot-hub-twin-contributor) | Allows for read and write access to all IoT Hub device and module twins. |
+| [IoT Hub Data Contributor](../role-based-access-control/built-in-roles.md#iot-hub-data-contributor) | Allows full access to IoT Hub data plane operations. |
+| [IoT Hub Data Reader](../role-based-access-control/built-in-roles.md#iot-hub-data-reader) | Allows full read access to IoT Hub data plane properties. |
+| [IoT Hub Registry Contributor](../role-based-access-control/built-in-roles.md#iot-hub-registry-contributor) | Allows full access to the IoT Hub device registry. |
+| [IoT Hub Twin Contributor](../role-based-access-control/built-in-roles.md#iot-hub-twin-contributor) | Allows read and write access to all IoT Hub device and module twins. |
-You can also define custom roles for use with IoT Hub by combining [permissions](#permissions-for-iot-hub-service-apis) that you need. For more information, see [Create custom roles for Azure Role-Based Access Control](../role-based-access-control/custom-roles.md).
+You can also define custom roles to use with IoT Hub by combining the [permissions](#permissions-for-iot-hub-service-apis) that you need. For more information, see [Create custom roles for Azure role-based access control](../role-based-access-control/custom-roles.md).
### Resource scope
-Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security principal should have. Best practices dictate that it's always best to grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
+Before you assign an Azure RBAC role to a security principal, determine the scope of access that the security principal should have. It's always best to grant only the narrowest possible scope. Azure RBAC roles defined at a broader scope are inherited by the resources beneath them.
-The following list describes the levels at which you can scope access to IoT Hub, starting with the narrowest scope:
+This list describes the levels at which you can scope access to IoT Hub, starting with the narrowest scope:
-- **The IoT hub.** At this scope, a role assignment applies to the IoT Hub. There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes such as individual device identity or twin section isn't supported.-- **The resource group.** At this scope, a role assignment applies to all of the IoT hubs in the resource group.-- **The subscription.** At this scope, a role assignment applies to all of the IoT hubs in all of the resource groups in the subscription.-- **A management group.** At this scope, a role assignment applies to all of the IoT hubs in all of the resource groups in all of the subscriptions in the management group.
+- **The IoT hub.** At this scope, a role assignment applies to the IoT hub. There's no scope smaller than an individual IoT hub. Role assignment at smaller scopes, like individual device identity or twin section, isn't supported.
+- **The resource group.** At this scope, a role assignment applies to all IoT hubs in the resource group.
+- **The subscription.** At this scope, a role assignment applies to all IoT hubs in all resource groups in the subscription.
+- **A management group.** At this scope, a role assignment applies to all IoT hubs in all resource groups in all subscriptions in the management group.
-## Permissions for IoT hub service APIs
+## Permissions for IoT Hub service APIs
-The following tables describe the permissions available for IoT Hub service API operations. To enable a client to call a particular operation, ensure that the client's assigned RBAC role offers sufficient permissions for that operation.
+The following table describes the permissions available for IoT Hub service API operations. To enable a client to call a particular operation, ensure that the client's assigned RBAC role offers sufficient permissions for the operation.
| RBAC action | Description | |-|-|
-| Microsoft.Devices/IotHubs/devices/read | Read any device or module identity |
-| Microsoft.Devices/IotHubs/devices/write | Create or update any device or module identity |
-| Microsoft.Devices/IotHubs/devices/delete | Delete any device or module identity |
-| Microsoft.Devices/IotHubs/twins/read | Read any device or module twin |
-| Microsoft.Devices/IotHubs/twins/write | Write any device or module twin |
-| Microsoft.Devices/IotHubs/jobs/read | Return a list of jobs |
-| Microsoft.Devices/IotHubs/jobs/write | Create or update any job |
-| Microsoft.Devices/IotHubs/jobs/delete | Delete any job |
-| Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action | Send cloud-to-device message to any device |
-| Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action | Receive, complete, or abandon cloud-to-device message feedback notification |
-| Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action | Deletes all the pending commands for a device |
-| Microsoft.Devices/IotHubs/directMethods/invoke/action | Invokes a direct method on any device or module |
-| Microsoft.Devices/IotHubs/fileUpload/notifications/action | Receive, complete, or abandon file upload notifications |
-| Microsoft.Devices/IotHubs/statistics/read | Read device and service statistics |
-| Microsoft.Devices/IotHubs/configurations/read | Read device management configurations |
-| Microsoft.Devices/IotHubs/configurations/write | Create or update device management configurations |
-| Microsoft.Devices/IotHubs/configurations/delete | Delete any device management configuration |
-| Microsoft.Devices/IotHubs/configurations/applyToEdgeDevice/action | Applies the configuration content to an edge device |
-| Microsoft.Devices/IotHubs/configurations/testQueries/action | Validates target condition and custom metric queries for a configuration |
+| `Microsoft.Devices/IotHubs/devices/read` | Read any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/write` | Create or update any device or module identity. |
+| `Microsoft.Devices/IotHubs/devices/delete` | Delete any device or module identity. |
+| `Microsoft.Devices/IotHubs/twins/read` | Read any device or module twin. |
+| `Microsoft.Devices/IotHubs/twins/write` | Write any device or module twin. |
+| `Microsoft.Devices/IotHubs/jobs/read` | Return a list of jobs. |
+| `Microsoft.Devices/IotHubs/jobs/write` | Create or update any job. |
+| `Microsoft.Devices/IotHubs/jobs/delete` | Delete any job. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/send/action` | Send a cloud-to-device message to any device. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/feedback/action` | Receive, complete, or abandon a cloud-to-device message feedback notification. |
+| `Microsoft.Devices/IotHubs/cloudToDeviceMessages/queue/purge/action` | Delete all the pending commands for a device. |
+| `Microsoft.Devices/IotHubs/directMethods/invoke/action` | Invoke a direct method on any device or module. |
+| `Microsoft.Devices/IotHubs/fileUpload/notifications/action` | Receive, complete, or abandon file upload notifications. |
+| `Microsoft.Devices/IotHubs/statistics/read` | Read device and service statistics. |
+| `Microsoft.Devices/IotHubs/configurations/read` | Read device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/write` | Create or update device management configurations. |
+| `Microsoft.Devices/IotHubs/configurations/delete` | Delete any device management configuration. |
+| `Microsoft.Devices/IotHubs/configurations/applyToEdgeDevice/action` | Apply the configuration content to an edge device. |
+| `Microsoft.Devices/IotHubs/configurations/testQueries/action` | Validate the target condition and custom metric queries for a configuration. |
> [!TIP]
-> - The [Bulk Registry Update](/rest/api/iothub/service/bulkregistry/updateregistry) operation requires *both* `Microsoft.Devices/IotHubs/devices/write` *and* the `Microsoft.Devices/IotHubs/devices/delete`.
+> - The [Bulk Registry Update](/rest/api/iothub/service/bulkregistry/updateregistry) operation requires both `Microsoft.Devices/IotHubs/devices/write` and `Microsoft.Devices/IotHubs/devices/delete`.
> - The [Twin Query](/rest/api/iothub/service/query/gettwins) operation requires `Microsoft.Devices/IotHubs/twins/read`.
-> - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read` while [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write`
+> - [Get Digital Twin](/rest/api/iothub/service/digitaltwin/getdigitaltwin) requires `Microsoft.Devices/IotHubs/twins/read`. [Update Digital Twin](/rest/api/iothub/service/digitaltwin/updatedigitaltwin) requires `Microsoft.Devices/IotHubs/twins/write`.
> - Both [Invoke Component Command](/rest/api/iothub/service/digitaltwin/invokecomponentcommand) and [Invoke Root Level Command](/rest/api/iothub/service/digitaltwin/invokerootlevelcommand) require `Microsoft.Devices/IotHubs/directMethods/invoke/action`. > [!NOTE]
-> To get data from IoT Hub using Azure AD, [set up routing to a separate Event Hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access the [the built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
+> To get data from IoT Hub by using Azure AD, [set up routing to a separate event hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access the [built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
## Azure AD access and shared access policies
-By default, IoT Hub supports service API access through both Azure AD as well as [shared access policies and security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, disable access with shared access policies:
+By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, disable access with shared access policies:
-1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-using-azure-rbac-role-assignment) to your IoT hub following [principle of least privilege](../security/fundamentals/identity-management-best-practices.md).
+1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-by-using-azure-rbac-role-assignment) to your IoT hub. Follow the [principle of least privilege](../security/fundamentals/identity-management-best-practices.md).
1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
-1. On the left, select **Shared access policies**.
+1. On the left pane, select **Shared access policies**.
1. Under **Connect using shared access policies**, select **Deny**.
- :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot of Azure portal showing how to turn off IoT Hub shared access policies":::
-1. Review the warning, then select **Save**.
+ :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot that shows how to turn off IoT Hub shared access policies.":::
+1. Review the warning, and then select **Save**.
-Your IoT hub service APIs can now only be access using Azure AD and RBAC.
+Your IoT Hub service APIs can now be accessed only through Azure AD and RBAC.
## Azure AD access from the Azure portal
-When you try to access IoT Hub, the Azure portal first checks whether you've been assigned an Azure role with **Microsoft.Devices/iotHubs/listkeys/action**. If so, then the Azure portal uses the keys from shared access policies for accessing IoT Hub. If not, the Azure portal tries to access data using your Azure AD account.
+When you try to access IoT Hub, the Azure portal first checks whether you've been assigned an Azure role with `Microsoft.Devices/iotHubs/listkeys/action`. If you have, the Azure portal uses the keys from shared access policies to access IoT Hub. If not, the Azure portal tries to access data by using your Azure AD account.
-To access IoT Hub from the Azure portal using your Azure AD account, you need permissions to access the IoT hub data resources (like devices and twins), and you also need permissions to navigate to the IoT hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twin, but they don't grant access to the IoT Hub resource. So, access to the portal also requires assignment of an Azure Resource Manager (ARM) role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal, and it doesn't include the **Microsoft.Devices/iotHubs/listkeys/action** permission (which gives access to all IoT Hub data resources via shared access policies).
+To access IoT Hub from the Azure portal by using your Azure AD account, you need permissions to access IoT Hub data resources (like devices and twins). You also need permissions to go to the IoT Hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twins. But they don't grant access to the IoT Hub resource. So access to the portal also requires the assignment of an Azure Resource Manager role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal. It doesn't include the `Microsoft.Devices/iotHubs/listkeys/action` permission (which provides access to all IoT Hub data resources via shared access policies).
-To ensure an account doesn't have access outside of assigned permissions, *don't* include the **Microsoft.Devices/iotHubs/listkeys/action** permission when creating a custom role. For example, to create a custom role that could read device identities, but cannot create or delete devices, create a custom role that:
-- Has the **Microsoft.Devices/IotHubs/devices/read** data action-- Doesn't have the **Microsoft.Devices/IotHubs/devices/write** data action-- Doesn't have the **Microsoft.Devices/IotHubs/devices/delete** data action-- Doesn't have the **Microsoft.Devices/iotHubs/listkeys/action** action
+To ensure an account doesn't have access outside of the assigned permissions, don't include the `Microsoft.Devices/iotHubs/listkeys/action` permission when you create a custom role. For example, to create a custom role that can read device identities but can't create or delete devices, create a custom role that:
+- Has the `Microsoft.Devices/IotHubs/devices/read` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/write` data action.
+- Doesn't have the `Microsoft.Devices/IotHubs/devices/delete` data action.
+- Doesn't have the `Microsoft.Devices/iotHubs/listkeys/action` action.
-Then, make sure the account doesn't have any other roles that have the **Microsoft.Devices/iotHubs/listkeys/action** permission - such as [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). To let the account have resource access and can navigate the portal, assign [Reader](../role-based-access-control/built-in-roles.md#reader).
+Then, make sure the account doesn't have any other roles that have the `Microsoft.Devices/iotHubs/listkeys/action` permission, like [Owner](../role-based-access-control/built-in-roles.md#owner) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). To allow the account to have resource access and navigate the portal, assign [Reader](../role-based-access-control/built-in-roles.md#reader).
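As a minimal sketch of the custom role described above, the following Python snippet writes a role definition file in the JSON shape commonly used with `az role definition create`. The role name and assignable scope are placeholders; the single data action comes from the list above, and the control-plane `Microsoft.Devices/iotHubs/listkeys/action` action is deliberately omitted.

```python
import json

# Hypothetical custom role: can read device identities, but can't create or delete them,
# and has no listkeys/action permission.
custom_role = {
    "Name": "IoT Hub Device Identity Reader (custom)",   # placeholder name
    "IsCustom": True,
    "Description": "Read IoT Hub device identities only.",
    "Actions": [],                                        # no control-plane actions, including listkeys
    "NotActions": [],
    "DataActions": ["Microsoft.Devices/IotHubs/devices/read"],
    "NotDataActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"]  # placeholder scope
}

with open("custom-role.json", "w") as role_file:
    json.dump(custom_role, role_file, indent=2)
```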
## Azure IoT extension for Azure CLI
-Most commands against IoT Hub support Azure AD authentication. The type of auth used to execute commands can be controlled with the `--auth-type` parameter which accepts the values key or login. The value of `key` is set by default.
+Most commands against IoT Hub support Azure AD authentication. You can control the type of authentication used to run commands by using the `--auth-type` parameter, which accepts `key` or `login` values. The `key` value is the default.
-- When `--auth-type` has the value of `key`, like before the CLI automatically discovers a suitable policy when interacting with IoT Hub.
+- When `--auth-type` has the `key` value, as before, the CLI automatically discovers a suitable policy when it interacts with IoT Hub.
-- When `--auth-type` has the value `login`, an access token from the Azure CLI logged in the principal is used for the operation.
+- When `--auth-type` has the `login` value, an access token from the principal that's signed in to the Azure CLI is used for the operation.
-To learn more, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12)
+For more information, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12).
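A short sketch of the parameter in use; the hub name is a placeholder and the Azure IoT extension is assumed to be installed.

```azurecli
# Key-based auth (default): the CLI discovers a suitable shared access policy.
az iot hub device-identity list --hub-name <iot-hub-name> --auth-type key

# Azure AD auth: the operation uses the access token of the signed-in principal.
az iot hub device-identity list --hub-name <iot-hub-name> --auth-type login
```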
## SDK samples
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
If the device connection flickers, meaning if device connects and disconnects fr
When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or all routes at once; no messages are routed to the endpoints during the test. You can use the Azure portal, Azure Resource Manager, Azure PowerShell, or the Azure CLI for testing. The outcome identifies whether the sample message matched the query, didn't match the query, or the test couldn't run because the sample message or query syntax is incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test all routes](/rest/api/iothub/iothubresource/testallroutes).
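For example, here's a hedged Azure CLI sketch of route testing; the hub name, route name, and message body are placeholders.

```azurecli
# Test one route against a sample message body and application properties.
az iot hub route test --hub-name <iot-hub-name> --route-name <route-name> \
  --body '{"temperature": 50}' --app-properties '{"severity": "critical"}'

# Test all routes for device messages at once.
az iot hub route test --hub-name <iot-hub-name> --source-type DeviceMessages
```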
-## Ordering guarantees with at least once delivery
-
-IoT Hub message routing guarantees ordered and at least once delivery of messages to the endpoints. This means that there can be duplicate messages and a series of messages can be retransmitted honoring the original message ordering. For example, if the original message order is [1,2,3,4], you could receive a message sequence like [1,2,1,2,3,1,2,3,4]. The ordering guarantee is that if you ever receive message [1], it would always be followed by [2,3,4].
-
-For handling message duplicates, we recommend stamping a unique identifier in the application properties of the message at the point of origin, which is usually a device or a module. The service consuming the messages can handle duplicate messages using this identifier.
- ## Latency When you route device-to-cloud telemetry messages using built-in endpoints, there is a slight increase in the end-to-end latency after the creation of the first route.
iot-hub Iot Hub Raspberry Pi Web Simulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started.md
In this tutorial, you begin by learning the basics of working with Raspberry Pi online simulator. You then learn how to seamlessly connect the Pi simulator to the cloud by using [Azure IoT Hub](about-iot-hub.md).
-<p>
-<div id="diag">
-<img src="media/iot-hub-raspberry-pi-web-simulator/3-banner.png" alt="Connect Raspberry Pi web simulator to Azure IoT Hub" width="400">
-</div>
-</p>
-<p>
-<div id="button">
-<a href="https://azure-samples.github.io/raspberry-pi-web-simulator/#getstarted" target="_blank">
-<img src="media/iot-hub-raspberry-pi-web-simulator/6-button-default.png" alt="Start Raspberry Pi simulator" width="400" onmouseover="this.src='media/iot-hub-raspberry-pi-web-simulator/5-button-click.png';" onmouseout="this.src='media/iot-hub-raspberry-pi-web-simulator/6-button-default.png';">
-</a>
-</div>
-</p>
+
+[:::image type="content" source="media/iot-hub-raspberry-pi-web-simulator/6-button-default.png" alt-text="Start Raspberry Pi simulator":::](https://azure-samples.github.io/raspberry-pi-web-simulator/#getstarted)
+ If you have physical devices, visit [Connect Raspberry Pi to Azure IoT Hub](iot-hub-raspberry-pi-kit-node-get-started.md) to get started.
load-balancer Load Balancer Ha Ports Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-ha-ports-overview.md
For NVA HA scenarios, HA ports offer the following advantages:
The following diagram presents a hub-and-spoke virtual network deployment. The spokes force-tunnel their traffic to the hub virtual network and through the NVA, before leaving the trusted space. The NVAs are behind an internal Standard Load Balancer with an HA ports configuration. All traffic can be processed and forwarded accordingly. When configured as shown in the following diagram, an HA Ports load-balancing rule additionally provides flow symmetry for ingress and egress traffic.
-<a node="diagram"></a>
+<a name="diagram"></a>
![Diagram of hub-and-spoke virtual network, with NVAs deployed in HA mode](./media/load-balancer-ha-ports-overview/nvaha.png) >[!NOTE]
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
ms.suite: integration Previously updated : 07/13/2021 Last updated : 09/13/2021 # Create an integration workflow with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
Before you can create your logic app, create a local project so that you can man
![Screenshot that shows the Explorer pane with project folder, workflow folder, and "workflow.json" file.](./media/create-single-tenant-workflows-visual-studio-code/local-project-created.png)
+ [!INCLUDE [Visual Studio Code - logic app project structure](../../includes/logic-apps-single-tenant-project-structure-visual-studio-code.md)]
+ <a name="enable-built-in-connector-authoring"></a> ## Enable built-in connector authoring
logic-apps Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/devops-deployment-single-tenant-azure-logic-apps.md
ms.suite: integration Previously updated : 05/25/2021 Last updated : 09/13/2021 # As a developer, I want to learn about DevOps deployment support for single-tenant Azure Logic Apps.
When you create logic apps using the **Logic App (Standard)** resource type, you
For example, you can package the redesigned containerized runtime and workflows together as part of your logic app. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your apps, copy the artifacts to the host environment and then start your apps to run your workflows. Or, integrate your artifacts into deployment pipelines using the tools and processes that you already know and use. For example, if your scenario requires containers, you can containerize your logic apps and integrate them into your existing pipelines.
-To set up and deploy your infrastructure resources, such as virtual networks and connectivity, you can continue using ARM templates and separately provision those resources along with other processes and pipelines that you use for those purposes.
+To set up and deploy your infrastructure resources, such as virtual networks and connectivity, you can continue using ARM templates and separately provision those resources along with other processes and pipelines that you use for those purposes.
By using standard build and deploy options, you can focus on app development separately from infrastructure deployment. As a result, you get a more generic project model where you can apply many similar or the same deployment options that you use for a generic app. You also benefit from a more consistent experience for building deployment pipelines around your app projects and for running the required tests and validations before publishing to production. No matter which technology stack you use, you can deploy logic apps using your own chosen tools.
Single-tenant Azure Logic Apps inherits many capabilities and benefits from the
### Local development and testing
-When you use Visual Studio Code with the Azure Logic Apps (Standard) extension, you can locally develop, build, and run single-tenant based logic app workflows in your development environment without having to deploy to Azure. You can also run your workflows anywhere that Azure Functions can run. For example, if your scenario requires containers, you can containerize your logic apps and deploy as containers.
+When you use Visual Studio Code with the Azure Logic Apps (Standard) extension, you can locally develop, build, and run single-tenant based logic app workflows in your development environment without having to deploy to Azure. You can also run your workflows where Azure Functions can run. For example, if your scenario requires containers, you can containerize your logic apps and deploy as containers.
This capability is a major improvement and provides a substantial benefit compared to the multi-tenant model, which requires you to develop against an existing and running resource in Azure.
The single-tenant model gives you the capability to separate the concerns betwee
### Container deployment
-Single-tenant Azure Logic Apps supports deployment to containers, which means that you can containerize your logic app workflows and run them anywhere that containers can run. After you containerize your app, deployment works mostly the same as any other container you deploy and manage.
+Single-tenant Azure Logic Apps supports deployment to containers, which means that you can containerize your logic app workflows and run them where containers can run. After you containerize your app, deployment works mostly the same as any other container you deploy and manage.
For examples that include Azure DevOps, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).
In Visual Studio Code, when you use the designer to develop or make changes to y
### Service provider connections
-When you use a built-in operation for a service such as Azure Service Bus or Azure Event Hubs in single-tenant Azure Logic Apps, you create a service provider connection that runs in the same process as your workflow. This connection infrastructure is hosted and managed as part of your logic app, and your app settings store the connection strings for any service provider-based built-in operation that your workflows use.
+When you use a built-in operation for a service such as Azure Service Bus or Azure Event Hubs in single-tenant Azure Logic Apps, you create a service provider connection that runs in the same process as your workflow. This connection infrastructure is hosted and managed as part of your logic app resource, and your app settings store the connection strings for any service provider-based built-in operation that your workflows use.
In your logic app project, each workflow has a workflow.json file that contains the workflow's underlying JSON definition. This workflow definition then references the necessary connection strings in your project's connections.json file.
The following example shows how the service provider connection for a built-in S
}, "displayName": "{service-bus-connection-name}" },
- ...
+ <...>
} ```
To call functions created and hosted in Azure Functions, you use the built-in Az
## Authentication
-In single-tenant Azure Logic Apps, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant model. Plus, the single-tenant Azure Logic Apps runtime is portable, which means you can run your workflows anywhere that Azure Functions can run. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
+In single-tenant Azure Logic Apps, the hosting model for logic app workflows is a single tenant where your workloads benefit from more isolation than in the multi-tenant model. Plus, the single-tenant Azure Logic Apps runtime is portable, which means you can run your workflows where Azure Functions can run. Still, this design requires a way for logic apps to authenticate their identity so they can access the managed connector ecosystem in Azure. Your apps also need the correct permissions to run operations when using managed connections.
By default, each single-tenant based logic app has an automatically enabled system-assigned managed identity. This identity differs from the authentication credentials or connection string used for creating a connection. At runtime, your logic app uses this identity to authenticate its connections through Azure access policies. If you disable this identity, connections won't work at runtime.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 07/29/2021 Last updated : 09/13/2021 # Secure access and data in Azure Logic Apps
This example shows a resource definition for a nested logic app that permits inb
## Access to logic app operations
-You can permit only specific users or groups to run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) so that you can assign customized or built-in roles to the members in your Azure subscription:
+You can permit only specific users or groups to run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, use [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has these specific roles:
* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
-To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources.
+* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+
+ For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure account requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource yourself, you automatically have Contributor access.
+
+To prevent others from changing or deleting your logic app, you can use an [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md), which blocks changes to and deletion of production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/apis-list.md#connection-configuration) and [Connection security and encryption](../connectors/apis-list.md#connection-security-encyrption).
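A hedged Azure CLI sketch of both controls; the principal object ID, resource group, and logic app name are placeholders, and the scope assumes a Consumption logic app (`Microsoft.Logic/workflows`).

```azurecli
# Let a user view, enable, and disable a logic app without editing it.
az role assignment create --assignee "<user-or-group-object-id>" --role "Logic App Operator" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>"

# Lock the production logic app so it can't be deleted accidentally.
az lock create --name "do-not-delete" --lock-type CanNotDelete \
  --resource-group "<resource-group>" --resource "<logic-app-name>" --resource-type "Microsoft.Logic/workflows"
```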
<a name="secure-run-history"></a>
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
Previously updated : 09/09/2021 Last updated : 09/13/2021 tags: connectors
Here are the currently known issues and limitations for the managed (non-ISE) SA
* The SAP connector currently doesn't support SAP router strings. The on-premises data gateway must exist on the same LAN as the SAP system you want to connect.
+* In the **\[BAPI] Call method in SAP** action, the auto-commit feature won't commit the BAPI changes if at least one warning exists in the **CallBapiResponse** object returned by the action. To commit BAPI changes despite any warnings, create a session explicitly with the **\[BAPI - RFC] Create stateful session** action, disable the auto-commit feature in the **\[BAPI] Call method in SAP** action, and call the **\[BAPI] Commit transaction** action instead.
+ * For [logic apps in an ISE](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead. ## Connector reference
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 09/10/2021 Last updated : 09/13/2021 # Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps
With the **Logic App (Standard)** resource type, you can create these workflow t
Create a stateful workflow when you need to keep, review, or reference data from previous events. These workflows save and transfer all the inputs and outputs for each action and their states to external storage, which makes reviewing the run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
+ By default, stateful workflows in both multi-tenant and single-tenant Azure Logic Apps run asynchronously. All HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). This pattern specifies that after an HTTP action calls or sends a request to an endpoint, service, system, or API, the receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
+ * *Stateless*
- Create a stateless workflow when you don't need to keep, review, or reference data from previous events in external storage after each run finishes for later review. These workflows save all the inputs and outputs for each action and their states *in memory only*, not in external storage. As a result, stateless workflows have shorter runs that are typically less than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't saved in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs. These workflows can only run synchronously.
+ Create a stateless workflow when you don't need to keep, review, or reference data from previous events in external storage after each run finishes for later review. These workflows save all the inputs and outputs for each action and their states *in memory only*, not in external storage. As a result, stateless workflows have shorter runs that are typically less than 5 minutes, faster performance with quicker response times, higher throughput, and reduced running costs because the run details and history aren't saved in external storage. However, if outages happen, interrupted runs aren't automatically restored, so the caller needs to manually resubmit interrupted runs.
> [!IMPORTANT] > A stateless workflow provides the best performance when handling data or content, such as a file, that doesn't exceed 64 KB in *total* size. > Larger content sizes, such as multiple large attachments, might significantly slow your workflow's performance or even cause your workflow to > crash due to out-of-memory exceptions. If your workflow might have to handle larger content sizes, use a stateful workflow instead.
+ Stateless workflows only run synchronously, so they don't use the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply) used by stateful workflows. Instead, all HTTP-based actions that return a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response proceed to the next step in the workflow execution. If the response includes a `location` header, a stateless workflow won't poll the specified URI to check the status. To follow the standard asynchronous operation pattern, use a stateful workflow instead.
+ For easier debugging, you can enable run history for a stateless workflow, which has some impact on performance, and then disable the run history when you're done. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless) or [Create single-tenant based workflows in the Azure portal](create-single-tenant-workflows-visual-studio-code.md#enable-run-history-stateless). > [!NOTE]
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
In addition to the framework-based samples, you can get a set of comprehensive w
- [A how-to guide to build an end-to-end solution to detect products within images](https://github.com/Azure/cortana-intelligence-product-detection-from-images): Image detection is a technique that can locate and classify objects within images. This technology has the potential to bring huge rewards in many real-life business domains. For example, retailers can use this technique to determine which product a customer has picked up from the shelf. This information in turn helps stores manage product inventory. -- [Deep learning for audio](/archive/blogs/machinelearning/hearing-ai-getting-started-with-deep-learning-for-audio-on-azure): This tutorial shows how to train a deep-learning model for audio event detection on the [urban sounds dataset](https://serv.cusp.nyu.edu/projects/urbansounddataset/urbansound8k.html). It also provides an overview of how to work with audio data.
+- [Deep learning for audio](/archive/blogs/machinelearning/hearing-ai-getting-started-with-deep-learning-for-audio-on-azure): This tutorial shows how to train a deep-learning model for audio event detection on the [urban sounds dataset](https://urbansounddataset.weebly.com/). It also provides an overview of how to work with audio data.
- [Classification of text documents](https://github.com/anargyri/lstm_han): This walkthrough demonstrates how to build and train two neural network architectures: Hierarchical Attention Network and Long Short Term Memory (LSTM) network. These neural networks use the Keras API for deep learning to classify text documents.
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Previously updated : 10/21/2020 Last updated : 09/14/2021
Use these settings:
| Setting |Applies to| Value | |-||| | Cluster Name |always| yourclustername |
-| Databricks Runtime Version |always| Runtime 7.1 (scala 2.21, spark 3.0.0) - Not ML|
+| Databricks Runtime Version |always| Runtime 7.3 LTS or lower - Not ML|
| Python version |always| 3 | | Worker Type <br>(determines max # of concurrent iterations) |Automated ML<br>only| Memory optimized VM preferred | | Workers |always| 2 or higher |
To use automated ML, skip to [Add the Azure ML SDK with AutoML](#add-the-azure-m
![Azure Machine Learning SDK for Databricks](./media/how-to-configure-environment/amlsdk-withoutautoml.jpg) ## Add the Azure ML SDK with AutoML to Databricks
-If the cluster was created with Databricks Runtime 7.1 or above (*not* ML), run the following command in the first cell of your notebook to install the AML SDK.
+If the cluster was created with Databricks Runtime 7.1 - 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AML SDK.
``` %pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt
Try it out:
## Next steps - [Train a model](tutorial-train-models-with-aml.md) on Azure Machine Learning with the MNIST dataset.-- See the [Azure Machine Learning SDK for Python reference](/python/api/overview/azure/ml/intro).
+- See the [Azure Machine Learning SDK for Python reference](/python/api/overview/azure/ml/intro).
marketplace Dynamics 365 Operations Validation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-operations-validation.md
There are two options for functional validation:
The Microsoft certification team reviews the video and files, then either approves the solution or emails you about next steps.
+> [!NOTE]
+> If the solution/offer you are creating is a connector only, demo the connector working during the call or use one of the video upload options listed below.
+ ### Option 1: 30-minute conference call To schedule a final review call, contact [appsourceCRM@microsoft.com](mailto:appsourceCRM@microsoft.com) with the name of your offer and some potential time slots between 8 AM and 5 PM Pacific Time.
To schedule a final review call, contact [appsourceCRM@microsoft.com](mailto:app
> [!NOTE] > It is acceptable to use an existing marketing video if it meets the guidelines.
-2. Take the following screenshots of the [LCS](https://lcs.dynamics.com/) environment that match the offer or solution you want to publish. They must be clear enough for the certification team to read the text. Save the screenshots as JPG files. You may provide [appSourceCRM@microsoft.com](mailto:appSourceCRM@microsoft.com) permission to your LCS environment so we can verify the setup in lieu of providing screenshots.
+2. Take the following screenshots of the [LCS](https://lcs.dynamics.com/) environment that match the offer or solution you want to publish. They must be clear enough for the certification team to read the text. Save the screenshots as JPG files.
1. Go to **LCS** > **Business Process Modeler** > **Project library**. Take screenshots of all the Process steps. Include the **Diagrams** and **Reviewed** columns, as shown here:
media-services Azure Media Player Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/azure-media-player/azure-media-player-overview.md
Azure Media Player is a web video player that plays media content from [Microsof
Microsoft Azure Media Services allows for content to be served up with DASH, Smooth Streaming and HLS streaming formats to play back content. Azure Media Player takes into account these various formats and automatically plays the best link based on the platform/browser capabilities. Microsoft Azure Media Services also provides dynamic encryption of assets with common encryption (PlayReady or Widevine) or AES-128 bit envelope encryption. Azure Media Player allows for decryption of PlayReady and AES-128 bit encrypted content when appropriately configured. To understand how to configure the player, see the [Protected Content](azure-media-player-protected-content.md) section.
-To request new features, provide ideas or feedback, submit them to [UserVoice for Azure Media Player](https://aka.ms/ampuservoice). If you have and specific issues, questions or find any bugs, drop us a line at ampinfo@microsoft.com.
+If you have any specific issues or questions, or find any bugs, please [file a support ticket](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) using the Client Playback category.
> [!NOTE] > Please note that Azure Media Player only supports media streams from Azure Media Services.
media-services Compliance Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/compliance-concept.md
Title: Media Services regulatory compliance
-description: Azure Media Services complies with Azure Government.
+description: Azure Media Services helps Azure Government customers meet their compliance obligations.
documentationcenter: ''
editor: ''
Previously updated : 08/31/2020 Last updated : 09/11/2021 - # Media Services regulatory compliance [!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Media Services meets the demanding requirements of the US Federal Risk & Authorization Management Program (FedRAMP) and of the US Department of Defense, from information impact levels 2 through 5. By deploying protected services including Azure Government, Office 365 U.S. Government, and Dynamics 365 Government, federal, and defense agencies can use a rich array of compliant services.
+Media Services meets the demanding requirements of the US Federal Risk & Authorization Management Program (FedRAMP) and of the US Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) Impact Level (IL) 2, IL4, and IL5. By deploying authorized services in Azure Government, Office 365 GCC High and DoD, and Dynamics 365 US Government, federal and defense agencies can use a rich array of cloud services while meeting their compliance obligations.
+
+## FedRAMP and DoD compliance
-## FedRAMP and US Department of Defense compliance
+Media Services in Azure Public maintains:
-Media Services Public services are compliant with the Department of Defense Cloud Computing Security Requirements Guide 2 (DoD CC SRG IL 2) and FedRAMP High.
+- FedRAMP High Provisional Authorization to Operate (P-ATO)
+- DoD IL2 Provisional Authorization (PA)
-Media Services Government services are compliant with DoD CC SRG IL 2, DoD CC SRG IL 4, DoD CC SRG IL 5, and FedRAMP High.
+Media Services in Azure Government maintains:
-A review of Media Services by 3PAO and JAB isn't planned for 2020.
+- FedRAMP High P-ATO
+- DoD IL2 PA
+- DoD IL4 PA
+- DoD IL5 PA
-Read more about Azure services compliance in the [Azure services by FedRAMP and DoD CC SRG audit scope](../../azure-government/compliance/azure-services-in-fedramp-auditscope.md) article.
+For more information about Azure compliance coverage for US government, see Azure [FedRAMP High](/azure/compliance/offerings/offering-fedramp), [DoD IL2](/azure/compliance/offerings/offering-dod-il2), [DoD IL4](/azure/compliance/offerings/offering-dod-il4), and [DoD IL5](/azure/compliance/offerings/offering-dod-il5) documentation. For FedRAMP and DoD audit scope, see [Cloud services by audit scope](../../azure-government/compliance/azure-services-in-fedramp-auditscope.md).
## Azure compliance documentation
-If your organization needs to comply with legal or regulatory standards for Global, US government, regional, financial services, health, media, and manufacturing, start with the [Azure compliance documentation](../../compliance/index.yml).
+To help you meet your own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry both in terms of breadth (total number of [compliance offerings](/azure/compliance/offerings/)) and depth (number of [customer-facing services](https://azure.microsoft.com/services/) in assessment scope). For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).
-You will also find additional compliance resources such as audit reports, a checklist for privacy and General Data Protection Regulation (GDPR), compliance blueprints, country and regional guidelines, implementation and mappings, as well as white papers and analyst reports.
+Azure compliance offerings are grouped into four segments - globally applicable, US Government, industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. For more information, see [Azure compliance documentation](../../compliance/index.yml). You will also find additional compliance resources such as audit reports, a checklist for privacy and General Data Protection Regulation (GDPR), compliance blueprints, country and regional guidelines, implementation and mappings, as well as white papers and analyst reports.
## Next steps
-> [Azure Media Services overview](media-services-overview.md)
+> [Azure Media Services overview](media-services-overview.md)
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
For all other methods of installation, visit https://suricata.readthedocs.io/en/
``` sudo add-apt-repository ppa:oisf/suricata-stable sudo apt-get update
- sudo sudo apt-get install suricata
+ sudo apt-get install suricata
``` 1. To verify your installation, run the command `suricata -h` to see the full list of commands.
Learn how to visualize your NSG flow logs with Power BI by visiting [Visualize N
[4]: ./media/network-watcher-intrusion-detection-open-source-tools/figure4.png [5]: ./media/network-watcher-intrusion-detection-open-source-tools/figure5.png [6]: ./media/network-watcher-intrusion-detection-open-source-tools/figure6.png
-[7]: ./media/network-watcher-intrusion-detection-open-source-tools/figure7.png
+[7]: ./media/network-watcher-intrusion-detection-open-source-tools/figure7.png
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/traffic-analytics-policy-portal.md
Title: Deploy and manage Traffic Analytics using Azure Policy
description: This article explains how to use the built-in policies to manage the deployment of Traffic Analytics - Last updated 07/11/2021
It is the same as the above policy except that during remediation, it does not overw
- Storage ID: Full resource ID of the storage account. This storage account should be in the same region as the NSG. - Network Watchers RG: Name of the resource group containing your Network Watcher resource. If you have not renamed it, you can enter 'NetworkWatcherRG' which is the default. - Network Watcher name: Name of the regional network watcher service. Format: NetworkWatcher_RegionName. Example: NetworkWatcher_centralus.-- Workspace resource ID: Resource ID of the workspace where Traffic Analytics has to be enabled. Format is "/subscriptions/<SubscriptionID>/resourceGroups/<ResouceGroupName>/providers/Microsoft.Storage/storageAccounts/<StorageAccountName>"
+- Workspace resource ID: Resource ID of the Log Analytics workspace where Traffic Analytics has to be enabled. Format is `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>`
- WorkspaceID: Workspace guid - WorkspaceRegion: Region of the workspace (note that it need not be the same as the region of the NSG) - TimeInterval: Frequency at which processed logs will be pushed into the workspace. Currently allowed values are 60 mins and 10 mins. Default value is 60 mins.
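Outside of Azure Policy, these parameters map to the Network Watcher flow log configuration. The following is a hedged CLI sketch only, with placeholder names for the NSG, storage account, and workspace.

```azurecli
az network watcher flow-log create --location centralus --name MyFlowLog \
  --resource-group MyResourceGroup --nsg MyNsg --storage-account MyStorageAccount \
  --enabled true --traffic-analytics true --interval 10 \
  --workspace "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<WorkspaceName>"
```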
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-networking.md
Azure Database for PostgreSQL - Flexible Server enforces connecting your client
Azure Database for PostgreSQL supports TLS 1.2 and later. In [RFC 8996](https://datatracker.ietf.org/doc/rfc8996/), the Internet Engineering Task Force (IETF) explicitly states that TLS 1.0 and TLS 1.1 must not be used. Both protocols were deprecated by the end of 2019.
-All incoming connections that use earlier versions of the TLS protocol, such as TLS 1.0 and TLS 1.1, will be denied.
+All incoming connections that use earlier versions of the TLS protocol, such as TLS 1.0 and TLS 1.1, will be denied by default.
+
+> [!NOTE]
+> SSL and TLS certificates certify that your connection is secured with state-of-the-art encryption protocols. By encrypting your connection on the wire, you prevent unauthorized access to your data while in transit. This is why we strongly recommend using the latest versions of TLS to encrypt your connections to Azure Database for PostgreSQL - Flexible Server.
+> Although it's not recommended, you can disable TLS/SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the **require_secure_transport** server parameter to OFF. You can also set the TLS version by setting the **ssl_min_protocol_version** and **ssl_max_protocol_version** server parameters.
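A hedged sketch of adjusting these server parameters with the Azure CLI; the resource group and server name are placeholders.

```azurecli
# Not recommended: turn off enforced TLS/SSL for the flexible server.
az postgres flexible-server parameter set --resource-group <resource-group> \
  --server-name <server-name> --name require_secure_transport --value off

# Alternatively, restrict the accepted TLS versions (for example, require TLS 1.2 or later).
az postgres flexible-server parameter set --resource-group <resource-group> \
  --server-name <server-name> --name ssl_min_protocol_version --value TLSv1.2
```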
## Next steps
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-security.md
Multiple layers of security are available to help protect the data on your Azure
Azure Database for PostgreSQL encrypts data in two ways: - **Data in transit**: Azure Database for PostgreSQL encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you may choose to enable [SCRAM authentication](how-to-connect-scram.md).
+ Although it's not recommended, you can disable TLS/SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the **require_secure_transport** server parameter to OFF. You can also set the TLS version by setting the **ssl_min_protocol_version** and **ssl_max_protocol_version** server parameters.
+ - **Data at rest**: For storage encryption, Azure Database for PostgreSQL uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
postgresql Howto Hyperscale Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-logging.md
Previously updated : 8/20/2021 Last updated : 9/13/2021 # Logs in Azure Database for PostgreSQL - Hyperscale (Citus)
Replace the server name in the above example with the name of your server. The
coordinator node name has the suffix `-c` and worker nodes are named with a suffix of `-w0`, `-w1`, and so on.
+The Azure logs can be filtered in different ways. Here's how to find logs
+within the past day whose messages match a regular expression.
+
+```kusto
+AzureDiagnostics
+| where TimeGenerated > ago(24h)
+| where Message matches regex ".*error.*"
+| order by TimeGenerated desc
+```
+ ## Next steps - [Get started with log analytics queries](../azure-monitor/logs/log-analytics-tutorial.md)
postgresql Howto Hyperscale Table Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-hyperscale-table-size.md
of distributed table size is obtained as a sum of shard sizes. Hyperscale
<table> <colgroup>
-<col style="width: 40%" />
-<col style="width: 59%" />
+<col width="40%" />
+<col width="59%" />
</colgroup> <thead> <tr class="header">
purview Catalog Private Link Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-account-portal.md
Last updated 08/18/2021
# Connect privately and securely to your Purview account
-In this guide, you will learn how to deploy private endpoints for your Purview account to allow you to connect to your Azure Purview account only from VNets and private networks. To achieve this goal, you need to deploy an _account_, a _portal_ and _ingestion_ private endpoints for your Azure Purview account.
+In this guide, you will learn how to deploy private endpoints for your Purview account to allow you to connect to your Azure Purview account only from VNets and private networks. To achieve this goal, you need to deploy _account_ and _portal_ private endpoints for your Azure Purview account.
The Azure Purview _account_ private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the Azure Purview account. This private endpoint is also a prerequisite for the portal private endpoint. The Azure Purview _portal_ private endpoint is required to enable connectivity to Azure Purview Studio using a private network.
+> [!NOTE]
+> If you only create _account_ and _portal_ private endpoints, you won't be able to run any scans. To enable scanning on a private network, you will need to [create an ingestion private endpoint also](catalog-private-link-end-to-end.md).
+ :::image type="content" source="media/catalog-private-link/purview-private-link-account-portal.png" alt-text="Diagram that shows Azure Purview and Private Link architecture."::: For more information about Azure Private Link service, see [private links and private endpoints](../private-link/private-endpoint-overview.md) to learn more.
purview Catalog Private Link Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link-ingestion.md
- Title: Scan your data sources privately and securely
-description: This article describes how you can set up a private endpoint to scan data sources from restricted network
----- Previously updated : 08/18/2021
-# Customer intent: As an Azure Purview admin, I want to set up private endpoints for my Azure Purview to scan data sources from restricted network.
--
-# Scan your data sources privately and securely
-
-If you plan to scan your data sources in private networks, virtual networks, and behind private endpoints, Azure Purview ingestion private endpoints must be deployed to ensure network isolation for your metadata flowing from the data sources that are being scanned to the Azure Purview Data Map.
-
-Azure Purview can scan data sources in Azure or an on-premises environment by using _ingestion_ private endpoints. Three private endpoint resources are required to be deployed and linked to Azure Purview managed resources when ingestion private endpoint is deployed:
--- Blob private endpoint is linked to an Azure Purview managed storage account.-- Queue private endpoint is linked to an Azure Purview managed storage account.-- namespace private endpoint is linked to an Azure Purview managed event hub namespace.--
-## Deployment checklist
-Using one of the deployment options from this guide, enable ingestion private endpoints for your Azure Purview account:
-
-1. Choose an appropriate Azure virtual network and a subnet to deploy Azure Purview ingestion private endpoints. Select one of the following options:
- - Deploy a [new virtual network](../virtual-network/quick-create-portal.md) in your Azure subscription.
- - Locate an existing Azure virtual network and a subnet in your Azure subscription.
-
-2. Define an appropriate [DNS name resolution method](./catalog-private-link-name-resolution.md#deployment-options) for ingestion, so Azure Purview can scan data sources using private network. You can use any of the following options:
- - Deploy new Azure DNS zones using the steps explained further in this guide.
- - Add required DNS records to existing Azure DNS zones using the steps explained further in this guide.
- - After completing the steps in this guide, add required DNS A records in your existing DNS servers manually.
-3. Deploy a [new Purview account](#option-1deploy-a-new-azure-purview-account-with-ingestion-private-endpoint) with ingestion private endpoints, or deploy ingestion private endpoints for an [existing Purview account](#option-2enable-ingestion-private-endpoint-on-existing-azure-purview-accounts).
-4. Deploy and register [Self-hosted integration runtime](#deploy-self-hosted-integration-runtime-ir-and-scan-your-data-sources) inside the same VNet where Azure Purview ingestion private endpoints are deployed.
-5. After completing this guide, adjust DNS configurations if needed.
-6. Validate your network and name resolution between management machine, self-hosted IR VM and data sources to Azure Purview.
-
-## Option 1 - Deploy a new Azure Purview account with _ingestion_ private endpoint
-
-1. Go to the [Azure portal](https://portal.azure.com), and then go to the **Purview accounts** page. Select **+ Create** to create a new Azure Purview account.
-
-2. Fill in the basic information, and on the **Networking** tab, set the connectivity method to **Private endpoint**. Set enable private endpoint to **Ingestion only**.
-
- :::image type="content" source="media/catalog-private-link/purview-pe-scenario-2-1.png" alt-text="Screenshot that shows Create private endpoint page selections.":::
-
-3. Set up your ingestion private endpoints by providing details for **Subscription**, **Virtual network**, and **Subnet** that you want to pair with your private endpoint.
-
-4. Optionally, select **Private DNS integration** to use Azure Private DNS Zones.
-
- > [!IMPORTANT]
- > It is important to select correct Azure Private DNS Zones to allow correct name resolution between Azure Purview and data sources. You can also use your existing Azure Private DNS Zones or create DNS records in your DNS Servers manually later. For more information, see [Configure DNS Name Resolution for private endpoints](./catalog-private-link-name-resolution.md)
-
-5. Select **Review + Create**. On the **Review + Create** page, Azure validates your configuration.
-
-6. When you see the "Validation passed" message, select **Create**.
-
- :::image type="content" source="media/catalog-private-link/validation-passed.png" alt-text="Screenshot that shows that validation passed for account creation.":::
-
-## Option 2 - Enable _ingestion_ private endpoint on existing Azure Purview accounts
-
-1. Go to the [Azure portal](https://portal.azure.com), and then click on to your Azure Purview account, under **Settings** select **Networking**, and then select **Ingestion private endpoint connections**.
-
-2. Under Ingestion private endpoint connections, select **+ New** to create a new ingestion private endpoint.
-
-3. Fill in the basic information, selecting your existing virtual network and a subnet details. Optionally, select **Private DNS integration** to use Azure Private DNS Zones.
-
- :::image type="content" source="media/catalog-private-link/ingestion-pe-fill-details.png" alt-text="Screenshot that shows filling in private endpoint details.":::
-
- > [!IMPORTANT]
- > It is important to select correct Azure Private DNS Zones to allow correct name resolution between Azure Purview and data sources. You can also use your existing Azure Private DNS Zones or create DNS records in your DNS Servers manually later.
- >
- >For more information, see [Configure DNS Name Resolution for private endpoints](./catalog-private-link-name-resolution.md).
--
-4. Select **Create** to finish the setup.
-
-> [!NOTE]
-> Ingestion private endpoints can be created only via the Azure Purview portal experience described in the preceding steps. They cannot be created from the Private Link Center.
-
-## Deploy self-hosted integration runtime (IR) and scan your data sources.
-Once you deploy ingestion private endpoints for your Azure Purview, you need to setup and register at least one self-hosted integration runtime (IR):
--- All on-premises source types like Microsoft SQL Server, Oracle, SAP, and others are currently supported only via self-hosted IR-based scans. The self-hosted IR must run within your private network and then be peered with your virtual network in Azure.
-
-- For all Azure source types like Azure Blob Storage and Azure SQL Database, you must explicitly choose to run the scan by using a self-hosted integration runtime that is deployed in the same VNet as Azure Purview ingestion private endpoint. -
-Follow the steps in [Create and manage a self-hosted integration runtime](manage-integration-runtimes.md) to set up a self-hosted IR. Then set up your scan on the Azure source by choosing that self-hosted IR in the **Connect via integration runtime** dropdown list to ensure network isolation.
-
- :::image type="content" source="media/catalog-private-link/shir-for-azure.png" alt-text="Screenshot that shows running an Azure scan by using self-hosted IR.":::
-
-> [!IMPORTANT]
-> If you have created your Azure Purview account after 18th August 2021, make sure you download and install the latest version of self-hosted integration runtime from [Microsoft download center](https://www.microsoft.com/download/details.aspx?id=39717).
->
-## Next steps
--- [Verify resolution for private endpoints](./catalog-private-link-name-resolution.md)-- [Manage data sources in Azure Purview](./manage-data-sources.md)-- [Troubleshooting private endpoint configuration for your Azure Purview account](./catalog-private-link-troubleshoot.md)
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
Before deploying private endpoints for Azure Purview account, ensure you meet th
Use the following recommended checklist to perform deployment of Azure Purview account with private endpoints: - |Scenario |Objectives | |||
-|**Scenario 1** - [Connect privately and securely to your Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Azure Purview account, including access to _Azure Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
-|**Scenario 2** - [Scan your data sources privately and securely](./catalog-private-link-ingestion.md) | You need to scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime. (Deploy _ingestion_ private endpoints.) |
-|**Scenario 3** - [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Azure Purview account only via a private endpoint, including access to Azure Purview Studio, Atlas APIs and scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime ensuring end to end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) |
+|**Scenario 1** - [Connect to your Azure Purview and scan data sources privately and securely](./catalog-private-link-end-to-end.md) |You need to restrict access to your Azure Purview account only via a private endpoint, including access to Azure Purview Studio, Atlas APIs and scan data sources in on-premises and Azure behind a virtual network using self-hosted integration runtime ensuring end to end network isolation. (Deploy _account_, _portal_ and _ingestion_ private endpoints.) |
+|**Scenario 2** - [Connect privately and securely to your Purview account](./catalog-private-link-account-portal.md) | You need to enable access to your Azure Purview account, including access to _Azure Purview Studio_ and Atlas API through private endpoints. (Deploy _account_ and _portal_ private endpoints). |
## Support matrix for Scanning data sources through _ingestion_ private endpoint
To view list of current limitations related to Azure Purview private endpoints,
## Next steps -- [Deploy ingestion private endpoints](./catalog-private-link-ingestion.md)
+- [Deploy end to end private networking](./catalog-private-link-end-to-end.md)
+- [Deploy private networking for the Purview Studio](./catalog-private-link-account-portal.md)
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
This procedure describes how to create a new Purview credential to use when scan
|**Authentication method** |Select **Role ARN**, since you're using a role ARN to access your bucket. | |**Microsoft account ID** |Click to copy this value to the clipboard. Use this value as the **Microsoft account ID** when [creating your Role ARN in AWS](#create-a-new-aws-role-for-purview). | |**External ID** |Click to copy this value to the clipboard. Use this value as the **External ID** when [creating your Role ARN in AWS.](#create-a-new-aws-role-for-purview) |
- |**Role ARN** | Once you've [created your Amazon IAM role](#create-a-new-aws-role-for-purview), navigate to your role in the IAM area, copy the **Role ARN** value, and enter it here. For example: `arn:aws:iam::284759281674:role/S3Role`. <br><br>For more information, see [Retrieve your new Role ARN](#retrieve-your-new-role-arn). |
+ |**Role ARN** | Once you've [created your Amazon IAM role](#create-a-new-aws-role-for-purview), navigate to your role in the IAM area, copy the **Role ARN** value, and enter it here. For example: `arn:aws:iam::181328463391:role/S3Role`. <br><br>For more information, see [Retrieve your new Role ARN](#retrieve-your-new-role-arn). |
| | | Select **Create** when you're done to finish creating the credential.
security-center Defender For Kubernetes Azure Arc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-kubernetes-azure-arc.md
Previously updated : 04/06/2021 Last updated : 09/14/2021
The extension can also protect Kubernetes clusters on other cloud providers, alt
|--|| | Release state | **Preview**<br>[!INCLUDE [Legalese](../../includes/security-center-preview-legal-text.md)]| | Required roles and permissions | [Security admin](../role-based-access-control/built-in-roles.md#security-admin) can dismiss alerts<br>[Security reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings |
-| Pricing | Requires [Azure Defender for Kubernetes](defender-for-kubernetes-introduction.md) |
+| Pricing | Free (during preview) |
| Supported Kubernetes distributions | [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br>[Kubernetes](https://kubernetes.io/docs/home/)<br> [AKS Engine](https://github.com/Azure/aks-engine)<br> [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) | | Limitations | Azure Arc enabled Kubernetes and the Azure Defender extension **don't support** managed Kubernetes offerings like Google Kubernetes Engine and Elastic Kubernetes Service. [Azure Defender is natively available for Azure Kubernetes Service (AKS)](defender-for-kubernetes-introduction.md) and doesn't require connecting the cluster to Azure Arc. | | Environments and regions | Availability for this extension is the same as [Azure Arc enabled Kubernetes](../azure-arc/kubernetes/overview.md)|
This diagram shows the interaction between Azure Defender for Kubernetes and the
## Prerequisites -- Azure Defender for Kubernetes is [enabled on your subscription](enable-azure-defender.md)-- Your Kubernetes cluster is [connected to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)-- You've met the pre-requisites listed under the [generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md#prerequisites).
+Before deploying the extension, ensure you:
+- [Connect the Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md)
+- Complete the [pre-requisites listed under the generic cluster extensions documentation](../azure-arc/kubernetes/extensions.md#prerequisites).
+- Configure **port 443** on the following endpoints for outbound access:
+ - For clusters on Azure Government cloud:
+ - *.ods.opinsights.azure.us
+ - *.oms.opinsights.azure.us
+ - :::no-loc text="login.microsoftonline.us":::
+ - For clusters on other Azure cloud deployments:
+ - *.ods.opinsights.azure.com
+ - *.oms.opinsights.azure.com
+ - :::no-loc text="login.microsoftonline.com":::
## Deploy the Azure Defender extension
security-center Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-sql-on-machines-vulnerability-assessment.md
Previously updated : 05/19/2021 Last updated : 09/14/2021 # Scan your SQL servers for vulnerabilities
The integrated [vulnerability assessment scanner](../azure-sql/database/sql-vuln
## Explore vulnerability assessment reports
-The vulnerability assessment service scans your databases once a week. The scans run on the same day of the week on which you enabled the service.
+The vulnerability assessment service scans your databases every 12 hours.
The vulnerability assessment dashboard provides an overview of your assessment results across all your databases, along with a summary of healthy and unhealthy databases, and an overall summary of failing checks according to risk distribution.
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/tls-certificate-changes.md
tags: azure-resource-manager
Previously updated : 11/10/2020 Last updated : 09/13/2021
All Azure services are impacted by this change. Here are some additional details
- [Azure Active Directory](../../active-directory/index.yml) (Azure AD) services began this transition on July 7, 2020. - [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub) and [DPS](../../iot-dps/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](https://techcommunity.microsoft.com/t5/internet-of-things/azure-iot-tls-changes-are-coming-and-why-you-should-care/ba-p/1658456).-- [Azure Storage](../../storage/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-changes-are-coming-and-why-you-care/ba-p/1705518).
+- For [Azure Storage](../../storage/index.yml), [click here for details](https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581).
- [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](../../azure-cache-for-redis/cache-whats-new.md). - Azure Instance Metadata Service will remain on Baltimore CyberTrust Root CA but their intermediate CAs will change. [Click here for details](/answers/questions/172717/action-required-for-attested-data-tls-with-azure-i.html). > [!IMPORTANT]
-> Customers may need to update their application(s) after this change to prevent connectivity failures when attempting to connect to Azure services.
-
+> Customers may need to update their application(s) after this change to prevent connectivity failures when attempting to connect to Azure Storage. For details, see https://techcommunity.microsoft.com/t5/azure-storage/azure-storage-tls-critical-changes-are-almost-here-and-why-you/ba-p/2741581.
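If an application pins certificates or maintains an explicit CA allow list, it can be useful to see which CA currently issues the certificate for an endpoint it calls. The snippet below is a minimal sketch using only the Python standard library; the hostname is illustrative, so substitute the Azure endpoint your application connects to:

```python
import socket
import ssl

# Illustrative hostname; substitute the Azure endpoint your application calls.
HOST = "login.microsoftonline.com"

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# The issuer shows which intermediate CA signed the server certificate,
# which is the part of the chain changing for several services listed above.
issuer = {name: value for rdn in cert["issuer"] for (name, value) in rdn}
print(f"{HOST} certificate issued by: {issuer.get('organizationName')} / "
      f"{issuer.get('commonName')}")
```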
## What is changing? Today, most of the TLS certificates used by Azure services chain up to the following Root CA:
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The following filtering parameters are available:
| **srcipaddr** | string | Filter only DNS queries from this source IP address. | | **domain_has_any**| dynamic | Filter only DNS queries where the `domain` (or `query`) has any of the listed domain names, including as part of the event domain. | **responsecodename** | string | Filter only DNS queries for which the response code name matches the provided value. For example: NXDOMAIN |
-| **response_has** | string | Filter only DNS queries in which the response field starts with the provided IP address or IP address prefix. Use this parameter when you want to filter on a single IP address or prefix. Results are not returned for sources that do not provide a response.|
+| **response_has_ipv4** | string | Filter only DNS queries in which the response field starts with the provided IP address or IP address prefix. Use this parameter when you want to filter on a single IP address or prefix. Results are not returned for sources that do not provide a response.|
| **response_has_any_prefix** | dynamic| Filter only DNS queries in which the response field starts with any of the listed IP addresses or IP address prefixes. Use this parameter when you want to filter on a list of IP addresses or prefixes. Results are not returned for sources that do not provide a response. |
-| **eventtype**| string | Filter only DNQ queries of the specified type. If no value is specified, only lookup queries are returned. |
+| **eventtype**| string | Filter only DNS queries of the specified type. If no value is specified, only lookup queries are returned. |
|||| To filter results using a parameter, you must specify the parameter in your parser.
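To make the prefix-matching behavior of `response_has_ipv4` and `response_has_any_prefix` concrete, the following Python sketch mimics it over a handful of made-up event records (the `ResponseName` field name and the sample values are assumptions for illustration only, not taken from the schema):

```python
# Hypothetical sample of normalized DNS events; field names and values are
# illustrative only.
events = [
    {"Query": "contoso.com",  "ResponseName": "203.0.113.12"},
    {"Query": "fabrikam.com", "ResponseName": "198.51.100.7"},
    {"Query": "adatum.com",   "ResponseName": ""},  # source provided no response
]

def response_has_any_prefix(events, prefixes):
    """Keep events whose response starts with any listed IP address or prefix,
    mirroring the documented parameter; events without a response are dropped,
    as for sources that do not provide one."""
    return [
        e for e in events
        if e["ResponseName"] and any(e["ResponseName"].startswith(p) for p in prefixes)
    ]

print(response_has_any_prefix(events, ["203.0.113.", "192.0.2."]))
# -> only the contoso.com event is returned
```

Note that in this sketch plain string prefix matching means `203.0.113.1` would also match `203.0.113.12`, so supplying a complete prefix such as `203.0.113.` is the safer pattern.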
sentinel