Updates from: 08/02/2022 01:13:34
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
To set your local account sign-in options at the tenant level:
## Configure your user flow

1. In the left menu of the Azure portal, select **Azure AD B2C**.
-1. Under **Policies**, select **User flows (policies)**.
+1. Under **Policies**, select **User flows**.
1. Select the user flow for which you'd like to configure the sign-up and sign-in experience.
1. Select **Identity providers**.
1. Under **Local accounts**, select one of the following: **Email signup**, **User ID signup**, **Phone signup**, **Phone/Email signup**, or **None**.
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 05/23/2022 Last updated : 08/01/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
+## July 2022
+
+### New articles
+
+- [Configure authentication in a sample React single-page application by using Azure Active Directory B2C](configure-authentication-sample-react-spa-app.md)
+- [Configure authentication options in a React application by using Azure Active Directory B2C](enable-authentication-react-spa-app-options.md)
+- [Enable authentication in your own React Application by using Azure Active Directory B2C](enable-authentication-react-spa-app.md)
+
+### Updated articles
+
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
+- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
+- [Page layout versions](page-layout.md)
+- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
+- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md)
+- [Localization string IDs](localization-string-ids.md)
+
## June 2022

### New articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Billing model for Azure Active Directory B2C](billing.md)
- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
-
-## December 2021
-
-### New articles
-
-- [TOTP display control](display-control-time-based-one-time-password.md)
-- [Set up sign-up and sign-in with a SwissID account using Azure Active Directory B2C](identity-provider-swissid.md)
-- [Set up sign-up and sign-in with a PingOne account using Azure Active Directory B2C](identity-provider-ping-one.md)
-- [Tutorial: Configure Haventec with Azure Active Directory B2C for single step, multifactor passwordless authentication](partner-haventec.md)
-- [Tutorial: Acquire an access token for calling a web API in Azure AD B2C](tutorial-acquire-access-token.md)
-- [Tutorial: Sign in and sign out users with Azure AD B2C in a Node.js web app](tutorial-authenticate-nodejs-web-app-msal.md)
-- [Tutorial: Call a web API protected with Azure AD B2C](tutorial-call-api-with-access-token.md)
-
-### Updated articles
-
-- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
-- [Display controls](display-controls.md)
-- ['Azure AD B2C: Frequently asked questions (FAQ)'](faq.yml)
-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
-- [Define an Azure AD MFA technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md)
-- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
-- [String claims transformations](string-transformations.md)
+- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
The following table outlines when an authentication method can be used during a
| Method | Primary authentication | Secondary authentication |
|--|:-:|:-:|
-| Windows Hello for Business | Yes | MFA |
+| Windows Hello for Business | Yes | MFA\* |
| Microsoft Authenticator app | Yes | MFA and SSPR |
| FIDO2 security key | Yes | MFA |
| OATH hardware tokens (preview) | No | MFA and SSPR |
The following table outlines when an authentication method can be used during a
| Voice call | No | MFA and SSPR |
| Password | Yes | |
+> \* Windows Hello for Business, by itself, doesn't serve as a step-up MFA credential (for example, for an MFA challenge that results from sign-in frequency or from a SAML request that contains forceAuthn=true). Windows Hello for Business can serve as a step-up MFA credential when it's used in FIDO2 authentication, which requires users to be enabled for FIDO2 authentication.
+ All of these authentication methods can be configured in the Azure portal, and increasingly by using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview). To learn more about how each authentication method works, see the following separate conceptual articles:
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-data-residency.md
Previously updated : 02/16/2021 Last updated : 08/01/2022
The Azure AD multifactor authentication service has datacenters in the United St
* Multifactor authentication phone calls originate from datacenters in the customer's region and are routed by global providers. Phone calls using custom greetings always originate from datacenters in the United States.
* General purpose user authentication requests from other regions are currently processed based on the user's location.
-* Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service, might be outside the user's location.
+* Push notifications that use the Microsoft Authenticator app are currently processed in regional datacenters based on the user's location. Vendor-specific device services, such as Apple Push Notification Service or Google Firebase Cloud Messaging, might be outside the user's location.
## Personal data stored by Azure AD multifactor authentication
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
Organizations have to consider permissions management as a central piece of thei
- IT security teams are under increased pressure to ensure access to their expanding cloud estate is secure and compliant.
- The inconsistency of cloud providers' native access management models makes it even more complex for Security and Identity to manage permissions and enforce least privilege access policies across their entire environment.

## Key use cases
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
Title: Configure an app's publisher domain
-description: Learn how to configure an application's publisher domain to let users know where their information is being sent.
+description: Learn how to configure an app's publisher domain to let users know where their information is being sent.
-# Configure an application's publisher domain
+# Configure an app's publisher domain
-An application’s publisher domain informs the users where their information is being sent and acts as an input/prerequisite for [publisher verification](publisher-verification-overview.md). Depending on whether an app is a [multi-tenant app](/azure/architecture/guide/multitenant/overview), when it was registered and it's verified publisher status, either the publisher domain or the verified publisher status will be displayed to the user on the [application's consent prompt](application-consent-experience.md). Multi-tenant applications are applications that support accounts outside of a single organizational directory; for example, support all Azure AD accounts, or support all Azure AD accounts and personal Microsoft accounts.
+An app’s publisher domain informs users where their information is being sent. The publisher domain also acts as an input or prerequisite for [publisher verification](publisher-verification-overview.md).
-## New applications
+In an app's [consent prompt](application-consent-experience.md), either the publisher domain or the publisher verification status appears. Which information is shown depends on whether the app is a [multitenant app](/azure/architecture/guide/multitenant/overview), when the app was registered, and the app's publisher verification status.
-When you register a new app, the publisher domain of your app may be set to a default value. The value depends on where the app is registered, particularly whether the app is registered in a tenant and whether the tenant has tenant verified domains.
+A *multitenant app* is an app that supports user accounts that are outside a single organizational directory. For example, a multitenant app might support all Azure Active Directory (Azure AD) work or school accounts, or it might support both Azure AD work or school accounts and personal Microsoft accounts.
-If there are tenant-verified domains, the app’s publisher domain will default to the primary verified domain of the tenant. If there are no tenant verified domains (which is the case when the application is not registered in a tenant), the app’s publisher domain will be set to null.
+## Understand default publisher domain values
-The following table summarizes the default behavior of the publisher domain value.
+Several factors determine the default value that's set for an app's publisher domain:
-| Tenant-verified domains | Default value of publisher domain |
+- Whether the app is registered in a tenant.
+- Whether a tenant has tenant-verified domains.
+- The app registration date.
+
+### Tenant registration and tenant-verified domains
+
+When you register a new app, the publisher domain of your app might be set to a default value. The default value depends on where the app is registered: specifically, whether the app is registered in a tenant and whether the tenant has tenant-verified domains.
+
+If the tenant has verified domains, the app’s publisher domain defaults to the primary verified domain of the tenant. If the tenant has no verified domains, or the app isn't registered in a tenant, the app’s default publisher domain is null.
+
+The following table uses example scenarios to describe the default values for publisher domain:
+
+| Tenant-verified domain | Default value of publisher domain |
|-|-|
| null | null |
-| *.onmicrosoft.com | *.onmicrosoft.com |
-| - *.onmicrosoft.com<br/>- domain1.com<br/>- domain2.com (primary) | domain2.com |
+| `*.onmicrosoft.com` | `*.onmicrosoft.com` |
+| - `*.onmicrosoft.com`<br/>- `domain1.com`<br/>- `domain2.com` (primary) | `domain2.com` |
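
The default-value rule in the table can be sketched as a small helper function. This is a hypothetical illustration in Python, not part of any Azure SDK; the function name and example domains are assumptions made for the sketch:

```python
from typing import Optional


def default_publisher_domain(verified_domains: list[str],
                             primary: Optional[str] = None) -> Optional[str]:
    # No tenant-verified domains (or the app isn't registered in a tenant): null.
    if not verified_domains:
        return None
    # Otherwise the tenant's primary verified domain is the default.
    if primary and primary in verified_domains:
        return primary
    # With a single verified domain (for example, *.onmicrosoft.com), use it.
    return verified_domains[0]


# Examples mirroring the table rows:
assert default_publisher_domain([]) is None
assert default_publisher_domain(["contoso.onmicrosoft.com"]) == "contoso.onmicrosoft.com"
assert default_publisher_domain(
    ["contoso.onmicrosoft.com", "domain1.com", "domain2.com"],
    primary="domain2.com") == "domain2.com"
```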
+
+### App registration date
+
+An app's registration date also determines the app's default publisher domain values.
+
+If your multitenant app was registered *between May 21, 2019, and November 30, 2020*:
+
+- If the app's publisher domain isn't set, or if it's set to a domain that ends in `.onmicrosoft.com`, the app's consent prompt shows *unverified* for the publisher domain value.
+- If the app has a verified app domain, the consent prompt shows the verified domain.
+- If the app is publisher verified, the consent prompt shows a [blue *verified* badge](publisher-verification-overview.md) that indicates the status.
-1. If your multi-tenant was registered between **May 21, 2019 and November 30, 2020**:
- - If the application's publisher domain isn't set, or if it's set to a domain that ends in .onmicrosoft.com, the app's consent prompt will show **unverified** in place of the publisher domain.
- - If the application has a verified app domain, the consent prompt will show the verified domain.
- - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same
-2. If your multi-tenant was registered after **November 30, 2020**:
- - If the application is not publisher verified, the app will show as "**unverified**" in the consent prompt (i.e, no publisher domain related info is shown)
- - If the application is publisher verified, it will show a [blue "verified" badge](publisher-verification-overview.md) indicating the same
-## Grandfathered applications
+If your multitenant app was registered *after November 30, 2020*:
-If your app was registered **before May 21, 2019**, your application's consent prompt will not show **unverified** even if you have not set a publisher domain. We recommend that you set the publisher domain value so that users can see this information on your app's consent prompt.
+- If the app isn't publisher verified, the consent prompt for the app shows *unverified*. No publisher domain-related information appears.
+- If the app is publisher verified, the app consent prompt shows a [blue *verified* badge](publisher-verification-overview.md).
-## Configure publisher domain using the Azure portal
+#### Apps created before May 21, 2019
-To set your app's publisher domain, follow these steps.
+If your app was registered *before May 21, 2019*, your app's consent prompt shows *unverified*, even if you haven't set a publisher domain. We recommend that you set the publisher domain value so that users can see this information in your app's consent prompt.
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which the app is registered.
-1. Navigate to [Azure Active Directory > App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) to find and select the app that you want to configure.
+## Set a publisher domain in the Azure portal
- Once you've selected the app, you'll see the app's **Overview** page.
-1. Under **Manage**, select the **Branding**.
-1. Find the **Publisher domain** field and select one of the following options:
+To set a publisher domain for your app by using the Azure portal:
- - Select **Configure a domain** if you haven't configured a domain already.
- - Select **Update domain** if a domain has already been configured.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the portal global menu to select the tenant where the app is registered.
+1. In Azure Active Directory, go to [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908). Search for and select the app you want to configure.
+1. In **Overview**, in the resource menu under **Manage**, select **Branding**.
+1. In **Publisher domain**, select one of the following options:
-If your app is registered in a tenant, you'll see two tabs to select from: **Select a verified domain** and **Verify a new domain**.
+ - If you haven't already configured a domain, select **Configure a domain**.
+ - If you have configured a domain, select **Update domain**.
-If your domain isn't registered in the tenant, you'll only see the option to verify a new domain for your application.
+1. If your app is registered in a tenant, select one of the following two options:
-### To verify a new domain for your app
+ - **Select a verified domain**
+ - **Verify a new domain**
-1. Create a file named `microsoft-identity-association.json` and paste the following JSON code snippet.
+ If your domain isn't registered in the tenant, only the option to verify a new domain for your app appears.
+
+### Verify a new domain for your app
+
+To verify a new publisher domain for your app:
+
+1. Create a file named *microsoft-identity-association.json*. Copy the following JSON and paste it in the *microsoft-identity-association.json* file:
```json
{
  "associatedApplications": [
    {
- "applicationId": "{YOUR-APP-ID-HERE}"
+ "applicationId": "<your-app-id>"
    },
    {
- "applicationId": "{YOUR-OTHER-APP-ID-HERE}"
+ "applicationId": "<another-app-id>"
    }
  ]
}
```
-1. Replace the placeholder *{YOUR-APP-ID-HERE}* with the application (client) ID that corresponds to your app.
-1. Host the file at: `https://{YOUR-DOMAIN-HERE}.com/.well-known/microsoft-identity-association.json`. Replace the placeholder *{YOUR-DOMAIN-HERE}* to match the verified domain.
-1. Click the **Verify and save domain** button.
+1. Replace `<your-app-id>` with the application (client) ID for your app. Use all relevant app IDs if you're verifying a new domain for multiple apps.
+1. Host the file at `https://<your-domain>.com/.well-known/microsoft-identity-association.json`. Replace `<your-domain>` with the name of the verified domain.
+1. Select **Verify and save domain**.
-You're not required to maintain the resources that are used for verification after a domain has been verified. When the verification is finished, you can remove the hosted file.
+You're not required to maintain the resources that are used for verification after you verify a domain. When verification is finished, you can remove the hosted file.
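
For illustration, the hosted file's body can be generated from a list of application (client) IDs. This is a plain-Python sketch with an assumed helper name, and the GUID shown is a placeholder:

```python
import json


def build_identity_association(app_ids: list[str]) -> str:
    # The shape must match the hosted microsoft-identity-association.json file.
    doc = {"associatedApplications": [{"applicationId": app_id}
                                      for app_id in app_ids]}
    return json.dumps(doc, indent=2)


body = build_identity_association(["00000000-0000-0000-0000-000000000001"])
assert json.loads(body) == {
    "associatedApplications": [
        {"applicationId": "00000000-0000-0000-0000-000000000001"}
    ]
}
```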
-### To select a verified domain
+### Select a verified domain
-If your tenant has verified domains, select one of the domains from the **Select a verified domain** dropdown.
+If your tenant has verified domains, in the **Select a verified domain** dropdown, select one of the domains.
> [!NOTE]
-> The expected `Content-Type` header that should be returned is `application/json`. You may get an error if you use anything else, like `application/json; charset=utf-8`:
+> The `Content-Type` header that's returned must be exactly `application/json`. If any other value is returned, like `application/json; charset=utf-8`, you might see this error message:
>
> `Verification of publisher domain failed. Error getting JSON file from https:///.well-known/microsoft-identity-association. The server returned an unexpected content type header value.`
>
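
Before you select **Verify and save domain**, you can pre-check the hosted file offline. The following sketch is a hypothetical helper, not the service's actual validation logic; it applies the strict `Content-Type` match described in the note and confirms that your app ID appears in the file:

```python
import json


def check_association_response(content_type: str, body: str, app_id: str) -> bool:
    # Strict match: 'application/json; charset=utf-8' would fail verification.
    if content_type.strip().lower() != "application/json":
        return False
    doc = json.loads(body)
    ids = [entry.get("applicationId")
           for entry in doc.get("associatedApplications", [])]
    return app_id in ids


sample = '{"associatedApplications": [{"applicationId": "1234"}]}'
assert check_association_response("application/json", sample, "1234")
assert not check_association_response("application/json; charset=utf-8", sample, "1234")
assert not check_association_response("application/json", sample, "9999")
```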
-## Implications on the app consent prompt
+## Publisher domain and the app consent prompt
-Configuring the publisher domain has an impact on what users see on the app consent prompt. To fully understand the components of the consent prompt, see [Understanding the application consent experiences](application-consent-experience.md).
+Configuring the publisher domain affects what users see in the app consent prompt. For more information about the components of the consent prompt, see [Understand the application consent experience](application-consent-experience.md).
-The following table describes the behavior for applications created before May 21, 2019.
+The following figure shows how publisher domain appears in app consent prompts for apps that were created before May 21, 2019:
-![Table that shows consent prompt behavior for apps created before May 21, 2019.](./media/howto-configure-publisher-domain/old-app-behavior-table.png)
-The behavior for applications created between May 21, 2019 and November 30, 2020 will depend on the publisher domain and the type of application. The following table describes what is shown on the consent prompt with the different combinations of configurations.
+For apps that were created between May 21, 2019, and November 30, 2020, how the publisher domain appears in an app's consent prompt depends on the publisher domain and the type of app. The following figure shows what appears on the consent prompt for different combinations of configurations:
-![Table that shows consent prompt behavior for apps created between May 21, 2019 and Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-table.png)
-For multi-tenant applications created after November 30, 2020, only publisher verification status is surfaced in the consent prompt. The following table describes what is shown on the consent prompt depending on whether an app is verified or not. Consent prompt for single tenant applications will remain the same as above.
+For multitenant apps that were created after November 30, 2020, only publisher verification status is shown in an app's consent prompt. The following table describes what appears in a consent prompt depending on whether an app is verified. The consent prompt for single-tenant apps remains the same.
-![Table that shows consent prompt behavior for apps created after Nov 30, 2020.](./media/howto-configure-publisher-domain/new-app-behavior-publisher-verification-table.png)
-## Implications on redirect URIs
+## Publisher domain and redirect URIs
-Applications that sign in users with any work or school account, or personal Microsoft accounts (multi-tenant) are subject to few restrictions when specifying redirect URIs.
+Apps that sign in users by using any work or school account or by using a Microsoft account (multitenant) are subject to a few restrictions in redirect URIs.
### Single root domain restriction
-When the publisher domain value for multi-tenant apps is set to null, apps are restricted to share a single root domain for the redirect URIs. For example, the following combination of values isn't allowed because the root domain, contoso.com, doesn't match fabrikam.com.
+When the publisher domain value for a multitenant app is set to null, the app is restricted to sharing a single root domain for the redirect URIs. For example, the following combination of values isn't allowed because the root domain `contoso.com` doesn't match the root domain `fabrikam.com`.
-```
-"https://contoso.com",
+```json
+"https://contoso.com",
"https://fabrikam.com",
```

### Subdomain restrictions
-Subdomains are allowed, but you must explicitly register the root domain. For example, while the following URIs share a single root domain, the combination isn't allowed.
+Subdomains are allowed, but you must explicitly register the root domain. For example, although the following URIs share a single root domain, the combination isn't allowed:
-```
+```json
"https://app1.contoso.com",
"https://app2.contoso.com",
```
-However, if the developer explicitly adds the root domain, the combination is allowed.
+But if the developer explicitly adds the root domain, the combination is allowed:
-```
+```json
"https://contoso.com",
"https://app1.contoso.com",
"https://app2.contoso.com",
```
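
The single-root-domain and subdomain rules above can be sketched as a check. This is a naive illustration with assumed helper names: `root_domain` simply takes the last two host labels, whereas a production check would consult the Public Suffix List:

```python
from urllib.parse import urlparse


def root_domain(uri: str) -> str:
    # Naive root domain: last two labels of the host (illustration only).
    host = urlparse(uri).hostname or ""
    return ".".join(host.split(".")[-2:])


def redirect_uris_allowed(uris: list[str]) -> bool:
    # All redirect URIs must share a single root domain, and if any URI
    # uses a subdomain, the root domain itself must be registered too.
    roots = {root_domain(u) for u in uris}
    if len(roots) > 1:
        return False
    root = roots.pop() if roots else ""
    hosts = {urlparse(u).hostname for u in uris}
    has_subdomains = any(h != root for h in hosts)
    return not has_subdomains or root in hosts


assert not redirect_uris_allowed(["https://contoso.com", "https://fabrikam.com"])
assert not redirect_uris_allowed(["https://app1.contoso.com", "https://app2.contoso.com"])
assert redirect_uris_allowed(["https://contoso.com", "https://app1.contoso.com",
                              "https://app2.contoso.com"])
```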
-### Exceptions
+### Restriction exceptions
The following cases aren't subject to the single root domain restriction:
-
-- Single tenant apps, or apps that target accounts in a single directory
-- Use of localhost as redirect URIs
-- Redirect URIs with custom schemes (non-HTTP or HTTPS)
+- Single-tenant apps or apps that target accounts in a single directory.
+- Use of localhost as redirect URIs.
+- Redirect URIs that have custom schemes (non-HTTP or HTTPS).
## Configure publisher domain programmatically
-Currently, there is no REST API or PowerShell support to configure publisher domain programmatically.
+Currently, you can't use REST API or PowerShell to programmatically set a publisher domain.
+
+## Next steps
+
+- Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+- [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Title: Publisher verification overview
-description: Provides an overview of the publisher verification program for the Microsoft identity platform. Lists the benefits, program requirements, and frequently asked questions. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
+description: Learn about benefits, program requirements, and frequently asked questions in the publisher verification program for the Microsoft identity platform.
# Publisher verification
-Publisher verification helps admins and end users understand the authenticity of application developers integrating with the Microsoft identity platform.
+Publisher verification gives app users and organization admins information about the authenticity of a developer who publishes an app that integrates with the Microsoft identity platform.
-> [!VIDEO https://www.youtube.com/embed/IYRN2jDl5dc]
+An app that's publisher verified means that the app's publisher has verified their identity with Microsoft. Identity verification includes using a [Microsoft Partner Network (MPN)](https://partner.microsoft.com/membership) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
+
+When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
-When an application is marked as publisher verified, it means that the publisher has verified their identity using a [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process and has associated this MPN account with their application registration.
-A blue "verified" badge appears on the Azure AD consent prompt and other screens:
+The following video describes the process:
-![Consent prompt](./media/publisher-verification-overview/consent-prompt.png)
+> [!VIDEO https://www.youtube.com/embed/IYRN2jDl5dc]
-This feature is primarily for developers building multi-tenant apps that leverage [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These apps can sign users in using OpenID Connect, or they may use OAuth 2.0 to request access to data using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).
+Publisher verification primarily is for developers who build multitenant apps that use [OAuth 2.0 and OpenID Connect](active-directory-v2-protocols.md) with the [Microsoft identity platform](v2-overview.md). These types of apps can sign in a user by using OpenID Connect, or they can use OAuth 2.0 to request access to data by using APIs like [Microsoft Graph](https://developer.microsoft.com/graph/).
## Benefits
-Publisher verification provides the following benefits:
-- **Increased transparency and risk reduction for customers**- this capability helps customers understand which apps being used in their organizations are published by developers they trust.
-- **Improved branding**- a “verified” badge appears on the Azure AD [consent prompt](application-consent-experience.md), Enterprise Apps page, and additional UX surfaces used by end users and admins.
+Publisher verification for an app has the following benefits:
+
+- **Increased transparency and risk reduction for customers**. Publisher verification helps customers identify apps that are published by developers they trust to reduce risk in the organization.
+
+- **Improved branding**. A blue *verified* badge appears in the Azure AD app [consent prompt](application-consent-experience.md), on the enterprise apps page, and in other app elements that users and admins see.
-- **Smoother enterprise adoption**- admins can configure [user consent policies](../manage-apps/configure-user-consent.md), with publisher verification status as one of the primary policy criteria.
+- **Smoother enterprise adoption**. Organization admins can configure [user consent policies](../manage-apps/configure-user-consent.md) that include publisher verification status as primary policy criteria.
> [!NOTE]
-> - Starting in November 2020, end users will no longer be able to grant consent to most newly registered multi-tenant apps without verified publishers if [risk-based step-up consent](../manage-apps/configure-risk-based-step-up-consent.md) is enabled. This will apply to apps that are registered after November 8, 2020, use OAuth2.0 to request permissions beyond basic sign-in and read user profile, and request consent from users in different tenants than the one the app is registered in. A warning will be displayed on the consent screen informing users that these apps are risky and are from unverified publishers.
+> Beginning in November 2020, if [risk-based step-up consent](../manage-apps/configure-risk-based-step-up-consent.md) is enabled, users can't consent to most newly registered multitenant apps that *aren't* publisher verified. The policy applies to apps that were registered after November 8, 2020, that use OAuth 2.0 to request permissions beyond basic sign-in and reading the user profile, and that request consent from users in tenants other than the tenant where the app is registered. In this scenario, a warning on the consent screen informs the user that the app is from an unverified publisher and might be risky.
## Requirements
-There are a few pre-requisites for publisher verification, some of which will have already been completed by many Microsoft partners. They are:
-- An MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. This MPN account must be the [Partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for your organization. (**NOTE**: It can't be the Partner Location MPN ID. Location MPN IDs aren't currently supported)
+App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements.
-- The application to be publisher verified must be registered using a Azure AD account. Applications registered using a Microsoft personal account aren't supported for publisher verification.
+- The developer must have an MPN ID for a valid [Microsoft Partner Network](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.
-- The Azure AD tenant where the app is registered must be associated with the Partner Global account. If it's not the primary tenant associated with the PGA, follow the steps to [set up the MPN partner global account as a multi-tenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
+ > [!NOTE]
+ > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process.
-- An app registered in an Azure AD tenant, with a [Publisher Domain](howto-configure-publisher-domain.md) configured.
+- The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified.
-- The domain of the email address used during MPN account verification must either match the publisher domain configured on the app or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) added to the Azure AD tenant.
+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).
-- The user performing verification must be authorized to make changes to both the app registration in Azure AD and the MPN account in Partner Center.
+- The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set.
- - In Azure AD this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
+- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant.
- - In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or a Global Admin (this is a shared role mastered in Azure AD).
-
-- The user performing verification must sign in using [multi-factor authentication](../authentication/howto-mfa-getstarted.md).
+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center.
-- The publisher agrees to the [Microsoft identity platform for developers Terms of Use](/legal/microsoft-identity-platform/terms-of-use).
+ - In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Admin.
-Developers who have already met these pre-requisites can get verified in a matter of minutes. If the requirements have not been met, getting set up is free.
+ - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Admin (a shared role that's mastered in Azure AD).
+
+- The user who initiates verification must sign in by using [multifactor authentication](../authentication/howto-mfa-getstarted.md).
-## National Clouds and Publisher Verification
-Publisher verification is currently not supported in national clouds. Applications registered in national cloud tenants can't be publisher-verified at this time.
+- The publisher must consent to the [Microsoft identity platform for developers Terms of Use](/legal/microsoft-identity-platform/terms-of-use).
-## Frequently asked questions
-Below are some frequently asked questions regarding the publisher verification program. For FAQs related to the requirements and the process, see [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+Developers who have already met these requirements can be verified in minutes. There's no charge for completing the prerequisites for publisher verification.
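One requirement above, the email-domain match, lends itself to a small check. A hedged sketch under the assumption that inputs are plain strings; the helper name is hypothetical:

```python
def mpn_email_domain_acceptable(mpn_email: str, publisher_domain: str,
                                verified_custom_domains: list) -> bool:
    """The domain of the MPN verification email must match the app's publisher
    domain or a DNS-verified custom domain added to the tenant."""
    domain = mpn_email.rsplit("@", 1)[-1].lower()
    allowed = {publisher_domain.lower()} | {d.lower() for d in verified_custom_domains}
    return domain in allowed
```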
-- **What information does publisher verification __not__ provide?** When an application is marked publisher verified this does not indicate whether the application or its publisher has achieved any specific certifications, complies with industry standards, adheres to best practices, etc. Other Microsoft programs do provide this information, including [Microsoft 365 App Certification](/microsoft-365-app-certification/overview).
+## Publisher verification in national clouds
-- **How much does this cost? Does it require any license?** Microsoft does not charge developers for publisher verification and it does not require any specific license.
+Publisher verification currently isn't supported in national clouds. Apps that are registered in national cloud tenants can't be publisher verified at this time.
-- **How does this relate to Microsoft 365 Publisher Attestation? What about Microsoft 365 App Certification?** These are complementary programs that developers can use to create trustworthy apps that can be confidently adopted by customers. Publisher verification is the first step in this process, and should be completed by all developers creating apps that meet the above criteria.
+## Frequently asked questions
- Developers who are also integrating with Microsoft 365 can receive additional benefits from these programs. For more information, refer to [Microsoft 365 Publisher Attestation](/microsoft-365-app-certification/docs/attestation) and [Microsoft 365 App Certification](/microsoft-365-app-certification/docs/certification).
+Review frequently asked questions about the publisher verification program. For common questions about requirements and the process, see [Mark an app as publisher verified](mark-app-as-publisher-verified.md).
-- **Is this the same thing as the Azure AD Application Gallery?** No- publisher verification is a complementary but separate program to the [Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md). Developers who fit the above criteria should complete the publisher verification process independently of participation in that program.
+- **What does publisher verification *not* tell me about the app or its publisher?** The blue *verified* badge doesn't imply or indicate quality criteria you might look for in an app. For example, you might want to know whether the app or its publisher has specific certifications, complies with industry standards, or adheres to best practices. Publisher verification doesn't give you this information. Other Microsoft programs, like [Microsoft 365 App Certification](/microsoft-365-app-certification/overview), do provide this information.
+
+- **How much does publisher verification cost for the app developer? Does it require a license?** Microsoft doesn't charge developers for publisher verification. No license is required to become a verified publisher.
+
+- **How does publisher verification relate to Microsoft 365 Publisher Attestation and Microsoft 365 App Certification?** [Microsoft 365 Publisher Attestation](/microsoft-365-app-certification/docs/attestation) and [Microsoft 365 App Certification](/microsoft-365-app-certification/docs/certification) are complementary programs that help developers publish trustworthy apps that customers can confidently adopt. Publisher verification is the first step in this process. All developers who create apps that meet the criteria for completing Microsoft 365 Publisher Attestation or Microsoft 365 App Certification should complete publisher verification. The combined programs can give developers who integrate their apps with Microsoft 365 even more benefits.
+
+- **Is publisher verification the same as the Azure Active Directory application gallery?** No. Publisher verification complements the [Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md), but it's a separate program. Developers who fit the publisher verification criteria should complete publisher verification independently of participating in the Azure Active Directory application gallery or other programs.
## Next steps
-* Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
-* [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
+
+- Learn how to [mark an app as publisher verified](mark-app-as-publisher-verified.md).
+- [Troubleshoot](troubleshoot-publisher-verification.md) publisher verification.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 07/04/2022 Last updated : 08/01/2022
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## July 2022
+
+### New articles
+
+- [Configure SAML app multi-instancing for an application in Azure Active Directory](reference-app-multi-instancing.md)
+
+### Updated articles
+
+- [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md)
+- [Application configuration options](msal-client-application-configuration.md)
+- [A web API that calls web APIs: Code configuration](scenario-web-api-call-api-app-configuration.md)
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
+- [Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md)
+- [Microsoft identity platform access tokens](access-tokens.md)
+- [Single-page application: Sign-in and Sign-out](scenario-spa-sign-in.md)
+- [Tutorial: Add sign-in to Microsoft to an ASP.NET web app](tutorial-v2-asp-webapp.md)
+ ## June 2022 ### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Single sign-on with MSAL.js](msal-js-sso.md) - [Tutorial: Sign in users and acquire a token for Microsoft Graph in a Node.js & Express web app](tutorial-v2-nodejs-webapp-msal.md) - [What's new for authentication?](reference-breaking-changes.md)-
-## March 2022
-
-### New articles
-
-- [Secure access control using groups in Azure AD](secure-group-access-control.md)
-
-### Updated articles
-
-- [Authentication flow support in MSAL](msal-authentication-flows.md)
-- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
-- [Configure an app to trust an external identity provider (preview)](workload-identity-federation-create-trust.md)
-- [OAuth 2.0 and OpenID Connect in the Microsoft identity platform](active-directory-v2-protocols.md)
-- [Signing key rollover in the Microsoft identity platform](active-directory-signing-key-rollover.md)
-- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
You might get the following error message when you initiate a remote desktop con
![Screenshot of the message that says your account is configured to prevent you from using this device.](./media/howto-vm-sign-in-azure-ad-windows/rbac-role-not-assigned.png)
-Verify that you've [configured Azure RBAC policies](../../virtual-machines/linux/login-using-aad.md) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.
+Verify that you've [configured Azure RBAC policies](#configure-role-assignments-for-the-vm) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.
> [!NOTE] > If you're having problems with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#limits).
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Previously updated : 06/24/2022 Last updated : 08/01/2022
Groups created in | Security group default behavior | Microsoft 365 group defaul
## Make a group available for user self-service
-1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Privileged Role Administrator role for the directory.
+1. Sign in to the [Azure portal](https://portal.azure.com) or [Azure AD admin center](https://aad.portal.azure.com) with an account that's been assigned the Global Administrator or Groups Administrator role for the directory.
1. Select **Groups**, and then select **General** settings.
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
# Product names and service plan identifiers for licensing
-When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft does not plan to update them for newly added services periodically.
+When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/LicensesMenuBlade/Products) or the [Microsoft 365 admin center](https://admin.microsoft.com), you see product names that look something like *Office 365 E3*. When you use PowerShell v1.0 cmdlets, the same product is identified using a specific but less friendly name: *ENTERPRISEPACK*. When using PowerShell v2.0 cmdlets or [Microsoft Graph](/graph/api/resources/subscribedsku), the same product is identified using a GUID value: *6fd2c87f-b296-42f0-b197-1e91e994b900*. The following table lists the most commonly used Microsoft online service products and provides their various ID values. These tables are for reference purposes in Azure Active Directory (Azure AD), part of Microsoft Entra, and are accurate only as of the date when this article was last updated. Microsoft will continue to make periodic updates to this document.
- **Product name**: Used in management portals - **String ID**: Used by PowerShell v1.0 cmdlets when performing operations on licenses or by the **skuPartNumber** property of the **subscribedSku** Microsoft Graph API
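Because the same product is identified three different ways, scripts often need to normalize whichever identifier they receive. A hypothetical sketch using the one example row given above (Office 365 E3); a real script would load the full table:

```python
# One example row from the table in this article; extend with more rows as needed.
SKUS = [
    {
        "product_name": "Office 365 E3",   # used in management portals
        "string_id": "ENTERPRISEPACK",     # used by PowerShell v1.0 cmdlets
        "guid": "6fd2c87f-b296-42f0-b197-1e91e994b900",  # used by Microsoft Graph
    },
]

def sku_guid(identifier: str):
    """Resolve a product name, string ID, or GUID to the GUID; None if unknown."""
    for sku in SKUS:
        if identifier in (sku["product_name"], sku["string_id"], sku["guid"]):
            return sku["guid"]
    return None
```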
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Microsoft cloud settings let you collaborate with organizations from different M
- Microsoft Azure global cloud and Microsoft Azure Government - Microsoft Azure global cloud and Microsoft Azure China 21Vianet
+> [!NOTE]
+> Microsoft Azure Government includes the Office GCC-High and DoD clouds.
+ To set up B2B collaboration, both organizations configure their Microsoft cloud settings to enable the partner's cloud. Then each organization uses the partner's tenant ID to find and add the partner to their organizational settings. From there, each organization can allow their default cross-tenant access settings to apply to the partner, or they can configure partner-specific inbound and outbound settings. After you establish B2B collaboration with a partner in another cloud, you'll be able to: - Use B2B collaboration to invite a user in the partner tenant to access resources in your organization, including web line-of-business apps, SaaS apps, and SharePoint Online sites, documents, and files.
+- Use B2B collaboration to [share Power BI content to a user in the partner tenant](https://docs.microsoft.com/power-bi/enterprise/service-admin-azure-ad-b2b#cross-cloud-b2b).
- Apply Conditional Access policies to the B2B collaboration user and opt to trust device claims (compliant claims and hybrid Azure AD joined claims) from the user's home tenant. > [!NOTE]
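The precedence described above (partner-specific settings when configured, otherwise the defaults) can be sketched as a simple lookup; the names are illustrative, not an actual Azure AD API:

```python
def effective_cross_tenant_settings(defaults: dict, partner_overrides: dict,
                                    partner_tenant_id: str) -> dict:
    """Partner-specific inbound/outbound settings win; otherwise defaults apply."""
    return partner_overrides.get(partner_tenant_id, defaults)
```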
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When a guest signs in to a resource in a partner organization for the first time
1. The guest reviews the **Review permissions** page describing the inviting organization's privacy statement. A user must **Accept** the use of their information in accordance to the inviting organization's privacy policies to continue.
- ![Screenshot showing the Review permissions page](media/redemption-experience/review-permissions.png)
+ ![Screenshot showing the Review permissions page.](media/redemption-experience/new-review-permissions.png)
> [!NOTE] > For information about how you as a tenant administrator can link to your organization's privacy statement, see [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md).
active-directory Entitlement Management Access Package Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md
If you need to add resources to an access package, you should check whether the
![List of resources in a catalog](./media/entitlement-management-access-package-resources/catalog-resources.png)
-1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog).
+1. If the resources aren't already in the catalog, and you're an administrator or a catalog owner, you can [add resources to a catalog](entitlement-management-catalog-create.md#add-resources-to-a-catalog). The types of resources you can add are groups, applications, and SharePoint Online sites. For example:
+
+* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give users access to an application that uses AD security group memberships, create a new group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md). Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
+* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. If your application has not yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md).
+* Sites can be SharePoint Online sites or SharePoint Online site collections.
1. If you are an access package manager and you need to add resources to the catalog, you can ask the catalog owner to add them.
If you need to add resources to an access package, you should check whether the
A resource role is a collection of permissions associated with a resource. Resources can be made available for users to request if you add resource roles from each of the catalog's resources to your access package. You can add resource roles that are provided by groups, teams, applications, and SharePoint sites. When a user receives an assignment to an access package, they'll be added to all the resource roles in the access package.
-If you don't want users to receive all of the roles, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
+If you want some users to receive different roles than others, then you'll need to create multiple access packages in the catalog, with separate access packages for each of the resource roles. You can also mark the access packages as [incompatible](entitlement-management-access-package-incompatible.md) with each other so users can't request access to access packages that would give them excessive access.
**Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
$catalog = New-MgEntitlementManagementAccessPackageCatalog -DisplayName "Marketi
## Add resources to a catalog
-To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites. For example:
+To include resources in an access package, the resources must exist in a catalog. The types of resources you can add are groups, applications, and SharePoint Online sites.
-* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
-* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
+* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups.
+
+ * Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. To give a user access to an application that uses AD security group memberships, create a new security group in Azure AD, configure [group writeback to AD](../hybrid/how-to-connect-group-writeback-v2.md), and [enable that group to be written to AD](../enterprise-users/groups-write-back-portal.md), so that the cloud-created group can be used by an AD-based application.
+
+ * Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either, so they can't be added to catalogs.
+
+* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD.
+
+ * If your application has not yet been integrated with Azure AD, see [govern access for applications in your environment](identity-governance-applications-prepare.md) and [integrate an application with Azure AD](identity-governance-applications-integrate.md).
+
+ * For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
* Sites can be SharePoint Online sites or SharePoint Online site collections. > [!NOTE] > Search SharePoint Site by site name or an exact URL as the search box is case sensitive.
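The group eligibility rules above amount to a couple of exclusions. A hedged sketch, assuming hypothetical flags on a group record; the real checks happen inside entitlement management:

```python
def group_can_be_catalog_resource(group: dict) -> bool:
    """Only cloud-created Microsoft 365 or Azure AD security groups qualify."""
    if group.get("on_premises_synced"):
        # Owner/member attributes of AD-originated groups can't be changed in Azure AD.
        return False
    if group.get("is_exchange_distribution_group"):
        # Exchange Online distribution groups can't be modified in Azure AD either.
        return False
    return True
```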
active-directory Concept Azure Ad Connect Sync Default Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/concept-azure-ad-connect-sync-default-configuration.md
The following attribute rules apply:
### Contact out-of-box rules A contact object must satisfy the following to be synchronized:
+* Must have a mail attribute value.
* The contact must be mail-enabled. It is verified with the following rules:
  * `IsPresent([proxyAddresses]) = True`. The proxyAddresses attribute must be populated.
  * A primary email address can be found in either the proxyAddresses attribute or the mail attribute. The presence of an \@ is used to verify that the content is an email address. One of these two rules must be evaluated to True.
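The out-of-box contact rules can be expressed as one predicate. An illustrative sketch, assuming a dictionary-shaped contact and the common Exchange convention that the primary SMTP address in proxyAddresses uses an uppercase `SMTP:` prefix:

```python
def contact_is_synchronized(contact: dict) -> bool:
    """Check the out-of-box rules: mail present, proxyAddresses populated,
    and a primary email address found in proxyAddresses or mail."""
    mail = contact.get("mail") or ""
    proxies = contact.get("proxyAddresses") or []
    if not mail:
        return False  # must have a mail attribute value
    if not proxies:
        return False  # IsPresent([proxyAddresses]) must be True
    # The presence of an @ is used to verify that the content is an email address.
    primary_in_proxies = any(p.startswith("SMTP:") and "@" in p for p in proxies)
    return primary_in_proxies or "@" in mail
```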
active-directory Configure Authentication For Federated Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFe
```json "HomeRealmDiscoveryPolicy": {
-"AccelerateToFederatedDomain": true
+ "AccelerateToFederatedDomain": true
} ``` ::: zone-end
New-AzureADPolicy -Definition @("{`"HomeRealmDiscoveryPolicy`":{`"AccelerateToFe
```json "HomeRealmDiscoveryPolicy": {
-"AccelerateToFederatedDomain": true
-"PreferredDomain": ["federated.example.edu"]
+ "AccelerateToFederatedDomain": true,
+ "PreferredDomain": [
+ "federated.example.edu"
+ ]
} ``` ::: zone-end
The following policy enables username/password authentication for federated user
```json "EnableDirectAuthPolicy": {
-"AllowCloudPasswordValidation": true
+ "AllowCloudPasswordValidation": true
} ```
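The definition payloads shown above can also be assembled programmatically before passing them to `New-AzureADPolicy -Definition` or a Graph request. A minimal Python sketch; the function name and defaults are illustrative:

```python
import json

def hrd_policy_definition(accelerate: bool = True, preferred_domain: str = None) -> str:
    """Build a HomeRealmDiscoveryPolicy definition as a JSON string."""
    policy = {"HomeRealmDiscoveryPolicy": {"AccelerateToFederatedDomain": accelerate}}
    if preferred_domain:
        policy["HomeRealmDiscoveryPolicy"]["PreferredDomain"] = [preferred_domain]
    return json.dumps(policy, indent=2)
```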
Set the HRD policy using Microsoft Graph. See [homeRealmDiscoveryPolicy](/graph/
From the Microsoft Graph explorer window:
-1. Grant the Policy.ReadWrite.ApplicationConfiguration permission under the **Modify permissions** tab.
+1. Grant consent to the *Policy.ReadWrite.ApplicationConfiguration* permission.
1. Use the URL https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies
-1. POST the new policy to this URL, or PATCH to /policies/homerealmdiscoveryPolicies/{policyID} if overwriting an existing one.
+1. POST the new policy to this URL, or PATCH to https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies/{policyID} if overwriting an existing one.
1. POST or PATCH contents: ```json
From the Microsoft Graph explorer window:
1. To see your new policy and get its ObjectID, run the following query: ```http
- GET policies/homeRealmDiscoveryPolicies
+ GET https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies
``` 1. To delete the HRD policy you created, run the query: ```http
- DELETE /policies/homeRealmDiscoveryPolicies/{policy objectID}
+ DELETE https://graph.microsoft.com/v1.0/policies/homeRealmDiscoveryPolicies/{policy objectID}
``` ::: zone-end ## Next steps
-[Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md).
+[Prevent sign-in auto-acceleration](prevent-domain-hints-with-home-realm-discovery.md).
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 07/04/2022 Last updated : 08/01/2022
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## July 2022
+
+### New articles
+
+- [Create an enterprise application from a multi-tenant application in Azure Active Directory](create-service-principal-cross-tenant.md)
+- [Deletion and recovery of applications FAQ](delete-recover-faq.yml)
+- [Recover deleted applications in Azure Active Directory FAQs](recover-deleted-apps-faq.md)
+- [Restore an enterprise application in Azure AD](restore-application.md)
+- [SAML Request Signature Verification (Preview)](howto-enforce-signed-saml-authentication.md)
+- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-azure-ad-integration.md)
+- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-azure-ad-sso-oracle-jde.md)
+
+### Updated articles
+
+- [Delete an enterprise application](delete-application-portal.md)
+- [Configure Azure Active Directory SAML token encryption](howto-saml-token-encryption.md)
+- [Review permissions granted to applications](manage-application-permissions.md)
+- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Datawiza](datawiza-with-azure-ad.md)
+ ## June 2022 ### Updated articles
Welcome to what's new in Azure Active Directory (Azure AD) application managemen
- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md) - [Tutorial: Migrate Okta federation to Azure AD-managed authentication](migrate-okta-federation-to-azure-active-directory.md) - [Tutorial: Migrate Okta sync provisioning to Azure AD Connect-based synchronization](migrate-okta-sync-provisioning-to-azure-active-directory.md)-
-## March 2022
-
-### New articles
-
-- [Overview of admin consent workflow](admin-consent-workflow-overview.md)
-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
-
-### Updated articles
-
-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)
-- [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
-- [Integrate F5 BIG-IP with Azure AD](f5-aad-integration.md)
-- [Manage app consent policies](manage-app-consent-policies.md)
-- [Plan Azure AD My Apps configuration](my-apps-deployment-plan.md)
-- [Quickstart: View enterprise applications](view-applications-portal.md)
-- [Review admin consent requests](review-admin-consent-requests.md)
-- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP SSO](f5-big-ip-ldap-header-easybutton.md)
-- [Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos SSO](f5-big-ip-kerberos-easy-button.md)
-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
-- [Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
-- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
ms.devlang:
Previously updated : 02/23/2022 Last updated : 07/27/2022
Managed identities use certificate-based authentication. Each managed identity
In short, yes, you can use user assigned managed identities in more than one Azure region. The longer answer is that while user assigned managed identities are created as regional resources, the associated [service principal](../develop/app-objects-and-service-principals.md#service-principal-object) (SP) created in Azure AD is available globally. The service principal can be used from any Azure region and its availability is dependent on the availability of Azure AD. For example, if you created a user assigned managed identity in the South-Central region and that region becomes unavailable, this issue only impacts [control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md) activities on the managed identity itself. The activities performed by any resources already configured to use the managed identities wouldn't be impacted.
-### Does managed identities for Azure resources work with Azure Cloud Services?
+### Do managed identities for Azure resources work with Azure Cloud Services (Classic)?
-No, there are no plans to support managed identities for Azure resources in Azure Cloud Services.
+Managed identities for Azure resources don't support [Azure Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) at this time.
### What is the security boundary of managed identities for Azure resources?
active-directory Aws Single Sign On Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-provisioning-tutorial.md
Title: 'Tutorial: Configure AWS single sign-On for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS single sign-On.
+ Title: 'Tutorial: Configure AWS IAM Identity Center (successor to AWS Single Sign-On) for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to AWS IAM Identity Center.
documentationcenter: ''
Last updated 02/23/2021
-# Tutorial: Configure AWS single sign-On for automatic user provisioning
+# Tutorial: Configure AWS IAM Identity Center for automatic user provisioning
-This tutorial describes the steps you need to perform in both AWS single sign-On and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS single sign-On](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+This tutorial describes the steps you need to perform in both AWS IAM Identity Center (successor to AWS Single Sign-On) and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [AWS IAM Identity Center](https://console.aws.amazon.com/singlesignon) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
## Capabilities Supported > [!div class="checklist"]
-> * Create users in AWS single sign-On
-> * Remove users in AWS single sign-On when they no longer require access
-> * Keep user attributes synchronized between Azure AD and AWS single sign-On
-> * Provision groups and group memberships in AWS single sign-On
-> * [single sign-On](aws-single-sign-on-tutorial.md) to AWS single sign-On
+> * Create users in AWS IAM Identity Center
+> * Remove users in AWS IAM Identity Center when they no longer require access
+> * Keep user attributes synchronized between Azure AD and AWS IAM Identity Center
+> * Provision groups and group memberships in AWS IAM Identity Center
+> * [Single sign-on](aws-single-sign-on-tutorial.md) to AWS IAM Identity Center
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* A SAML connection from your Azure AD account to AWS single sign-On, as described in Tutorial
+* A SAML connection from your Azure AD account to AWS IAM Identity Center, as described in Tutorial
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and AWS single sign-On](../app-provisioning/customize-application-attributes.md).
+3. Determine what data to [map between Azure AD and AWS IAM Identity Center](../app-provisioning/customize-application-attributes.md).
-## Step 2. Configure AWS single sign-On to support provisioning with Azure AD
+## Step 2. Configure AWS IAM Identity Center to support provisioning with Azure AD
-1. Open the [AWS single sign-On](https://console.aws.amazon.com/singlesignon).
+1. Open the [AWS IAM Identity Center](https://console.aws.amazon.com/singlesignon).
2. Choose **Settings** in the left navigation pane
The scenario outlined in this tutorial assumes that you already have the followi
![Screenshot of enabling automatic provisioning.](media/aws-single-sign-on-provisioning-tutorial/automatic-provisioning.png)
-4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token** (visible after clicking on Show Token). These values will be entered in the **Tenant URL** and **Secret Token** field in the Provisioning tab of your AWS single sign-On application in the Azure portal.
+4. In the Inbound automatic provisioning dialog box, copy and save the **SCIM endpoint** and **Access Token** (visible after clicking **Show Token**). These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your AWS IAM Identity Center application in the Azure portal.
![Screenshot of extracting provisioning configurations.](media/aws-single-sign-on-provisioning-tutorial/inbound-provisioning.png)
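The **Tenant URL** and **Secret Token** captured in step 4 are simply a SCIM 2.0 base URL and a bearer token. As a hedged illustration (the tenant URL below is a placeholder, not a real endpoint), a SCIM request against such an endpoint would carry headers like these:

```python
def scim_headers(access_token):
    """Headers for a request to a SCIM 2.0 endpoint, such as the one
    AWS IAM Identity Center exposes (RFC 7644 media type)."""
    return {
        "Authorization": "Bearer " + access_token,
        "Accept": "application/scim+json",
        "Content-Type": "application/scim+json",
    }

# Placeholder values: the real SCIM endpoint and token come from the
# Inbound automatic provisioning dialog box in step 4.
tenant_url = "https://scim.example.amazonaws.com/scim/v2/"
headers = scim_headers("example-access-token")
```

The Azure AD provisioning service builds these requests for you; the sketch is only to show what the two values you copied represent.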
-## Step 3. Add AWS single sign-On from the Azure AD application gallery
+## Step 3. Add AWS IAM Identity Center from the Azure AD application gallery
-Add AWS single sign-On from the Azure AD application gallery to start managing provisioning to AWS single sign-On. If you have previously setup AWS single sign-On for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add AWS IAM Identity Center from the Azure AD application gallery to start managing provisioning to AWS IAM Identity Center. If you have previously set up AWS IAM Identity Center for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
The Azure AD provisioning service allows you to scope who will be provisioned ba
* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-## Step 5. Configure automatic user provisioning to AWS single sign-On
+## Step 5. Configure automatic user provisioning to AWS IAM Identity Center
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in TestApp based on user and/or group assignments in Azure AD.
-### To configure automatic user provisioning for AWS single sign-On in Azure AD:
+### To configure automatic user provisioning for AWS IAM Identity Center in Azure AD:
1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **AWS single sign-On**.
+2. In the applications list, select **AWS IAM Identity Center**.
- ![Screenshot of the AWS single sign-On link in the Applications list.](common/all-applications.png)
+ ![Screenshot of the AWS IAM Identity Center link in the Applications list.](common/all-applications.png)
3. Select the **Provisioning** tab.
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your AWS single sign-On **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS single sign-On.
+5. Under the **Admin Credentials** section, input your AWS IAM Identity Center **Tenant URL** and **Secret Token** retrieved earlier in Step 2. Click **Test Connection** to ensure Azure AD can connect to AWS IAM Identity Center.
![Token](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
7. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS single sign-On**.
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to AWS IAM Identity Center**.
-9. Review the user attributes that are synchronized from Azure AD to AWS single sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS single sign-On for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS single sign-On API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in AWS IAM Identity Center for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the AWS IAM Identity Center API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
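Step 9's requirement that the matching attribute support filtering exists because the provisioning service locates existing accounts with a SCIM `filter` query. A sketch of such a lookup, assuming `userName` is the matching attribute and using a hypothetical endpoint:

```python
from urllib.parse import quote

def scim_user_filter(base_url, attribute, value):
    """Build a SCIM 2.0 /Users request URL filtering on one attribute,
    the kind of lookup a provisioning service performs when matching
    existing accounts on the matching attribute."""
    escaped = value.replace('"', '\\"')  # escape quotes per SCIM filter grammar
    filt = '{0} eq "{1}"'.format(attribute, escaped)
    return "{0}/Users?filter={1}".format(base_url.rstrip("/"), quote(filt))

# Hypothetical endpoint; the real tenant URL comes from the Identity Center
# automatic provisioning settings.
url = scim_user_filter("https://scim.example.com/scim/v2",
                       "userName", "B.Simon@contoso.com")
```

If the target API cannot filter on the chosen attribute, this lookup fails, which is why changing the matching attribute requires checking filtering support first.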
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS single sign-On**.
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to AWS IAM Identity Center**.
-11. Review the group attributes that are synchronized from Azure AD to AWS single sign-On in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS single sign-On for update operations. Select the **Save** button to commit any changes.
+11. Review the group attributes that are synchronized from Azure AD to AWS IAM Identity Center in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in AWS IAM Identity Center for update operations. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for Filtering|
|---|---|---|
This section guides you through the steps to configure the Azure AD provisioning
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for AWS single sign-On, change the **Provisioning Status** to **On** in the **Settings** section.
+13. To enable the Azure AD provisioning service for AWS IAM Identity Center, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to AWS single sign-On by choosing the desired values in **Scope** in the **Settings** section.
+14. Define the users and/or groups that you would like to provision to AWS IAM Identity Center by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
There are two ways to resolve this
2. Remove the duplicate attributes. For example, having two different attributes being mapped from Azure AD both mapped to "phoneNumber___" on the AWS side would result in the error if both attributes have values in Azure AD. Only having one attribute mapped to a "phoneNumber____ " attribute would resolve the error. ### Invalid characters
-Currently AWS single sign-On is not allowing some other characters that Azure AD supports like tab (\t), new line (\n), return carriage (\r), and characters such as " <|>|;|:% ".
+Currently, AWS IAM Identity Center doesn't allow some characters that Azure AD supports, such as tab (\t), newline (\n), carriage return (\r), and the characters " <|>|;|:% ".
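A pre-provisioning cleanup step can strip the characters listed above before they reach the attribute mappings. This sketch hard-codes exactly the set named in this article; verify the currently rejected set against AWS documentation before relying on it.

```python
# Characters this article lists as rejected by AWS IAM Identity Center:
# tab, newline, carriage return, and < > ; : %
INVALID_CHARS = set('\t\n\r<>;:%')

def sanitize(value):
    """Strip characters that AWS IAM Identity Center rejects but
    Azure AD permits in attribute values."""
    return ''.join(ch for ch in value if ch not in INVALID_CHARS)
```

For example, `sanitize("Sales;\tEMEA<1>")` drops the semicolon, tab, and angle brackets while leaving the rest of the value intact.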
-You can also check the AWS single sign-On troubleshooting tips [here](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting) for more troubleshooting tips
+For more help, see the AWS IAM Identity Center troubleshooting tips [here](https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html#azure-ad-troubleshooting).
## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-On with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
## Next steps
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with AWS Single Sign-On'
-description: Learn how to configure single sign-on between Azure Active Directory and AWS Single Sign-On.
+ Title: 'Tutorial: Azure AD SSO integration with AWS IAM Identity Center (successor to AWS Single Sign-On)'
+description: Learn how to configure single sign-on between Azure Active Directory and AWS IAM Identity Center (successor to AWS Single Sign-On).
Previously updated : 07/15/2022 Last updated : 07/29/2022
-# Tutorial: Azure AD SSO integration with AWS Single Sign-On
+# Tutorial: Azure AD SSO integration with AWS IAM Identity Center
-In this tutorial, you'll learn how to integrate AWS Single Sign-On with Azure Active Directory (Azure AD). When you integrate AWS Single Sign-On with Azure AD, you can:
+In this tutorial, you'll learn how to integrate AWS IAM Identity Center (successor to AWS Single Sign-On) with Azure Active Directory (Azure AD). When you integrate AWS IAM Identity Center with Azure AD, you can:
-* Control in Azure AD who has access to AWS Single Sign-On.
-* Enable your users to be automatically signed-in to AWS Single Sign-On with their Azure AD accounts.
+* Control in Azure AD who has access to AWS IAM Identity Center.
+* Enable your users to be automatically signed in to AWS IAM Identity Center with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate AWS Single Sign-On with Azure Ac
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* AWS Single Sign-On enabled subscription.
+* AWS IAM Identity Center enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS Single Sign-On supports **SP and IDP** initiated SSO.
+* AWS IAM Identity Center supports **SP and IDP** initiated SSO.
-* AWS Single Sign-On supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
+* AWS IAM Identity Center supports [**Automated user provisioning**](./aws-single-sign-on-provisioning-tutorial.md).
-## Add AWS Single Sign-On from the gallery
+## Add AWS IAM Identity Center from the gallery
-To configure the integration of AWS Single Sign-On into Azure AD, you need to add AWS Single Sign-On from the gallery to your list of managed SaaS apps.
+To configure the integration of AWS IAM Identity Center into Azure AD, you need to add AWS IAM Identity Center from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **AWS Single Sign-On** in the search box.
-1. Select **AWS Single Sign-On** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box.
+1. Select **AWS IAM Identity Center** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for AWS Single Sign-On
+## Configure and test Azure AD SSO for AWS IAM Identity Center
-Configure and test Azure AD SSO with AWS Single Sign-On using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single Sign-On.
+Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
-To configure and test Azure AD SSO with AWS Single Sign-On, perform the following steps:
+To configure and test Azure AD SSO with AWS IAM Identity Center, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create AWS Single Sign-On test user](#create-aws-single-sign-on-test-user)** - to have a counterpart of B.Simon in AWS Single Sign-On that is linked to the Azure AD representation of user.
+1. **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create AWS IAM Identity Center test user](#create-aws-iam-identity-center-test-user)** - to have a counterpart of B.Simon in AWS IAM Identity Center that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **AWS Single Sign-On** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **AWS IAM Identity Center** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
a. Click **Upload metadata file**.
- b. Click on **folder logo** to select metadata file, which is explained to download in **[Configure AWS Single Sign-On SSO](#configure-aws-single-sign-on-sso)** section and click **Add**.
+ b. Click the **folder logo** to select the metadata file, which you download as described in the **[Configure AWS IAM Identity Center SSO](#configure-aws-iam-identity-center-sso)** section, and click **Add**.
![image2](common/browse-upload-metadata.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://portal.sso.<REGION>.amazonaws.com/saml/assertion/<ID>` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS Single Sign-On Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [AWS IAM Identity Center Client support team](mailto:aws-sso-partners@amazon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
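The Identifier, Reply URL, and Sign-on URL all follow fixed patterns in which only `<REGION>` and `<ID>` vary. A small helper that fills in the Reply URL pattern shown above, with hypothetical values for illustration:

```python
def reply_url(region, instance_id):
    """Fill in the Reply URL pattern from the Basic SAML Configuration
    section; <REGION> and <ID> come from your own IAM Identity Center
    instance (the values below are placeholders)."""
    return ("https://portal.sso.{0}.amazonaws.com/saml/assertion/{1}"
            .format(region, instance_id))

# Hypothetical region and instance ID, not real values.
url = reply_url("us-east-1", "MTIzNDU2Nzg5MDEy")
```

The real values still have to come from the AWS support team or your Identity Center console, as the note above says; the helper only documents where each placeholder sits in the pattern.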
-1. AWS Single Sign-On application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. AWS IAM Identity Center application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/edit-attribute.png) > [!NOTE]
- > If ABAC is enabled in AWS Single Sign-On, the additional attributes may be passed as session tags directly into AWS accounts.
+ > If ABAC is enabled in AWS IAM Identity Center, the additional attributes may be passed as session tags directly into AWS accounts.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate(Base64)** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up AWS Single Sign-On** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up AWS IAM Identity Center** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS Single Sign-On.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to AWS IAM Identity Center.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **AWS Single Sign-On**.
+1. In the applications list, select **AWS IAM Identity Center**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure AWS Single Sign-On SSO
+## Configure AWS IAM Identity Center SSO
-1. To automate the configuration within AWS Single Sign-On, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+1. To automate the configuration within AWS IAM Identity Center, you need to install the **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
![My apps extension](common/install-myappssecure-extension.png)
-2. After adding extension to the browser, click on **Set up AWS Single Sign-On** will direct you to the AWS Single Sign-On application. From there, provide the admin credentials to sign into AWS Single Sign-On. The browser extension will automatically configure the application for you and automate steps 3-10.
+2. After adding the extension to the browser, clicking **Set up AWS IAM Identity Center** will direct you to the AWS IAM Identity Center application. From there, provide the admin credentials to sign in to AWS IAM Identity Center. The browser extension will automatically configure the application for you and automate steps 3-10.
![Setup configuration](common/setup-sso.png)
-3. If you want to setup AWS Single Sign-On manually, in a different web browser window, sign in to your AWS Single Sign-On company site as an administrator.
+3. If you want to set up AWS IAM Identity Center manually, in a different web browser window, sign in to your AWS IAM Identity Center company site as an administrator.
-1. Go to the **Services -> Security, Identity, & Compliance -> AWS Single Sign-On**.
+1. Go to the **Services -> Security, Identity, & Compliance -> AWS IAM Identity Center**.
2. In the left navigation pane, choose **Settings**.
-3. On the **Settings** page, find **Identity source** and click on **Change**.
+3. On the **Settings** page, find **Identity source**, click the **Actions** pull-down menu, and select **Change identity source**.
![Screenshot for Identity source change service](./media/aws-single-sign-on-tutorial/settings.png)
-4. On the Change identity source, choose **External identity provider**.
+4. On the Change identity source page, choose **External identity provider**.
![Screenshot for selecting external identity provider section](./media/aws-single-sign-on-tutorial/external-identity-provider.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
![Screenshot for download and upload metadata section](./media/aws-single-sign-on-tutorial/upload-metadata.png)
- a. In the **Service provider metadata** section, find **AWS SSO SAML metadata** and select **Download metadata file** to download the metadata file and save it on your computer and use this metadata file to upload on Azure portal.
+ a. In the **Service provider metadata** section, find **AWS SSO SAML metadata**, select **Download metadata file** to download the metadata file, save it on your computer, and use this metadata file to upload in the Azure portal.
- b. Copy **AWS SSO Sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
+ b. Copy **AWS access portal sign-in URL** value, paste this value into the **Sign on URL** text box in the **Basic SAML Configuration section** in the Azure portal.
- c. In the **Identity provider metadata** section, choose **Browse** to upload the metadata file, which you have downloaded from the Azure portal.
+ c. In the **Identity provider metadata** section, select **Choose file** to upload the metadata file which you have downloaded from the Azure portal.
d. Choose **Next: Review**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
9. Click **Change identity source**.
-### Create AWS Single Sign-On test user
+### Create AWS IAM Identity Center test user
-1. Open the **AWS SSO console**.
+1. Open the **AWS IAM Identity Center console**.
2. In the left navigation pane, choose **Users**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
b. In the **Email address** field, enter the `username@companydomain.extension`. For example, `B.Simon@contoso.com`.
- c. In the **Confirm email address** field, reenter the email address from the previous step.
+ c. In the **Confirm email address** field, re-enter the email address from the previous step.
d. In the First name field, enter `Jane`.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
f. In the Display name field, enter `Jane Doe`.
- g. Choose **Next: Groups**.
+ g. Choose **Next**, and then **Next** again.
> [!NOTE]
- > Make sure the username entered in AWS SSO matches the userΓÇÖs Azure AD sign-in name. This will you help avoid any authentication problems.
+ > Make sure the username entered in AWS IAM Identity Center matches the user's Azure AD sign-in name. This will help you avoid any authentication problems.
5. Choose **Add user**. 6. Next, you will assign the user to your AWS account. To do so, in the left navigation pane of the
-AWS SSO console, choose **AWS accounts**.
+AWS IAM Identity Center console, choose **AWS accounts**.
7. On the AWS Accounts page, select the AWS organization tab, check the box next to the AWS account you want to assign to the user. Then choose **Assign users**. 8. On the Assign Users page, find and check the box next to the user B.Simon. Then choose **Next:
permission set**.
> [!NOTE] > Permission sets define the level of access that users and groups have to an AWS account. To learn more
-about permission sets, see the AWS SSO **Permission Sets** page.
+about permission sets, see the **AWS IAM Identity Center Multi Account Permissions** page.
10. Choose **Finish**. > [!NOTE]
-> AWS Single Sign-On also supports automatic user provisioning, you can find more details [here](./aws-single-sign-on-provisioning-tutorial.md) on how to configure automatic user provisioning.
+> AWS IAM Identity Center also supports automatic user provisioning; you can find more details on how to configure it [here](./aws-single-sign-on-provisioning-tutorial.md).
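Because the note in the user-creation steps above warns that the Identity Center username must match the Azure AD sign-in name, a trivial pre-check can catch mismatches before testing SSO. The case-insensitive comparison is an assumption on my part, based on UPNs not being case-sensitive in practice:

```python
def usernames_match(identity_center_username, azure_ad_sign_in_name):
    """Pre-check that the AWS IAM Identity Center username matches the
    Azure AD sign-in name (UPN). Comparison is case-insensitive, since
    UPNs are not case-sensitive; leading/trailing whitespace is ignored."""
    return (identity_center_username.strip().casefold()
            == azure_ad_sign_in_name.strip().casefold())
```

Running this against the test user's two names before the SSO test saves a round of failed-authentication debugging.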
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to AWS Single Sign-On sign-in URL where you can initiate the login flow.
+* Click **Test this application** in the Azure portal. This will redirect you to the AWS IAM Identity Center sign-in URL, where you can initiate the login flow.
-* Go to AWS Single Sign-On sign-in URL directly and initiate the login flow from there.
+* Go to AWS IAM Identity Center sign-in URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the AWS Single Sign-On for which you set up the SSO.
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the AWS IAM Identity Center instance for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the AWS Single Sign-On tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the AWS Single Sign-On for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the AWS IAM Identity Center tile in My Apps, if the app is configured in SP mode you're redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode you're automatically signed in to the AWS IAM Identity Center instance for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure AWS Single Sign-On you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure AWS IAM Identity Center, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
active-directory Cisco Umbrella User Management Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-umbrella-user-management-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Cisco Umbrella User Management](../app-provisioning/customize-application-attributes.md). ## Step 2. Import ObjectGUID attribute via Azure AD Connect (Optional)
-If you have previously provisioned user identities from on-premise AD to Cisco Umbrella and would now like to provision the same users from Azure AD, you will need to synchronize the ObjectGUID attribute so that previously provisioned identities persist in the Umbrella reporting. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
+If your endpoints are running AnyConnect or the Cisco Secure Client version 4.10 MR5 or earlier, you will need to synchronize the ObjectGUID attribute for user identity attribution. You will need to reconfigure any Umbrella policy on groups after importing groups from Azure AD.
> [!NOTE] > The on-premises Umbrella AD Connector should be turned off before importing the ObjectGUID attribute.
When using Microsoft Azure AD Connect, the ObjectGUID attribute of users is not
![Screenshot that shows the "Directory extensions" selection page](./media/cisco-umbrella-user-management-provisioning-tutorial/active-directory-connect-directory-extensions.png)
+> [!NOTE]
+> This step is not required if all your endpoints are running Cisco Secure Client or AnyConnect version 4.10 MR6 or higher.
+ ## Step 3. Configure Cisco Umbrella User Management to support provisioning with Azure AD 1. Log in to [Cisco Umbrella dashboard](https://login.umbrella.com ). Navigate to **Deployments** > **Core Identities** > **Users and Groups**.
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
aks Use Cvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-cvm.md
+
+ Title: Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create Confidential Virtual Machines (CVM) node pools with Azure Kubernetes Service (AKS)
++ Last updated : 08/01/2022+++
+# Use Confidential Virtual Machines (CVM) in Azure Kubernetes Service (AKS) cluster (Preview)
+
+You can use the generally available [confidential VM sizes (DCav5/ECav5)][cvm-announce] to add a node pool to your AKS cluster with CVM. Confidential VMs with AMD SEV-SNP support bring a new set of security features to protect data-in-use with full VM memory encryption. These features enable node pools with CVM to target the migration of highly sensitive container workloads to AKS without any code refactoring while benefiting from the features of AKS. The nodes in a node pool created with CVM use a customized Ubuntu 20.04 image specially configured for CVM. For more details on CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
+
+Adding a node pool with CVM to your AKS cluster is currently in preview.
++
+## Before you begin
+
+- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+- [Azure CLI installed](/cli/azure/install-azure-cli).
+- An existing AKS cluster in the *westus*, *eastus*, *westeurope*, or *northeurope* region.
+- The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs available for your subscription.
+
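Before adding the node pool, you can check whether the required confidential VM SKU is actually available to your subscription in your target region. This is an illustrative check only; the region and size below are example values, not values prescribed by this article:

```azurecli-interactive
# List availability (and any subscription restrictions) for an example CVM size
# in an example region. Substitute your own region and SKU.
az vm list-skus \
  --location eastus \
  --size Standard_DC4as_v5 \
  --output table
```

An empty result, or a restriction such as `NotAvailableForSubscription` in the output, indicates the SKU can't be used in that region for your subscription.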
+## Limitations
+
+The following limitations apply when adding a node pool with CVM to AKS:
+
+- You can't use `--enable-fips-image`, ARM64, or Mariner.
+- You can't upgrade an existing node pool to use CVM.
+- The [DCasv5 and DCadsv5-series][cvm-subs-dc] or [ECasv5 and ECadsv5-series][cvm-subs-ec] SKUs must be available for your subscription in the region where the cluster is created.
+
+## Add a node pool with CVM to AKS
+
+To add a node pool with CVM to AKS, use `az aks nodepool add` and set `node-vm-size` to `Standard_DC4as_v5`. For example:
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --node-count 3 \
+ --node-vm-size Standard_DC4as_v5
+```
+
+## Verify the node pool uses CVM
+
+To verify a node pool uses CVM, use `az aks nodepool show` and verify the `vmSize` is `Standard_DC4as_v5`. For example:
+
+```azurecli-interactive
+az aks nodepool show \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --query 'vmSize'
+```
+
+The following example command and output show that the node pool uses CVM:
+
+```output
+az aks nodepool show \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool \
+ --query 'vmSize'
+
+"Standard_DC4as_v5"
+```
+
+## Remove a node pool with CVM from an AKS cluster
+
+To remove a node pool with CVM from an AKS cluster, use `az aks nodepool delete`. For example:
+
+```azurecli-interactive
+az aks nodepool delete \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name cvmnodepool
+```
+
+## Next steps
+
+In this article, you learned how to add a node pool with CVM to an AKS cluster. For more information about CVM, see [Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs][cvm].
+
+<!-- LINKS - Internal -->
+[cvm]: ../confidential-computing/confidential-node-pool-aks.md
+[cvm-announce]: https://techcommunity.microsoft.com/t5/azure-confidential-computing/azure-confidential-vms-using-sev-snp-dcasv5-ecasv5-are-now/ba-p/3573747
+[cvm-subs-dc]: ../virtual-machines/dcasv5-dcadsv5-series.md
+[cvm-subs-ec]: ../virtual-machines/ecasv5-ecadsv5-series.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
A workload may require splitting a cluster's nodes into separate pools for logic
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original cidr. AKS will error out on the agent pool add now though we originally allowed it. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command will perform an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* If you expand your VNET after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR. AKS will now return an error on the agent pool add, though it was originally allowed. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command will perform an update operation without making any changes, which can recover a cluster stuck in a failed state.
* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
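The no-op update described above can be sketched as follows; the resource group and cluster names are placeholders, not values from this article:

```azurecli-interactive
# Run an update with no optional arguments. This performs an update operation
# without changing the cluster, which can recover it from a failed state
# (requires aks-preview extension version 0.5.66 or later).
az aks update -g myResourceGroup -n myAKSCluster
```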
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 5/23/2022 Last updated : 7/29/2022
At this time, App Service Environment migrations to v3 using the migration featu
- East US 2 - France Central - Germany West Central
+- Japan East
- Korea Central - North Central US - North Europe
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 07/28/2022 Last updated : 07/29/2022
App Service Environment v3 is available in the following regions:
| Region | Normal and dedicated host | Availability zone support | | -- | :-: | :-: |
-| Australia East | x | x |
-| Australia Southeast | x | |
-| Brazil South | x | x |
-| Canada Central | x | x |
-| Canada East | x | |
-| Central India | x | x |
-| Central US | x | x |
-| East Asia | x | x |
-| East US | x | x |
-| East US 2 | x | x |
-| France Central | x | x |
-| Germany West Central | x | x |
-| Japan East | x | x |
-| Korea Central | x | x |
-| North Central US | x | |
-| North Europe | x | x |
-| Norway East | x | x |
-| South Africa North | x | x |
-| South Central US | x | x |
-| Southeast Asia | x | x |
-| Switzerland North | x | |
-| UAE North | x | |
-| UK South | x | x |
-| UK West | x | |
-| West Central US | x | |
-| West Europe | x | x |
-| West US | x | |
-| West US 2 | x | x |
-| West US 3 | x | x |
+| Australia East | x | x |
+| Australia Southeast | x | |
+| Brazil South | x | x |
+| Canada Central | x | x |
+| Canada East | x | |
+| Central India | x | x |
+| Central US | x | x |
+| East Asia | x | x |
+| East US | x | x |
+| East US 2 | x | x |
+| France Central | x | x |
+| Germany West Central | x | x |
+| Japan East | x | x |
+| Korea Central | x | x |
+| North Central US | x | |
+| North Europe | x | x |
+| Norway East | x | x |
+| South Africa North | x | x |
+| South Central US | x | x |
+| Southeast Asia | x | x |
+| Sweden Central | x | x |
+| Switzerland North | x | x |
+| UAE North | x | |
+| UK South | x | x |
+| UK West | x | |
+| West Central US | x | |
+| West Europe | x | x |
+| West US | x | |
+| West US 2 | x | x |
+| West US 3 | x | x |
### Azure Government:
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
# Azure App Service diagnostics overview
-When you're running a web application, you want to be prepared for any issues that may arise, from 500 errors to your users telling you that your site is down. App Service diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. When you do run into issues with your app, App Service diagnostics points out what's wrong to guide you to the right information to more easily and quickly troubleshoot and resolve the issue.
+When you're running a web application, you want to be prepared for any issues that may arise, from 500 errors to your users telling you that your site is down. App Service diagnostics is an intelligent and interactive experience to help you troubleshoot your app with no configuration required. If you do run into issues with your app, App Service diagnostics points out what's wrong to guide you to the right information to more easily and quickly troubleshoot and resolve the issue.
Although this experience is most helpful when you're having issues with your app within the last 24 hours, all the diagnostic graphs are always available for you to analyze.
To access App Service diagnostics, navigate to your App Service web app or App S
For Azure Functions, navigate to your function app, and in the top navigation, click on **Platform features**, and select **Diagnose and solve problems** from the **Resource management** section.
-In the App Service diagnostics homepage, you can choose the category that best describes the issue with your app by using the keywords in each homepage tile. Also, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
+In the App Service diagnostics homepage, you can perform a search for a symptom with your app, or choose a diagnostic category that best describes the issue with your app. You can also use the Risk Alerts feature, which provides an actionable report to improve your app. Finally, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
-![Homepage](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png)
+![App Service Diagnose and solve problems homepage with diagnostic search box, Risk Alerts assessments, and Troubleshooting categories for discovering diagnostics for the selected Azure Resource.](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png)
> [!NOTE] > If your app is down or performing slow, you can [collect a profiling trace](https://azure.github.io/AppService/2018/06/06/App-Service-Diagnostics-Profiling-an-ASP.NET-Web-App-on-Azure-App-Service.html) to identify the root cause of the issue. Profiling is light weight and is designed for production scenarios. >
-## Interactive interface
+## Diagnostic Interface
-Once you select a homepage category that best aligns with your app's problem, App Service diagnostics' interactive interface, Genie, can guide you through diagnosing and solving problem with your app. You can use the tile shortcuts provided by Genie to view the full diagnostic report of the problem category that you are interested. The tile shortcuts provide you a direct way of accessing your diagnostic metrics.
+The homepage for App Service diagnostics offers streamlined diagnostics access using four sections:
-![Tile shortcuts](./media/app-service-diagnostics/tile-shortcuts-2.png)
+- **Ask Genie search box**
+- **Risk Alerts**
+- **Troubleshooting categories**
+- **Popular troubleshooting tools**
-After clicking on these tiles, you can see a list of topics related to the issue described in the tile. These topics provide snippets of notable information from the full report. You can click on any of these topics to investigate the issues further. Also, you can click on **View Full Report** to explore all the topics on a single page.
+## Ask Genie search box
-![Topics](./media/app-service-diagnostics/application-logs-insights-3.png)
+The Genie search box is a quick way to find a diagnostic. The same diagnostic can be found through Troubleshooting categories.
-![View Full Report](./media/app-service-diagnostics/view-full-report-4.png)
+![App Service Diagnose and solve problems Genie search box with a search for availability app issues and a dropdown of diagnostics that match the availability search term, such as Best Practices for Availability and Performance, Web App Down, Web App Slow, High CPU Analysis, Web App Restarted.](./media/app-service-diagnostics/app-service-diagnostics-genie-alerts-search-1.png)
-## Diagnostic report
-After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic often supplemented with graphs and markdowns. Diagnostic report can be a powerful tool for pinpointing the problem with your app.
+## Risk Alerts
+
+The App Service diagnostics homepage performs a series of configuration checks and offers recommendations based on your unique application's configuration.
+
+![App Service Diagnose and solve problems Risk Alerts displays proactive App checks in a tile with a count of problems found and a link to view more details.](./media/app-service-diagnostics/app-service-diagnostics-risk-alerts-1.png)
+
+Recommendations and checks performed can be reviewed by clicking "View more details" link.
+
+![App Service Diagnose and solve problems Risk Alerts right hand panel, with actionable insights tailored for the current Azure Resource App, after clicking View more details hyperlink on the homepage.](./media/app-service-diagnostics/app-service-diagnostics-risk-alerts-details-1.png)
+
+## Troubleshooting categories
+
+Troubleshooting categories group diagnostics for ease of discovery. The following are available:
+
+- **Availability and Performance**
+- **Configuration and Management**
+- **SSL and Domains**
+- **Risk Assessments**
+- **Navigator (Preview)**
+- **Diagnostic Tools**
-![Diagnostic report](./media/app-service-diagnostics/full-diagnostic-report-5.png)
-## Health checkup
+![App Service Diagnose and solve problems Troubleshooting categories list displaying Availability and Performance, Configuration and Management, SSL and Domains, Risk Assessments, Navigator (Preview) and Diagnostic Tools.](./media/app-service-diagnostics/app-service-diagnostics-troubleshooting-categories-1.png)
++
+The tiles or the Troubleshoot link show the available diagnostics for the category. For example, if you're interested in investigating Availability and Performance, the following diagnostics are offered:
+
+- **Overview**
+- **Web App Down**
+- **Web App Slow**
+- **High CPU Analysis**
+- **Memory Analysis**
+- **Web App Restarted**
+- **Application Change (Preview)**
+- **Application Crashes**
+- **HTTP 4xx Errors**
+- **SNAT Failed Connection Endpoints**
+- **SWAP Effects on Availability**
+- **TCP Connections**
+- **Testing in Production**
+- **WebJob Details**
++
+![App Service Diagnose and solve problems Availability and Performance category homepage, with left hand navigation containing Overview, Web App Down, Web App Slow, High CPU Analysis, Memory Analysis, Web App Restarted, Application Change (Preview), Application Crashes, HTTP 4xx Errors, SNAT Failed connection Endpoint, SNAT Port Exhaustion, Swap Effects on Availability, TCP Connections, Testing in Production, WebJob Details and the default availability dashboard for the last 24 hours of App usage, with a date and time selection interface.](./media/app-service-diagnostics/app-service-diagnostics-availability-and-performance-1.png)
+
+## Diagnostic report
-If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the health checkup is a good place to start. The health checkup analyzes your applications to give you a quick, interactive overview that points out what's healthy and what's wrong, telling you where to look to investigate the issue. Its intelligent and interactive interface provides you with guidance through the troubleshooting process. Health checkup is integrated with the Genie experience for Windows apps and web app down diagnostic report for Linux apps.
+After you choose to investigate the issue further by clicking on a topic, you can view more details about the topic often supplemented with graphs and markdowns. Diagnostic report can be a powerful tool for pinpointing the problem with your app. The following is the Overview for Availability and Performance:
-### Health checkup graphs
+![App Service Diagnose and solve problems Availability and Performance category homepage with Web App Down diagnostic selected, which displays an availability chart, Organic SLA percentage and Observations and Solutions for problems that were detected.](./media/app-service-diagnostics/full-diagnostic-report-5.png)
-There are four different graphs in the health checkup.
+## Resiliency Score
-- **requests and errors:** A graph that shows the number of requests made over the last 24 hours along with HTTP server errors.
-- **app performance:** A graph that shows response time over the last 24 hours for various percentile groups.
-- **CPU usage:** A graph that shows the overall percent CPU usage per instance over the last 24 hours.
-- **memory usage:** A graph that shows the overall percent physical memory usage per instance over the last 24 hours.
+If you don't know what's wrong with your app or don't know where to start troubleshooting your issues, the Get Resiliency Score report is a good place to start. Once a Troubleshooting category has been selected, the Get Resiliency Score report link is available; clicking it produces a PDF document with actionable insights.
-![Health checkup](./media/app-service-diagnostics/health-checkup-6.png)
+![App Service Diagnose and solve problems Resiliency Score report, with a gauge indicating App's resilience score and what App Developer can do to improve resilience of the App.](./media/app-service-diagnostics/app-service-diagnostics-resiliency-report-1.png)
### Investigate application code issues (only for Windows app)
Because many app issues are related to issues in your application code, App Serv
To view Application Insights exceptions and dependencies, select the **web app down** or **web app slow** tile shortcuts.
-### Troubleshooting steps (only for Windows app)
+### Troubleshooting steps
If an issue is detected with a specific problem category within the last 24 hours, you can view the full diagnostic report, and App Service diagnostics may prompt you to view more troubleshooting advice and next steps for a more guided experience.
Post your questions or feedback at [UserVoice](https://feedback.azure.com/d365co
## More resources
-[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
+[Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Many of the services such as self-service provisioning, automated backups/restor
## Supported regions
-The following table describes the scenarios that are currently supported for Azure Arc-enabled data services.
-
-|Azure Regions |Direct connected mode |Indirect connected mode |
-||||
-|East US | Available | Available
-|East US 2|Available|Available
-|West US|Available|Available
-|West US 2|Available|Available
-|West US 3|Available|Available
-|North Central US | Available | Available
-|Central US|Available|Available
-|South Central US|Available|Available
-|UK South|Available|Available
-|France Central|Available|Available
-|West Europe |Available |Available
-|North Europe|Available|Available
-|Japan East|Available|Available
-|Korea Central|Available|Available
-|Southeast Asia|Available|Available
-|Australia East|Available|Available
-|Canada Central|Available|Available
+To see the regions that currently support Azure Arc-enabled data services, go to [Azure Products by Region - Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?cdn=disable&products=azure-arc).
## Next steps
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|Kublr |1.22.0 / 1.20.12 |v1.1.0_2021-11-02 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
+|Kublr |1.22.3 / 1.22.10 | v1.9.0_2022-07-12 |15.0.2195.191 |PostgreSQL 12.3 (Ubuntu 12.3-1) |
### Lenovo
To see how all Azure Arc-enabled components are validated, see [Validation progr
|--|--|--|--|--| | TKGm v1.5.1 | 1.20.5 | v1.4.1_2022-03-08 |15.0.2255.119|postgres 12.3 (Ubuntu 12.3-1)|
-### WindRiver
+### Wind River
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL Hyperscale version |--|--|--|--|--|
-|WindRiver| v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
+|Wind River Cloud Platform 22.06 | v1.23.1|v1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |
## Data services validation process
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 07/28/2022 Last updated : 08/01/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
+ * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `az connectedk8s connect`:
+
+ ```azurecli-interactive
+ oc adm policy add-scc-to-user privileged -z <service account name> -n <service account namespace>
+ ```
>[!NOTE] > The cluster needs to have at least one node of operating system and architecture type `linux/amd64`. Clusters with only `linux/arm64` nodes aren't yet supported. * A [kubeconfig file](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) and context pointing to your cluster.
+* Install [Helm 3](https://helm.sh/docs/intro/install). Ensure that the Helm 3 version is &lt; 3.7.0.
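You can confirm your installed Helm release against this requirement before proceeding; this is an illustrative check, not a command from the quickstart:

```bash
# Print the installed Helm client version, e.g. "v3.6.3+gd506314".
# The connect flow requires a version earlier than 3.7.0.
helm version --short
```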
+ ### [Azure PowerShell](#tab/azure-powershell) * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes) * Self-managed Kubernetes cluster using [Cluster API](https://cluster-api.sigs.k8s.io/user/quick-start.html)
- * If you want to connect a OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
+ * If you want to connect an OpenShift cluster to Azure Arc, you need to execute the following command just once on your cluster before running `New-AzConnectedKubernetes`:
```bash oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-cache-for-redis Cache Redis Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-modules.md
+
+ Title: Using Redis modules with Azure Cache for Redis
+description: You can use Redis modules with your Azure Cache for Redis instances.
+++++ Last updated : 07/26/2022+++
+# Use Redis modules with Azure Cache for Redis
+
+With Azure Cache for Redis, you can use Redis modules as libraries to add more data structures and functionality to the core Redis software. You add the modules at the time you're creating your Enterprise tier cache.
+
+For more information on creating an Enterprise cache, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md).
+
+Modules were introduced in open-source Redis 4.0. The modules extend the use cases of Redis by adding functionality like search capabilities and data structures like **bloom and cuckoo filters**.
+
+## Scope of Redis modules
+
+Some popular modules are available for use in the Enterprise tier of Azure Cache for Redis:
+
+| Module |Basic, Standard, and Premium |Enterprise |Enterprise Flash |
+|||||
+|RediSearch | No | Yes | Yes (preview) |
+|RedisBloom | No | Yes | No |
+|RedisTimeSeries | No | Yes | No |
+|RedisJSON | No | Yes (preview) | Yes (preview) |
+
+Currently, `RediSearch` is the only module that can be used concurrently with active geo-replication.
+
+> [!NOTE]
+> Currently, you can't manually load any modules into Azure Cache for Redis. Manually updating the module versions is also not possible.
+>
+
+## Client library support
+
+The standard Redis client libraries have varying amounts of support for each module. Some modules have specific libraries that add client support. Check the Redis [documentation pages](#modules) for each module to see more detail on which client libraries support them.
+
+## Adding modules to your cache
+
+You must add modules when you create your Enterprise tier cache. To add one or more modules when creating a new cache, use the settings on the **Advanced** tab of the Enterprise tier cache creation experience.
+
+You can add all the available modules, or select only specific modules to install.
++
+> [!IMPORTANT]
+> Modules must be enabled at the time you create an Azure Cache for Redis instance.
+
+For more information, see [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md).
+
+## Modules
+
+The following modules are available when creating a new Enterprise cache.
+
+- [RediSearch](#redisearch)
+- [RedisBloom](#redisbloom)
+- [RedisTimeSeries](#redistimeseries)
+- [RedisJSON](#redisjson)
+
+### RediSearch
+
+The **RediSearch** module adds a real-time search engine to your cache, combining low-latency performance with powerful search features.
+
+Features include:
+
+- Multi-field queries
+- Aggregation
+- Prefix, fuzzy, and phonetic-based searches
+- Auto-complete suggestions
+- Geo-filtering
+- Boolean queries
+
+Additionally, **RediSearch** can function as a secondary index, expanding your cache beyond a key-value structure and offering more sophisticated queries.
+
+You can use **RediSearch** in a wide variety of use cases, including real-time inventory, enterprise search, and indexing external databases. For more information, see the [RediSearch documentation page](https://redis.io/docs/stack/search/).
+
+>[!IMPORTANT]
+> The RediSearch module can only be used with the `Enterprise` clustering policy. For more information, see [Clustering Policy](quickstart-create-redis-enterprise.md#clustering-policy).
+
+>[!NOTE]
+> The RediSearch module is the only module that can be used with active geo-replication.
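The idea of a secondary index with prefix search can be sketched in a few lines of plain Python (a toy illustration only, not the RediSearch module or its query language; the keys and field names are made up):

```python
# Toy secondary index with prefix search over document fields.
# RediSearch implements this (plus fuzzy, phonetic, geo, and boolean
# queries) natively inside Redis; this sketch only shows the concept.
products = {
    "product:1": {"name": "widget", "price": 9.99},
    "product:2": {"name": "windshield", "price": 120.00},
    "product:3": {"name": "gadget", "price": 4.50},
}

def prefix_search(index, field, prefix):
    """Return the keys of documents whose field value starts with the prefix."""
    return sorted(k for k, doc in index.items() if str(doc[field]).startswith(prefix))

print(prefix_search(products, "name", "wi"))  # ['product:1', 'product:2']
```

In the real module, the index lives inside the Redis server, so a query avoids a round trip per key.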
+
+### RedisBloom
+
+RedisBloom adds four probabilistic data structures to a Redis server: **bloom filter**, **cuckoo filter**, **count-min sketch**, and **top-k**. Each of these data structures offers a way to sacrifice perfect accuracy in return for higher speed and better memory efficiency.
+
+| **Data structure** | **Description** | **Example application**|
+| ||-|
+| **Bloom and Cuckoo filters** | Tells you if an item is either (a) certainly not in a set or (b) potentially in a set. | Checking if an email has already been sent to a user|
+|**Count-min sketch** | Determines the frequency of events in a stream | Counting how many times an IoT device reported a temperature under 0 degrees Celsius. |
+|**Top-k** | Finds the `k` most frequently seen items | Determining the most frequent words used in *War and Peace* (for example, setting k = 50 returns the 50 most common words in the book) |
+
+**Bloom and Cuckoo** filters are similar to each other, but each has a unique set of advantages and disadvantages that are beyond the scope of
+this documentation.
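The accuracy-for-memory trade-off behind a bloom filter can be sketched in plain Python (a toy illustration under simplified assumptions, not the RedisBloom implementation):

```python
import hashlib

class ToyBloomFilter:
    """Toy bloom filter: never a false negative, occasionally a false positive."""

    def __init__(self, size=1024, hash_count=3):
        self.size = size
        self.hash_count = hash_count
        self.bits = [False] * size  # fixed memory, regardless of item count

    def _positions(self, item):
        # Derive hash_count bit positions from the item.
        for i in range(self.hash_count):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means "certainly not in the set"; True means "possibly in the set".
        return all(self.bits[pos] for pos in self._positions(item))

emails = ToyBloomFilter()
emails.add("user@example.com")
print(emails.might_contain("user@example.com"))  # True
```

Because only bit positions are stored, memory stays fixed no matter how many items you add, which is the trade the table above describes.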
+
+For more information, see [RedisBloom](https://redis.io/docs/stack/bloom/).
+
+### RedisTimeSeries
+
+The **RedisTimeSeries** module adds high-throughput time series capabilities to your cache. This data structure is optimized for high volumes of incoming data and contains features to work with time series data, including:
+
+- Aggregated queries (for example, average, maximum, standard deviation, etc.)
+- Time-based queries (for example, start-time and end-time)
+- Downsampling/decimation
+- Data labeling for secondary indexing
+- Configurable retention period
+
+This module is useful for many applications that involve monitoring streaming data, such as IoT telemetry, application monitoring, and anomaly detection.
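The downsampling/decimation idea from the feature list can be sketched in plain Python (a toy illustration, not the RedisTimeSeries API):

```python
def downsample(samples, bucket_ms):
    """Average (timestamp_ms, value) samples into fixed-width time buckets."""
    buckets = {}
    for ts, value in samples:
        key = ts - (ts % bucket_ms)  # align each sample to its bucket start
        buckets.setdefault(key, []).append(value)
    return {start: sum(v) / len(v) for start, v in sorted(buckets.items())}

# Four raw samples collapse into two 1-second averages.
samples = [(0, 10.0), (400, 20.0), (1100, 30.0), (1500, 50.0)]
print(downsample(samples, 1000))  # {0: 15.0, 1000: 40.0}
```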
+
+For more information, see [RedisTimeSeries](https://redis.io/docs/stack/timeseries/).
+
+### RedisJSON
+
+The **RedisJSON** module adds the capability to store, query, and search JSON-formatted data. This functionality is useful for storing document-like data within your cache.
+
+Features include:
+
+- Full support for the JSON standard
+- Wide range of operations for all JSON data types, including objects, numbers, arrays, and strings
+- Dedicated syntax and fast access to select and update elements inside documents
+
+The **RedisJSON** module is also designed for use with the **RediSearch** module to provide integrated indexing and querying of data within a Redis server. Using both modules together can be a powerful tool to quickly retrieve specific data points within JSON objects.
+
+Some common use-cases for **RedisJSON** include applications such as searching product catalogs, managing user profiles, and caching JSON-structured data.
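The path-based access that makes JSON storage convenient can be sketched in plain Python (a toy illustration with made-up keys; RedisJSON provides this natively with its own command syntax):

```python
import json

store = {}  # stand-in for the cache: key -> serialized JSON document

def json_set(key, document):
    store[key] = json.dumps(document)

def json_get(key, path):
    """Fetch a nested value by a dotted path, e.g. 'profile.city'."""
    value = json.loads(store[key])
    for part in path.split("."):
        value = value[part]
    return value

json_set("user:1", {"name": "Ada", "profile": {"city": "London"}})
print(json_get("user:1", "profile.city"))  # London
```

In the real module, the server updates and returns elements in place, so the client never has to fetch and reparse the whole document.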
+
+For more information, see [RedisJSON](https://redis.io/docs/stack/json/).
+
+## Next steps
+
+- [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
+- [Client libraries](cache-best-practices-client-libraries.md)
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 07/27/2022 Last updated : 08/01/2022

# What's New in Azure Cache for Redis
+## August 2022
+
+### RedisJSON module available in Azure Cache for Redis Enterprise
+
+The Enterprise and Enterprise Flash tiers of Azure Cache for Redis now support the **RedisJSON** module. This module adds native functionality to store, query, and search JSON-formatted data that allows you to store data more easily in a document-style format in Redis. By using this module, you simplify common use cases like storing product catalog or user profile data.
+
+The **RedisJSON** module implements the community version of the module so you can use your existing knowledge and workstreams. **RedisJSON** is designed for use with the search functionality of **RediSearch**. Using both modules provides integrated indexing and querying of data. For more information, see [RedisJSON](https://aka.ms/redisJSON).
+
+The **RediSearch** module is also now available for Azure Cache for Redis. For more information on using Redis modules in Azure Cache for Redis, see [Use Redis modules with Azure Cache for Redis](cache-redis-modules.md).
+
## July 2022

### Redis 6 becomes default for new cache instances
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
To avoid excessive module upgrades on frequent Worker restarts, checking for mod
To learn more, see [Dependency management](functions-reference-powershell.md#dependency-management).
+## PIP\_INDEX\_URL
+
+This setting lets you override the base URL of the Python Package Index, which by default is `https://pypi.org/simple`. Use this setting when you need to run a remote build using custom dependencies that are found in a package index repository compliant with PEP 503 (the simple repository API) or in a local directory that follows the same format.
+
+|Key|Sample value|
+|||
+|PIP\_INDEX\_URL|`http://my.custom.package.repo/simple` |
+
+To learn more, see [`pip` documentation for `--index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-i) and using [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+ ## PIP\_EXTRA\_INDEX\_URL
-The value for this setting indicates a custom package index URL for Python apps. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index.
+The value for this setting indicates an extra index URL for custom packages for Python apps, used in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. The value should follow the same rules as `--index-url`.
|Key|Sample value|
|||
|PIP\_EXTRA\_INDEX\_URL|`http://my.custom.package.repo/simple` |
-To learn more, see [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
+To learn more, see [`pip` documentation for `--extra-index-url`](https://pip.pypa.io/en/stable/cli/pip_wheel/?highlight=index%20url#cmdoption-extra-index-url) and [Custom dependencies](functions-reference-python.md#remote-build-with-extra-index-url) in the Python developer reference.
## PYTHON\_ISOLATE\_WORKER\_DEPENDENCIES (Preview)
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
When your packages are available from an accessible custom package index, use a
You can also use basic authentication credentials with your extra package index URLs. To learn more, see [Basic authentication credentials](https://pip.pypa.io/en/stable/user_guide/#basic-authentication-credentials) in Python documentation.
+> [!NOTE]
+> If you need to change the base URL of the Python Package Index from the default of `https://pypi.org/simple`, you can do this by [creating an app setting](functions-how-to-use-azure-function-app-settings.md#settings) named [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) that points to a different package index URL. Like [`PIP_EXTRA_INDEX_URL`](functions-app-settings.md#pip_extra_index_url), [`PIP_INDEX_URL`](functions-app-settings.md#pip_index_url) is a pip-specific application setting that changes the source for pip to use.
+
#### Installing local packages

If your project uses packages that aren't publicly available, you can make them available to your app by putting them in the *\_\_app\_\_/.python_packages* directory. Before publishing, run the following command to install the dependencies locally:
azure-monitor Alerts Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-classic-portal.md
Title: Create and manage classic metric alerts using Azure Monitor
-description: Learn how to use Azure portal, CLI or PowerShell to create, view and manage classic metric alert rules.
+description: Learn how to use Azure portal or PowerShell to create, view and manage classic metric alert rules.
> This article describes how to create older classic metric alerts. Azure Monitor now supports [newer near-real time metric alerts and a new alerts experience](./alerts-overview.md). Classic alerts are [retired](./monitoring-classic-retirement.md) for public cloud users. Classic alerts for Azure Government cloud and Azure China 21Vianet will retire on **29 February 2024**. >
-Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics cross a threshold. Classic metric alerts is an older functionality that allows for alerting only on non-dimensional metrics. There is an existing newer functionality called Metric alerts which has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we will describe how to create, view and manage classic metric alert rules through Azure portal, Azure CLI and PowerShell.
+Classic metric alerts in Azure Monitor provide a way to get notified when one of your metrics crosses a threshold. Classic metric alerts are an older feature that supports alerting only on non-dimensional metrics. A newer feature, metric alerts, has improved functionality over classic metric alerts. You can learn more about the new metric alerts functionality in [metric alerts overview](./alerts-metric-overview.md). In this article, we'll describe how to create, view, and manage classic metric alert rules through the Azure portal and PowerShell.
## With Azure portal
After you create an alert, you can select it and do one of the following tasks:
* Edit or delete it.
* **Disable** or **Enable** it if you want to temporarily stop or resume receiving notifications for that alert.
-## With Azure CLI
-
-The previous sections described how to create, view and manage metric alert rules using Azure portal. This section will describe how to do the same using cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). Quickest way to start using Azure CLI is through [Azure Cloud Shell](../../cloud-shell/overview.md).
-
-### Get all classic metric alert rules in a resource group
-
-```azurecli
-az monitor alert list --resource-group <group name>
-```
-
-### See details of a particular classic metric alert rule
-
-```azurecli
-az monitor alert show --resource-group <group name> --name <alert name>
-```
-
-### Create a classic metric alert rule
-
-```azurecli
-az monitor alert create --name <alert name> --resource-group <group name> \
- --action email <email1 email2 ...> \
- --action webhook <URI> \
- --target <target object ID> \
- --condition "<METRIC> {>,>=,<,<=} <THRESHOLD> {avg,min,max,total,last} ##h##m##s"
-```
-
-### Delete a classic metric alert rule
-
-```azurecli
-az monitor alert delete --name <alert name> --resource-group <group name>
-```
- ## With PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
azure-monitor Alerts Prepare Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-prepare-migration.md
The following table is a reference to the programmatic interfaces for both class
| Deployment script type | Classic alerts | New metric alerts | | - | -- | -- | |REST API | [microsoft.insights/alertrules](/rest/api/monitor/alertrules) | [microsoft.insights/metricalerts](/rest/api/monitor/metricalerts) |
-|Azure CLI | [az monitor alert](/cli/azure/monitor/metrics/alert) | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
+|Azure CLI | `az monitor alert` | [az monitor metrics alert](/cli/azure/monitor/metrics/alert) |
|PowerShell | [Reference](/powershell/module/az.monitor/add-azmetricalertrule) | [Reference](/powershell/module/az.monitor/add-azmetricalertrulev2) | | Azure Resource Manager template | [For classic alerts](./alerts-enable-template.md)|[For new metric alerts](./alerts-metric-create-templates.md)|
azure-monitor Autoscale Common Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-common-metrics.md
Title: Autoscale common metrics
-description: Learn which metrics are commonly used for autoscaling your Cloud Services, Virtual Machines and Web Apps.
+description: Learn which metrics are commonly used for autoscaling your cloud services, virtual machines, and web apps.
Last updated 04/22/2022
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
-Azure Monitor autoscaling allows you to scale the number of running instances up or down, based on telemetry data (metrics). This document describes common metrics that you might want to use. In the Azure portal, you can choose the metric of the resource to scale by. However, you can also choose any metric from a different resource to scale by.
+Azure Monitor autoscaling allows you to scale the number of running instances up or down, based on telemetry data, also known as metrics. This article describes common metrics that you might want to use. In the Azure portal, you can choose the metric of the resource to scale by. You can also choose any metric from a different resource to scale by.
-Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md). Other Azure services use different scaling methods.
+Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md). Other Azure services use different scaling methods.
## Compute metrics for Resource Manager-based VMs
-By default, Resource Manager-based Virtual Machines and Virtual Machine Scale Sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and VMSS, the Azure diagnostic extension also emits guest-OS performance counters (commonly known as "guest-OS metrics"). You use all these metrics in autoscale rules.
+By default, Azure Resource Manager-based virtual machines and virtual machine scale sets emit basic (host-level) metrics. In addition, when you configure diagnostics data collection for an Azure VM and virtual machine scale sets, the Azure Diagnostics extension also emits guest-OS performance counters. These counters are commonly known as "guest-OS metrics." You use all these metrics in autoscale rules.
-You can use the `Get MetricDefinitions` API/PoSH/CLI to view the metrics available for your VMSS resource.
+You can use the `Get MetricDefinitions` API, PowerShell, or the Azure CLI to view the metrics available for your virtual machine scale set resource.
-If you're using VM scale sets and you don't see a particular metric listed, then it is likely *disabled* in your diagnostics extension.
+If you're using virtual machine scale sets and you don't see a particular metric listed, it's likely *disabled* in your Diagnostics extension.
-If a particular metric is not being sampled or transferred at the frequency you want, you can update the diagnostics configuration.
+If a particular metric isn't being sampled or transferred at the frequency you want, you can update the diagnostics configuration.
-If either preceding case is true, then review [Use PowerShell to enable Azure Diagnostics in a virtual machine running Windows](../../virtual-machines/extensions/diagnostics-windows.md) about PowerShell to configure and update your Azure VM Diagnostics extension to enable the metric. That article also includes a sample diagnostics configuration file.
+If either preceding case is true, see [Use PowerShell to enable Azure Diagnostics in a virtual machine running Windows](../../virtual-machines/extensions/diagnostics-windows.md) to configure and update your Azure VM Diagnostics extension to enable the metric. The article also includes a sample diagnostics configuration file.
### Host metrics for Resource Manager-based Windows and Linux VMs
-The following host-level metrics are emitted by default for Azure VM and VMSS in both Windows and Linux instances. These metrics describe your Azure VM, but are collected from the Azure VM host rather than via agent installed on the guest VM. You may use these metrics in autoscaling rules.
+The following host-level metrics are emitted by default for Azure VM and virtual machine scale sets in both Windows and Linux instances. These metrics describe your Azure VM but are collected from the Azure VM host rather than via an agent installed on the guest VM. You can use these metrics in autoscaling rules.
- [Host metrics for Resource Manager-based Windows and Linux VMs](../essentials/metrics-supported.md#microsoftcomputevirtualmachines)-- [Host metrics for Resource Manager-based Windows and Linux VM Scale Sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
+- [Host metrics for Resource Manager-based Windows and Linux virtual machine scale sets](../essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)
### Guest OS metrics for Resource Manager-based Windows VMs
-When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale off of metrics that are not emitted by default.
+When you create a VM in Azure, diagnostics is enabled by using the Diagnostics extension. The Diagnostics extension emits a set of metrics taken from inside of the VM. This means you can autoscale off of metrics that aren't emitted by default.
You can generate a list of the metrics by using the following command in PowerShell.
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can create an alert for the following metrics:
-| Metric Name | Unit |
+| Metric name | Unit |
| | |
| \Processor(_Total)\% Processor Time |Percent |
| \Processor(_Total)\% Privileged Time |Percent |
You can create an alert for the following metrics:
### Guest OS metrics Linux VMs
-When you create a VM in Azure, diagnostics is enabled by default by using Diagnostics extension.
+When you create a VM in Azure, diagnostics is enabled by default by using the Diagnostics extension.
You can generate a list of the metrics by using the following command in PowerShell.
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can create an alert for the following metrics:
-| Metric Name | Unit |
+| Metric name | Unit |
| | |
| \Memory\AvailableMemory |Bytes |
| \Memory\PercentAvailableMemory |Percent |
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
| \NetworkInterface\TotalTxErrors |Count |
| \NetworkInterface\TotalCollisions |Count |
-## Commonly used App Service (Server Farm) metrics
+## Commonly used App Service (server farm) metrics
-You can also perform autoscale based on common web server metrics such as the Http queue length. Its metric name is **HttpQueueLength**. The following section lists available server farm (App Service) metrics.
+You can also perform autoscale based on common web server metrics such as the HTTP queue length. Its metric name is **HttpQueueLength**. The following section lists available server farm (App Service) metrics.
### Web Apps metrics
-You can generate a list of the Web Apps metrics by using the following command in PowerShell.
+You can generate a list of the Web Apps metrics by using the following command in PowerShell:
``` Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,Unit
Get-AzMetricDefinition -ResourceId <resource_id> | Format-Table -Property Name,U
You can alert on or scale by these metrics.
-| Metric Name | Unit |
+| Metric name | Unit |
| | |
| CpuPercentage |Percent |
| MemoryPercentage |Percent |
You can alert on or scale by these metrics.
## Commonly used Storage metrics
-You can scale by Storage queue length, which is the number of messages in the storage queue. Storage queue length is a special metric and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That can be 100 messages per instance, 120 and 80, or any other combination that adds up to 200 or more.
+You can scale by Azure Storage queue length, which is the number of messages in the Storage queue. Storage queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
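The per-instance threshold arithmetic described above can be sketched as follows (a simplified illustration; the real autoscale engine also applies settings such as cooldown periods):

```python
def should_scale_out(messages_in_queue, instance_count, threshold_per_instance):
    # Scaling triggers when the total queue length reaches
    # threshold_per_instance multiplied by the current instance count.
    return messages_in_queue >= threshold_per_instance * instance_count

print(should_scale_out(200, 2, 100))  # True: 100 + 100, 120 + 80, or any mix
print(should_scale_out(150, 2, 100))  # False: total is below 200
```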
-Configure this setting in the Azure portal in the **Settings** blade. For VM scale sets, you can update the Autoscale setting in the Resource Manager template to use *metricName* as *ApproximateMessageCount* and pass the ID of the storage queue as *metricResourceUri*.
+Configure this setting in the Azure portal in the **Settings** pane. For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
-For example, with a Classic Storage Account the autoscale setting metricTrigger would include:
+For example, with a Classic Storage account, the autoscale setting `metricTrigger` would include:
``` "metricName": "ApproximateMessageCount",
For example, with a Classic Storage Account the autoscale setting metricTrigger
"metricResourceUri": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RES_GROUP_NAME/providers/Microsoft.ClassicStorage/storageAccounts/STORAGE_ACCOUNT_NAME/services/queue/queues/QUEUE_NAME" ```
-For a (non-classic) storage account, the metricTrigger would include:
+For a (non-classic) Storage account, the `metricTrigger` setting would include:
``` "metricName": "ApproximateMessageCount",
For a (non-classic) storage account, the metricTrigger would include:
## Commonly used Service Bus metrics
-You can scale by Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That can be 100 messages per instance, 120 and 80, or any other combination that adds up to 200 or more.
+You can scale by Azure Service Bus queue length, which is the number of messages in the Service Bus queue. Service Bus queue length is a special metric, and the threshold is the number of messages per instance. For example, if there are two instances and if the threshold is set to 100, scaling occurs when the total number of messages in the queue is 200. That amount can be 100 messages per instance, 120 plus 80, or any other combination that adds up to 200 or more.
-For VM scale sets, you can update the Autoscale setting in the Resource Manager template to use *metricName* as *ApproximateMessageCount* and pass the ID of the storage queue as *metricResourceUri*.
+For virtual machine scale sets, you can update the autoscale setting in the Resource Manager template to use `metricName` as `ApproximateMessageCount` and pass the ID of the storage queue as `metricResourceUri`.
``` "metricName": "ApproximateMessageCount",
For VM scale sets, you can update the Autoscale setting in the Resource Manager
``` > [!NOTE]
-> For Service Bus, the resource group concept does not exist but Azure Resource Manager creates a default resource group per region. The resource group is usually in the 'Default-ServiceBus-[region]' format. For example, 'Default-ServiceBus-EastUS', 'Default-ServiceBus-WestUS', 'Default-ServiceBus-AustraliaEast' etc.
-
+> For Service Bus, the resource group concept doesn't exist, but Azure Resource Manager creates a default resource group per region. The resource group is usually in the Default-ServiceBus-[region] format. Examples are Default-ServiceBus-EastUS, Default-ServiceBus-WestUS, and Default-ServiceBus-AustraliaEast.
azure-monitor Autoscale Custom Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md
Title: How to autoscale in Azure using a custom metric
-description: Learn how to scale your web app is custom metric in the Azure portal
+ Title: Autoscale in Azure using a custom metric
+description: Learn how to scale your web app by using custom metrics in the Azure portal.
Last updated 06/22/2022
-# Customer intent: As a user or dev ops administrator I want to use the portal to set up autoscale so I can scale my resources.
+# Customer intent: As a user or dev ops administrator, I want to use the portal to set up autoscale so I can scale my resources.
-# How to autoscale a web app using custom metrics.
+# Autoscale a web app by using custom metrics
-This article describes how to set up autoscale for a web app using a custom metric in the Azure portal.
+This article describes how to set up autoscale for a web app by using a custom metric in the Azure portal.
-Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article we'll show you how to set up autoscale for a web app, using one of the Application Insights metrics to scale the web app in and out.
+Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article, we'll show you how to set up autoscale for a web app by using one of the Application Insights metrics to scale the web app in and out.
Azure Monitor autoscale applies to:
-+ [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
-+ [Cloud Services](https://azure.microsoft.com/services/cloud-services/)
-+ [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/)
-+ [Azure Data Explorer Cluster](https://azure.microsoft.com/services/data-explorer/)
-+ Integration Service Environment and [API Management services](../../api-management/api-management-key-concepts.md).
-## Prerequisites
-An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
++ [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
++ [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/)
++ [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/)
++ [Azure Data Explorer cluster](https://azure.microsoft.com/services/data-explorer/)
++ Integration service environment and [Azure API Management](../../api-management/api-management-key-concepts.md)
+
+## Prerequisite
+
+You need an Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free).
## Overview
-To create an autoscaled web app, follow the steps below.
-1. If you do not already have one, [Create an App Service Plan](#create-an-app-service-plan). Note that you can't set up autoscale for free or basic tiers.
-1. If you do not already have one, [Create a web app](#create-a-web-app) using your service plan.
+
+To create an autoscaled web app:
+
+1. If you don't already have one, [create an App Service plan](#create-an-app-service-plan). You can't set up autoscale for free or basic tiers.
+1. If you don't already have one, [create a web app](#create-a-web-app) by using your service plan.
1. [Configure autoscaling](#configure-autoscale) for your service plan.
-
-## Create an App Service Plan
+## Create an App Service plan
-An App Service plan defines a set of compute resources for a web app to run on.
+An App Service plan defines a set of compute resources for a web app to run on.
1. Open the [Azure portal](https://portal.azure.com). 1. Search for and select **App Service plans**.
- :::image type="content" source="media\autoscale-custom-metric\search-app-service-plan.png" alt-text="Screenshot of the search bar, searching for app service plans.":::
+ :::image type="content" source="media\autoscale-custom-metric\search-app-service-plan.png" alt-text="Screenshot that shows searching for App Service plans.":::
-1. Select **Create** from the **App Service plan** page.
+1. On the **App Service plan** page, select **Create**.
1. Select a **Resource group** or create a new one. 1. Enter a **Name** for your plan. 1. Select an **Operating system** and **Region**.
-1. Select an **Sku and size**.
+1. Select an **SKU** and **size**.
+ > [!NOTE]
- > You cannot use autoscale with free or basic tiers.
+ > You can't use autoscale with free or basic tiers.
-1. Select **Review + create**, then **Create**.
+1. Select **Review + create** > **Create**.
- :::image type="content" source="media\autoscale-custom-metric\create-app-service-plan.png" alt-text="Screenshot of the Basics tab of the Create App Service Plan screen that you configure the App Service plan on.":::
+ :::image type="content" source="media\autoscale-custom-metric\create-app-service-plan.png" alt-text="Screenshot that shows the Basics tab of the Create App Service Plan screen on which you configure the App Service plan.":::
## Create a web app
-1. Search for and select *App services*.
+1. Search for and select **App services**.
- :::image type="content" source="media\autoscale-custom-metric\search-app-services.png" alt-text="Screenshot of the search bar, searching for app service.":::
+ :::image type="content" source="media\autoscale-custom-metric\search-app-services.png" alt-text="Screenshot that shows searching for App Services.":::
-1. Select **Create** from the **App Services** page.
+1. On the **App Services** page, select **Create**.
1. On the **Basics** tab, enter a **Name** and select a **Runtime stack**.
-1. Select the **Operating System** and **Region** that you chose when defining your App Service plan.
+1. Select the **Operating System** and **Region** that you chose when you defined your App Service plan.
1. Select the **App Service plan** that you created earlier.
-1. Select the **Monitoring** tab from the menu bar.
+1. Select the **Monitoring** tab.
- :::image type="content" source="media\autoscale-custom-metric\create-web-app.png" alt-text="Screenshot of the Basics tab of the Create web app page where you set up a web app.":::
+ :::image type="content" source="media\autoscale-custom-metric\create-web-app.png" alt-text="Screenshot that shows the Basics tab of the Create Web App page where you set up a web app.":::
1. On the **Monitoring** tab, select **Yes** to enable Application Insights.
-1. Select **Review + create**, then **Create**.
-
- :::image type="content" source="media\autoscale-custom-metric\enable-application-insights.png"alt-text="Screenshot of the Monitoring tab of the Create web app page where you enable Application Insights.":::
+1. Select **Review + create** > **Create**.
+ :::image type="content" source="media\autoscale-custom-metric\enable-application-insights.png" alt-text="Screenshot that shows the Monitoring tab of the Create Web App page where you enable Application Insights.":::
## Configure autoscale

Configure the autoscale settings for your App Service plan.
-1. Search and select *autoscale* in the search bar or select **Autoscale** under **Monitor** in the side menu bar.
+1. Search for and select **autoscale** in the search bar or select **Autoscale** under **Monitor** in the menu bar on the left.
1. Select your App Service plan. You can only configure production plans.
- :::image type="content" source="media\autoscale-custom-metric\autoscale-overview-page.png" alt-text="A screenshot of the autoscale landing page where you select the resource to set up autoscale for.":::
+ :::image type="content" source="media\autoscale-custom-metric\autoscale-overview-page.png" alt-text="Screenshot that shows the Autoscale page where you select the resource to set up autoscale.":::
+
+### Set up a scale-out rule
-### Set up a scale out rule
-Set up a scale out rule so that Azure spins up an additional instance of the web app, when your web app is handling more than 70 sessions per instance.
+Set up a scale-out rule so that Azure spins up another instance of the web app when your web app is handling more than 70 sessions per instance.
1. Select **Custom autoscale**.
-1. In the **Rules** section of the default scale condition, select **Add a rule**.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
- :::image type="content" source="media/autoscale-custom-metric/autoscale-settings.png" alt-text="A screenshot of the autoscale settings page where you set up the basic autoscale settings.":::
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-settings.png" alt-text="Screenshot that shows the Autoscale setting page where you set up the basic autoscale settings.":::
1. From the **Metric source** dropdown, select **Other resource**.
-1. From **Resource Type**, select **Application Insights**.
+1. From **Resource type**, select **Application Insights**.
1. From the **Resource** dropdown, select your web app.
-1. Select a **Metric name** to base your scaling on, for example *Sessions*.
-1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
-1. 1. From the **Operator** dropdown, select **Greater than**.
-1. Enter the **Metric threshold to trigger the scale action**, for example, *70*.
-1. Under **Actions**, set the **Operation** to *Increase count* and set the **Instance count** to *1*.
+1. Select a **Metric name** to base your scaling on. For example, use **Sessions**.
+1. Select the **Enable metric divide by instance count** checkbox so that the number of sessions per instance is measured.
+1. From the **Operator** dropdown, select **Greater than**.
+1. Enter the **Metric threshold to trigger the scale action**. For example, use **70**.
+1. Under **Action**, set **Operation** to **Increase count by**. Set **Instance count** to **1**.
1. Select **Add**.
- :::image type="content" source="media/autoscale-custom-metric/scale-out-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale out rule.":::
+ :::image type="content" source="media/autoscale-custom-metric/scale-out-rule.png" alt-text="Screenshot that shows the Scale rule page where you configure the scale-out rule.":::
+
+### Set up a scale-in rule
+Set up a scale-in rule so that Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached.
-### Set up a scale in rule
-Set up a scale in rule so Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached.
-1. In the **Rules** section of the default scale condition, select **Add a rule**.
+1. In the **Rules** section of the default scale condition, select **Add a rule**.
1. From the **Metric source** dropdown, select **Other resource**.
-1. From **Resource Type**, select **Application Insights**.
+1. From **Resource type**, select **Application Insights**.
1. From the **Resource** dropdown, select your web app.
-1. Select a **Metric name** to base your scaling on, for example *Sessions*.
-1. Select **Enable metric divide by instance count** so that the number of sessions per instance is measured.
+1. Select a **Metric name** to base your scaling on. For example, use **Sessions**.
+1. Select the **Enable metric divide by instance count** checkbox so that the number of sessions per instance is measured.
1. From the **Operator** dropdown, select **Less than**.
-1. Enter the **Metric threshold to trigger the scale action**, for example, *60*.
-1. Under **Actions**, set the **Operation** to **Decrease count** and set the **Instance count** to *1*.
+1. Enter the **Metric threshold to trigger the scale action**. For example, use **60**.
+1. Under **Action**, set **Operation** to **Decrease count by** and set **Instance count** to **1**.
1. Select **Add**.
- :::image type="content" source="media/autoscale-custom-metric/scale-in-rule.png" alt-text="A screenshot of the Scale rule page where you configure the scale in rule.":::
+ :::image type="content" source="media/autoscale-custom-metric/scale-in-rule.png" alt-text="Screenshot that shows the Scale rule page where you configure the scale-in rule.":::
### Limit the number of instances
-1. Set the maximum number of instances that can be spun up in the **Maximum** field of the **Instance limits** section, for example, *4*.
+1. Set the maximum number of instances that can be spun up in the **Maximum** field of the **Instance limits** section. For example, use **4**.
1. Select **Save**.
- :::image type="content" source="media/autoscale-custom-metric/autoscale-instance-limits.png" alt-text="A screenshot of the autoscale settings page where you set up instance limits.":::
+ :::image type="content" source="media/autoscale-custom-metric/autoscale-instance-limits.png" alt-text="Screenshot that shows the Autoscale setting page where you set up instance limits.":::
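
Taken together, the two rules and the instance limits behave like the following minimal Python sketch. The function name, thresholds (70 and 60), and limits (1 to 4) mirror the example values above; this is an illustration of how the rules combine, not how the autoscale engine is implemented.

```python
def evaluate_session_rules(total_sessions, instances,
                           scale_out_threshold=70, scale_in_threshold=60,
                           min_instances=1, max_instances=4):
    """Return the new instance count after applying both rules.

    Mirrors the portal settings above: the metric is divided by the
    instance count, scale-out adds one instance above 70 sessions per
    instance, scale-in removes one below 60, and the result is clamped
    to the configured instance limits.
    """
    per_instance = total_sessions / instances
    if per_instance > scale_out_threshold:
        instances += 1
    elif per_instance < scale_in_threshold:
        instances -= 1
    return max(min_instances, min(max_instances, instances))
```

For example, 150 sessions across 2 instances is 75 per instance, which is above 70, so one instance is added; at 4 instances the maximum limit prevents further scale-out.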
## Clean up resources
-If you're not going to continue to use this application, delete
-resources with the following steps:
-1. From the App service overview page, select **Delete**.
+If you're not going to continue to use this application, delete resources.
- :::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="A screenshot of the App Service page where you can Delete the web app.":::
+1. On the App Service overview page, select **Delete**.
-1. From The App Service Plan page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+ :::image type="content" source="media/autoscale-custom-metric/delete-web-app.png" alt-text="Screenshot that shows the App Service page where you can delete the web app.":::
- :::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="A screenshot of the App Service plan page where you can Delete the app service plan.":::
+1. On the **App Service plans** page, select **Delete**. The autoscale settings are deleted along with the App Service plan.
+
+ :::image type="content" source="media/autoscale-custom-metric/delete-service-plan.png" alt-text="Screenshot that shows the App Service plans page where you can delete the App Service plan.":::
## Next steps
-Learn more about autoscale by referring to the following articles:
+
+To learn more about autoscale, see the following articles:
+
- [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)
- [Overview of autoscale](./autoscale-overview.md)
- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
azure-monitor Autoscale Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md
Last updated 04/05/2022
-# Get started with Autoscale in Azure
-This article describes how to set up your Autoscale settings for your resource in the Microsoft Azure portal.
+# Get started with autoscale in Azure
-Azure Monitor autoscale applies only to [Virtual Machine scale sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md).
+This article describes how to set up your autoscale settings for your resource in the Azure portal.
-## Discover the Autoscale settings in your subscription
+Azure Monitor autoscale applies only to [Azure Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/), [Azure App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [Azure API Management](../../api-management/api-management-key-concepts.md).
+
+## Discover the autoscale settings in your subscription
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4u7ts]
-You can discover all the resources for which Autoscale is applicable in Azure Monitor. Use the following steps for a step-by-step walkthrough:
+To discover all the resources for which autoscale is applicable in Azure Monitor, follow these steps.
1. Open the [Azure portal.][1]
-1. Click the Azure Monitor icon on the top of the page.
- [![Screenshot on how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
-1. Click **Autoscale** to view all the resources for which Autoscale is applicable, along with their current Autoscale status.
- [![Screenshot of Autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
-
+1. Select the Azure Monitor icon at the top of the page.
+
+ [![Screenshot that shows how to open Azure Monitor.](./media/autoscale-get-started/click-on-monitor-1.png)](./media/autoscale-get-started/click-on-monitor-1.png#lightbox)
+
+1. Select **Autoscale** to view all the resources for which autoscale is applicable, along with their current autoscale status.
+
+ [![Screenshot that shows autoscale in Azure Monitor.](./media/autoscale-get-started/click-on-autoscale-2.png)](./media/autoscale-get-started/click-on-autoscale-2.png#lightbox)
-You can use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
+1. Use the filter pane at the top to scope down the list to select resources in a specific resource group, specific resource types, or a specific resource.
-[![Screenshot of View resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
+ [![Screenshot that shows viewing resource status.](./media/autoscale-get-started/view-all-resources-3.png)](./media/autoscale-get-started/view-all-resources-3.png#lightbox)
-For each resource, you will find the current instance count and the Autoscale status. The Autoscale status can be:
+ For each resource, you'll find the current instance count and the autoscale status. The autoscale status can be:
-- **Not configured**: You have not enabled Autoscale yet for this resource.
-- **Enabled**: You have enabled Autoscale for this resource.
-- **Disabled**: You have disabled Autoscale for this resource.
+ - **Not configured**: You haven't enabled autoscale yet for this resource.
+ - **Enabled**: You've enabled autoscale for this resource.
+ - **Disabled**: You've disabled autoscale for this resource.
+ You can also reach the scaling page by selecting **All Resources** on the home page and filtering to the resource you're interested in scaling.
-Additionally, you can reach the scaling page by clicking on **All Resources** on the home page and filter to the resource you're interested in scaling.
+ [![Screenshot that shows all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
-[![Screenshot of all resources.](./media/autoscale-get-started/choose-all-resources.png)](./media/autoscale-get-started/choose-all-resources.png#lightbox)
+1. After you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+ [![Screenshot that shows the scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
-Once you've selected the resource that you're interested in, select the **Scaling** tab to configure autoscaling rules.
+## Create your first autoscale setting
-[![Screenshot of scaling button.](./media/autoscale-get-started/scaling-page.png)](./media/autoscale-get-started/scaling-page.png#lightbox)
+Let's now go through a step-by-step walkthrough to create your first autoscale setting.
-## Create your first Autoscale setting
+1. Open the **Autoscale** pane in Azure Monitor and select a resource that you want to scale. The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5]
+1. The current instance count is 1. Select **Custom autoscale**.
-Let's now go through a simple step-by-step walkthrough to create your first Autoscale setting.
+ [![Screenshot that shows scale setting for a new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
-1. Open the **Autoscale** blade in Azure Monitor and select a resource that you want to scale. (The following steps use an App Service plan associated with a web app. You can [create your first ASP.NET web app in Azure in 5 minutes.][5])
-1. Note that the current instance count is 1. Click **Custom autoscale**.
- [![Scale setting for new web app.](./media/autoscale-get-started/manual-scale-04.png)](./media/autoscale-get-started/manual-scale-04.png#lightbox)
-1. Provide a name for the scale setting, and then click **Add a rule**. This opens as a context pane on the right side. By default, this sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and click **Add**.
- [![Create scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
-1. You've now created your first scale rule. Note that the UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so:
+1. Provide a name for the scale setting. Select **Add a rule** to open a context pane on the right side. By default, this action sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70 percent. Leave it at its default values and select **Add**.
- a. Click **Add a rule**.
+ [![Screenshot that shows creating a scale setting for a web app.](./media/autoscale-get-started/custom-scale-add-rule-05.png)](./media/autoscale-get-started/custom-scale-add-rule-05.png#lightbox)
- b. Set **Operator** to **Less than**.
+1. You've now created your first scale rule. The UX recommends best practices and states that "It is recommended to have at least one scale in rule." To do so:
- c. Set **Threshold** to **20**.
+ 1. Select **Add a rule**.
+ 1. Set **Operator** to **Less than**.
+ 1. Set **Threshold** to **20**.
+ 1. Set **Operation** to **Decrease count by**.
- d. Set **Operation** to **Decrease count by**.
+ You should now have a scale setting that scales out and scales in based on CPU usage.
- You should now have a scale setting that scales out/scales in based on CPU usage.
- [![Scale based on CPU](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
-1. Click **Save**.
+ [![Screenshot that shows scale based on CPU.](./media/autoscale-get-started/custom-scale-results-06.png)](./media/autoscale-get-started/custom-scale-results-06.png#lightbox)
+
+1. Select **Save**.
Congratulations! You've now successfully created your first scale setting to autoscale your web app based on CPU usage.

> [!NOTE]
-> The same steps are applicable to get started with a Virtual Machine Scale Set or cloud service role.
+> The same steps apply to get started with a virtual machine scale set or cloud service role.
## Other considerations
+The following sections introduce other considerations for autoscaling.
+ ### Scale based on a schedule
-In addition to scale based on CPU, you can set your scale differently for specific days of the week.
-1. Click **Add a scale condition**.
+You can set your scale differently for specific days of the week.
+
+1. Select **Add a scale condition**.
1. Setting the scale mode and the rules is the same as the default condition.
1. Select **Repeat specific days** for the schedule.
1. Select the days and the start/end time for when the scale condition should be applied.
-[![Scale condition based on schedule](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
+[![Screenshot that shows the scale condition based on schedule.](./media/autoscale-get-started/scale-same-based-on-condition-07.png)](./media/autoscale-get-started/scale-same-based-on-condition-07.png#lightbox)
+ ### Scale differently on specific dates
-In addition to scale based on CPU, you can set your scale differently for specific dates.
-1. Click **Add a scale condition**.
+You can set your scale differently for specific dates.
+
+1. Select **Add a scale condition**.
1. Setting the scale mode and the rules is the same as the default condition.
1. Select **Specify start/end dates** for the schedule.
1. Select the start/end dates and the start/end time for when the scale condition should be applied.
-[![Scale condition based on dates](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
+[![Screenshot that shows the scale condition based on dates.](./media/autoscale-get-started/scale-different-based-on-time-08.png)](./media/autoscale-get-started/scale-different-based-on-time-08.png#lightbox)
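
The default condition plus day-of-week and date-range conditions can be sketched as a simple first-match lookup. This Python sketch is illustrative only; the condition dictionary shape (`days`, `start_date`, and so on) is an assumption made for the example, not the service's data model.

```python
from datetime import datetime, date, time

def pick_scale_condition(now, conditions, default):
    """Return the first scale condition whose schedule matches `now`.

    Each condition is a dict with either:
      - 'days' plus 'start'/'end' times (repeat on specific weekdays), or
      - 'start_date'/'end_date' (apply between fixed dates).
    Falls back to the default condition when nothing matches.
    """
    for cond in conditions:
        if "days" in cond:
            if (now.strftime("%A") in cond["days"]
                    and cond["start"] <= now.time() < cond["end"]):
                return cond
        elif cond["start_date"] <= now.date() <= cond["end_date"]:
            return cond
    return default
```

A weekday with no matching schedule falls through to the default condition, just as the portal applies the default scale condition whenever no other condition's schedule matches.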
### View the scale history of your resource

Whenever your resource is scaled up or down, an event is logged in the activity log. You can view the scale history of your resource for the past 24 hours by switching to the **Run history** tab.
-![Run history][12]
+![Screenshot that shows a Run history screen.][12]
-If you want to view the complete scale history (for up to 90 days), select **Click here to see more details**. The activity log opens, with Autoscale pre-selected for your resource and category.
+To view the complete scale history for up to 90 days, select **Click here to see more details**. The activity log opens, with autoscale preselected for your resource and category.
### View the scale definition of your resource
-Autoscale is an Azure Resource Manager resource. You can view the scale definition in JSON by switching to the **JSON** tab.
-[![Scale definition](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
+Autoscale is an Azure Resource Manager resource. To view the scale definition in JSON, switch to the **JSON** tab.
+
+[![Screenshot that shows scale definition.](./media/autoscale-get-started/view-scale-definition-09.png)](./media/autoscale-get-started/view-scale-definition-09.png#lightbox)
-You can make changes in JSON directly, if required. These changes will be reflected after you save them.
+You can make changes in JSON directly, if necessary. These changes will be reflected after you save them.
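
As an illustration of what the scale definition looks like, the sketch below builds a simplified profile in Python and serializes it to JSON. The property names follow the spirit of the Resource Manager autoscale schema (capacity plus rules with a metric trigger and a scale action), but this fragment is simplified; treat the **JSON** tab or the REST reference as the authoritative schema.

```python
import json

# Illustrative only: a simplified autoscale profile in the spirit of the
# ARM schema. Property names here are an approximation for the example.
profile = {
    "name": "default",
    "capacity": {"minimum": "1", "maximum": "4", "default": "1"},
    "rules": [
        {
            "metricTrigger": {
                "metricName": "Percentage CPU",
                "operator": "GreaterThan",
                "threshold": 70,
                "timeAggregation": "Average",
            },
            "scaleAction": {
                "direction": "Increase",
                "type": "ChangeCount",
                "value": "1",
                "cooldown": "PT5M",
            },
        }
    ],
}

print(json.dumps(profile, indent=2))
```

Editing the JSON directly, as the paragraph above notes, amounts to changing fields like these and saving the setting.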
### Cool-down period effects
-Autoscale uses a cool-down period to prevent "flapping", which is the rapid, repetitive up and down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). Other valuable information on flapping and understanding how to monitor the autoscale engine can be found in [Autoscale Best Practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md) respectively.
+Autoscale uses a cool-down period to prevent "flapping," which is the rapid, repetitive up-and-down scaling of instances. For more information, see [Autoscale evaluation steps](autoscale-understanding-settings.md#autoscale-evaluation). For other valuable information on flapping and understanding how to monitor the autoscale engine, see [Autoscale best practices](autoscale-best-practices.md#choose-the-thresholds-carefully-for-all-metric-types) and [Troubleshooting autoscale](autoscale-troubleshoot.md), respectively.
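
The effect of a cool-down period can be sketched as a gate that discards scale actions arriving before the window elapses. This illustrates the idea only, with an assumed 5-minute default; see the linked articles for how the engine actually evaluates cool-down.

```python
from datetime import datetime, timedelta

class CooldownGate:
    """Suppress scale actions that arrive within the cool-down window.

    A sketch of the concept: after any scale action, further actions
    are ignored until the cool-down elapses, which damps "flapping".
    """
    def __init__(self, cooldown=timedelta(minutes=5)):
        self.cooldown = cooldown
        self.last_action = None

    def try_scale(self, now):
        if self.last_action is not None and now - self.last_action < self.cooldown:
            return False  # still cooling down; skip this action
        self.last_action = now
        return True
```

Without such a gate, a metric hovering near a threshold could add and remove instances on every evaluation pass.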
## Route traffic to healthy instances (App Service) <a id="health-check-path"></a>
-When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [this article on App Service Health check](../../app-service/monitor-instances-health-check.md).
+When your Azure web app is scaled out to multiple instances, App Service can perform health checks on your instances to route traffic to the healthy instances. To learn more, see [Monitor App Service instances using Health check](../../app-service/monitor-instances-health-check.md).
+
+## Move autoscale to a different region
+
+This section describes how to move Azure autoscale to another region under the same subscription and resource group. You can use the REST API to move autoscale settings.
-## Moving Autoscale to a different region
-This section describes how to move Azure autoscale to another region under the same Subscription, and Resource Group. You can use REST API to move autoscale settings.
-### Prerequisite
-1. Ensure that the subscription and Resource Group are available and the details in both the source and destination regions are identical.
-1. Ensure that Azure autoscale is available in the [Azure region you want to move to](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all).
+### Prerequisites
+
+- Ensure that the subscription and resource group are available and the details in both the source and destination regions are identical.
+- Ensure that Azure autoscale is available in the [Azure region you want to move to](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all).
### Move

Use [REST API](/rest/api/monitor/autoscalesettings/createorupdate) to create an autoscale setting in the new environment. The autoscale setting created in the destination region will be a copy of the autoscale setting in the source region.
-[Diagnostic settings](../essentials/diagnostic-settings.md) that were created in association with the autoscale setting in the source region cannot be moved. You will need to recreate diagnostic settings in the destination region, after the creation of autosale settings is completed.
+[Diagnostic settings](../essentials/diagnostic-settings.md) that were created in association with the autoscale setting in the source region can't be moved. You'll need to re-create diagnostic settings in the destination region, after the creation of autoscale settings is completed.
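
As a sketch of the Create Or Update call, the helper below builds the management URL for an autoscale setting. The path segments follow the linked REST reference; the `api-version` value here is only an example, so use the version that reference documents.

```python
def autoscale_setting_url(subscription_id, resource_group, setting_name,
                          api_version="2015-04-01"):
    """Build the management URL for the autoscale settings
    Create Or Update (PUT) call. The api-version default is an
    example value; consult the REST reference for the current one.
    """
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Insights/autoscalesettings"
        f"/{setting_name}?api-version={api_version}"
    )
```

You would PUT the autoscale setting body (copied from the source region) to this URL for the destination region's resource group.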
### Learn more about moving resources across Azure regions
-To learn more about moving resources between regions and disaster recovery in Azure, refer to [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md)
+
+To learn more about moving resources between regions and disaster recovery in Azure, see [Move resources to a new resource group or subscription](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
## Next steps

-- [Create an Activity Log Alert to monitor all Autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
-- [Create an Activity Log Alert to monitor all failed Autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
+- [Create an activity log alert to monitor all autoscale engine operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-alert)
+- [Create an activity log alert to monitor all failed autoscale scale-in/scale-out operations on your subscription](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/monitor-autoscale-failed-alert)
<!--Reference-->
azure-monitor Autoscale Predictive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-predictive.md
Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)
-description: Details on the new predictive autoscale feature in Azure Monitor.
+ Title: Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)
+description: This article provides information on the new predictive autoscale feature in Azure Monitor.
Last updated 07/18/2022
-# Use predictive autoscale to scale out before load demands in virtual machine scale sets (Preview)
+# Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)
-**Predictive autoscale** uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. By observing and learning from historical usage, it predicts the overall CPU load ensuring scale-out occurs in time to meet the demand.
+*Predictive autoscale* uses machine learning to help manage and scale Azure Virtual Machine Scale Sets with cyclical workload patterns. It forecasts the overall CPU load to your virtual machine scale set, based on your historical CPU usage patterns. It predicts the overall CPU load by observing and learning from historical usage. This process ensures that scale-out occurs in time to meet the demand.
-Predictive autoscale needs a minimum of 7 days of history to provide predictions, though 15 days of historical data provides the most accurate results. It adheres to the scaling boundaries you have set for your virtual machine scale set. When the system predicts that the percentage CPU load of your virtual machine scale set will cross your scale-out boundary, new instances are added according to your specifications. You can also configure how far in advance you would like new instances to be provisioned, up to 1 hour before the predicted workload spike will occur.
+Predictive autoscale needs a minimum of 7 days of history to provide predictions. The most accurate results come from 15 days of historical data.
-**Forecast only** allows you to view your predicted CPU forecast without actually triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before enabling the predictive autoscale feature.
+Predictive autoscale adheres to the scaling boundaries you've set for your virtual machine scale set. When the system predicts that the percentage CPU load of your virtual machine scale set will cross your scale-out boundary, new instances are added according to your specifications. You can also configure how far in advance you want new instances to be provisioned, up to 1 hour before the predicted workload spike will occur.
-## Public preview support, availability and limitations
+*Forecast only* allows you to view your predicted CPU forecast without triggering the scaling action based on the prediction. You can then compare the forecast with your actual workload patterns to build confidence in the prediction models before you enable the predictive autoscale feature.
+
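To build intuition for a cyclical forecast, the sketch below predicts a future CPU reading by averaging readings taken at the same phase of earlier cycles. This is a toy illustration, not the machine learning model predictive autoscale uses; recall that the real feature needs at least 7 days of history.

```python
def forecast_cpu(history, period=24, lead=1):
    """Naive cyclical forecast: predict CPU `lead` samples ahead by
    averaging the readings taken at the same phase of earlier cycles.

    `history` is a list of hourly CPU percentages; `period` is the
    cycle length in samples (24 for a daily cycle). Illustrative only.
    """
    target = (len(history) + lead - 1) % period  # phase of the predicted sample
    same_phase = [v for i, v in enumerate(history) if i % period == target]
    return sum(same_phase) / len(same_phase)
```

With a daily cycle, this amounts to "predict the next hour from the same hour on previous days", which is the intuition behind pre-provisioning instances before a recurring spike.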
+## Public preview support, availability, and limitations
>[!NOTE]
-> This is a public preview release. We are testing and gathering feedback for future releases. As such, we do not provide production level support for this feature. Support is best effort. Send feature suggestions or feedback on predicative autoscale to predautoscalesupport@microsoft.com.
+> This release is a public preview. We're testing and gathering feedback for future releases. As such, we do not provide production-level support for this feature. Support is best effort. Send feature suggestions or feedback on predictive autoscale to predautoscalesupport@microsoft.com.
During public preview, predictive autoscale is only available in the following regions:
The following limitations apply during public preview. Predictive autoscale:

- Only works for workloads exhibiting cyclical CPU usage patterns.
-- Only can be enabled for Virtual Machine Scale Sets.
+- Can only be enabled for virtual machine scale sets.
- Only supports using the metric *Percentage CPU* with the aggregation type *Average*.
-- Only supports scale-out. You can't use predictive autoscale to scale-in.
+- Only supports scale-out. You can't use predictive autoscale to scale in.
+
+You must enable standard (or reactive) autoscale to manage scale-in.
-You have to enable standard (or reactive) autoscale to manage scale-in.
-Enabling predictive autoscale or forecast only with Azure portal
+## Enable predictive autoscale or forecast only with the Azure portal
-1. Go to the virtual machine scale set screen and select on **Scaling**.
+1. Go to the **Virtual machine scale set** screen and select **Scaling**.
- :::image type="content" source="media/autoscale-predictive/main-scaling-screen-1.png" alt-text="Screenshot showing selecting the scaling screen from the left hand menu in Azure portal":::
+ :::image type="content" source="media/autoscale-predictive/main-scaling-screen-1.png" alt-text="Screenshot that shows selecting Scaling on the left menu in the Azure portal.":::
-2. Under **Custom autoscale** section, there's a new field called **Predictive autoscale**.
+1. Under the **Custom autoscale** section, **Predictive autoscale** appears.
- :::image type="content" source="media/autoscale-predictive/custom-autoscale-2.png" alt-text="Screenshot sowing selecting custom autoscale and then predictive autoscale option from Azure portal":::
+ :::image type="content" source="media/autoscale-predictive/custom-autoscale-2.png" alt-text="Screenshot that shows selecting Custom autoscale and the Predictive autoscale option in the Azure portal.":::
- Using the drop-down selection, you can:
- - Disable predictive autoscale - Disable is the default selection when you first land on the page for predictive autoscale.
- - Enable forecast only mode
- - Enable predictive autoscale
+ By using the dropdown selection, you can:
+ - Disable predictive autoscale. Disable is the default selection when you first land on the page for predictive autoscale.
+ - Enable forecast-only mode.
+ - Enable predictive autoscale.
- > [!NOTE]
- > Before you can enable predictive autoscale or forecast only mode, you must set up the standard reactive autoscale conditions.
+ > [!NOTE]
+ > Before you can enable predictive autoscale or forecast-only mode, you must set up the standard reactive autoscale conditions.
-3. To enable forecast only, select it from the dropdown. Define a scale up trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast only mode, choose **Disable** from the drop-down.
+1. To enable forecast-only mode, select it from the dropdown. Define a scale-up trigger based on *Percentage CPU*. Then select **Save**. The same process applies to enable predictive autoscale. To disable predictive autoscale or forecast-only mode, select **Disable** from the dropdown.
- :::image type="content" source="media/autoscale-predictive/enable-forecast-only-mode-3.png" alt-text="Screenshot of enable forecast only mode":::
+ :::image type="content" source="media/autoscale-predictive/enable-forecast-only-mode-3.png" alt-text="Screenshot that shows enabling forecast-only mode.":::
-4. If desired, specify a pre-launch time so the instances are full running before they're needed. You can pre-launch instances between 5 and 60 minutes before the needed prediction time.
+1. If desired, specify a pre-launch time so the instances are fully running before they're needed. You can pre-launch instances between 5 and 60 minutes before the needed prediction time.
- :::image type="content" source="media/autoscale-predictive/pre-launch-4.png" alt-text="Screenshot of predictive autoscale pre-launch setup":::
+ :::image type="content" source="media/autoscale-predictive/pre-launch-4.png" alt-text="Screenshot that shows predictive autoscale pre-launch setup.":::
-5. Once you have enabled predictive autoscale or forecast only and saved it, select *Predictive charts*.
+1. After you've enabled predictive autoscale or forecast-only mode and saved it, select **Predictive charts**.
- :::image type="content" source="media/autoscale-predictive/predictve-charts-option-5.png" alt-text="Screenshot of selecting predictive charts menu option":::
+ :::image type="content" source="media/autoscale-predictive/predictve-charts-option-5.png" alt-text="Screenshot that shows selecting the Predictive charts menu option.":::
-6. You see three charts:
+1. You see three charts:
- :::image type="content" source="media/autoscale-predictive/predictive-charts-6.png" alt-text="Screenshot of three charts for predictive autoscale" lightbox="media/autoscale-predictive/predictive-charts-6.png":::
+ :::image type="content" source="media/autoscale-predictive/predictive-charts-6.png" alt-text="Screenshot that shows three charts for predictive autoscale." lightbox="media/autoscale-predictive/predictive-charts-6.png":::
-- The top chart shows an overlaid comparison of actual vs predicted total CPU percentage. The timespan of the graph shown is from the last 24 hours to the next 24 hours.
-- The second chart shows the number of instances running at specific times over the last 24 hours.
-- The third chart shows the current Average CPU utilization over the last 24 hours.
+ - The top chart shows an overlaid comparison of actual versus predicted total CPU percentage. The time span of the graph shown is from the last 24 hours to the next 24 hours.
+ - The middle chart shows the number of instances running at specific times over the last 24 hours.
+ - The bottom chart shows the current Average CPU utilization over the last 24 hours.
## Enable using an Azure Resource Manager template
-1. Retrieve the virtual machine scale set resource ID and resource group of your virtual machine scale set. For example: /subscriptions/e954e48d-abcd-abcd-abcd-3e0353cb45ae/resourceGroups/patest2/providers/Microsoft.Compute/virtualMachineScaleSets/patest2
+1. Retrieve the virtual machine scale set resource ID and resource group of your virtual machine scale set. For example: /subscriptions/e954e48d-abcd-abcd-abcd-3e0353cb45ae/resourceGroups/patest2/providers/Microsoft.Compute/virtualMachineScaleSets/patest2
-2. Update *autoscale_only_parameters* file with the virtual machine scale set resource ID and any autoscale setting parameters.
+1. Update the *autoscale_only_parameters* file with the virtual machine scale set resource ID and any autoscale setting parameters.
-3. Use a PowerShell command to deploy the template containing the autoscale settings. For example,
+1. Use a PowerShell command to deploy the template that contains the autoscale settings. For example:
```cmd
PS G:\works\kusto_onboard\test_arm_template> new-azurermresourcegroupdeployment -name binzAutoScaleDeploy -resourcegroupname cpatest2 -templatefile autoscale_only.json -templateparameterfile autoscale_only_parameters.json
```

**autoscale_only.json**

```json
}
}
```
-
-For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md)
+
+For more information on Azure Resource Manager templates, see [Resource Manager template overview](../../azure-resource-manager/templates/overview.md).
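To illustrate what the settings in *autoscale_only.json* cover, here's a minimal sketch of an autoscale settings resource that pairs a standard reactive rule with the predictive policy. The parameter name `vmssResourceId` and the profile values are illustrative assumptions, and the `predictiveAutoscalePolicy` property names are based on the preview API version, so verify them against the current template reference before use:

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2021-05-01-preview",
  "name": "cpu-predictive-autoscale",
  "location": "[resourceGroup().location]",
  "properties": {
    "enabled": true,
    "targetResourceUri": "[parameters('vmssResourceId')]",
    "predictiveAutoscalePolicy": {
      "scaleMode": "ForecastOnly",
      "scaleLookAheadTime": "PT30M"
    },
    "profiles": [
      {
        "name": "defaultProfile",
        "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[parameters('vmssResourceId')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT10M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 70
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```

Setting `scaleMode` to `ForecastOnly` generates predictions without scaling; switching it to `Enabled` lets the predicted *Percentage CPU* trigger scale-out up to `scaleLookAheadTime` in advance.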
## Common questions
+This section answers common questions.
+
### What happens over time when you turn on predictive autoscale for a virtual machine scale set?
-Prediction autoscale uses the history of a running virtual machine scale set. If your scale set has been running less than 7 days, you'll receive a message that the model is being trained. See the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by achieving its maximum accuracy 15 days after the virtual machine scale set is created.
+Predictive autoscale uses the history of a running virtual machine scale set. If your scale set has been running less than 7 days, you'll receive a message that the model is being trained. For more information, see the [no predictive data message](#errors-and-warnings). Predictions improve as time goes by and achieve maximum accuracy 15 days after the virtual machine scale set is created.
-If changes to the workload pattern occur (but remain periodic), the model recognizes the change and begins to adjust the forecast accordingly. The forecast improves as time goes by. Maximum accuracy is reached 15 days after the change in the traffic pattern happens. Remember that your standard autoscale rules still apply. If a new unpredicted increase in traffic occurs, your virtual machine scale set will still scale out to meet the demand.
+If changes to the workload pattern occur but remain periodic, the model recognizes the change and begins to adjust the forecast. The forecast improves as time goes by. Maximum accuracy is reached 15 days after the change in the traffic pattern happens. Remember that your standard autoscale rules still apply. If a new unpredicted increase in traffic occurs, your virtual machine scale set will still scale out to meet the demand.
### What if the model isn't working well for me?
-The modeling works best with workloads that exhibit periodicity. We recommended you first evaluate the predictions by enabling "forecast only" which will overlay the scale set's predicted CPU usage with the actual, observed usage. Once you compare and evaluate the results, you can then choose to enable scaling based on the predicted metrics if the model predictions are close enough for your scenario.
+The modeling works best with workloads that exhibit periodicity. We recommend that you first evaluate the predictions by enabling "forecast only," which will overlay the scale set's predicted CPU usage with the actual, observed usage. After you compare and evaluate the results, you can then choose to enable scaling based on the predicted metrics if the model predictions are close enough for your scenario.
+
+### Why do I need to enable standard autoscale before I enable predictive autoscale?
-### Why do I need to enable standard autoscale before enabling predictive autoscale?
+Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes, which aren't part of your typical CPU load pattern. It also provides a fallback if an error occurs in retrieving the predictive data.
-Standard autoscaling is a necessary fallback if the predictive model doesn't work well for your scenario. Standard autoscale will cover unexpected load spikes which aren't part of your typical CPU load pattern. It also provides a fallback should there be any error retrieving the predictive data.
+## Errors and warnings
-## Errors and Warnings
+This section addresses common errors and warnings.
### Didn't enable standard autoscale
-
-You receive the error message as seen below:
- *Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*.
+You receive the following error message:
+
+ *Predictive autoscale is based on the metric percentage CPU of the current resource. Choose this metric in the scale up trigger rules*.
This message means you attempted to enable predictive autoscale before you enabled standard autoscale and set it up to use the *Percentage CPU* metric with the *Average* aggregation type.

### No predictive data
-You won't see data on the predictive charts under certain conditions. This isn't an error; it's the intended behavior.
+You won't see data on the predictive charts under certain conditions. This behavior isn't an error; it's the intended behavior.
-When predictive autoscale is disabled, you instead receive a message beginning with "No data to show..." and giving you instructions on what to enable so you can see a predictive chart.
+When predictive autoscale is disabled, you instead receive a message that begins with "No data to show..." You then see instructions on what to enable so that you can see a predictive chart.
- :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot of message No data to show":::
+ :::image type="content" source="media/autoscale-predictive/error-no-data-to-show.png" alt-text="Screenshot that shows the message No data to show.":::
-When you first create a virtual machine scale set and enable forecast only mode, you receive a message telling you "Predictive data is being trained.." and a time to return to see the chart.
+When you first create a virtual machine scale set and enable forecast-only mode, you receive the message "Predictive data is being trained..." and a time to return to see the chart.
- :::image type="content" source="media/autoscale-predictive/message-being-trained-12.png" alt-text="Screenshot of message Predictive data is being trained":::
+ :::image type="content" source="media/autoscale-predictive/message-being-trained-12.png" alt-text="Screenshot that shows the message Predictive data is being trained.":::
## Next steps
-Learn more about Autoscale by referring to the following:
+Learn more about autoscale in the following articles:
- [Overview of autoscale](./autoscale-overview.md)
- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)
azure-monitor Data Collection Rule Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md
This section contains a declaration of all the destinations where the data will be sent.
This section ties the other sections together. Defines the following for each stream declared in the `streamDeclarations` section:
- `destination` from the `destinations` section where the data will be sent.
-- `transformKql` which is the [transformation](/data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.
+- `transformKql` which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.
- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of the outputStream will have the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.

## Azure Monitor agent
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
ms.reviewer: nikeist
# Structure of transformation in Azure Monitor (preview)
-[Transformations in Azure Monitor](/data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
+[Transformations in Azure Monitor](data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
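As a sketch of how such a KQL statement is embedded, the transformation lives in the `transformKql` property of a DCR data flow, where `source` stands for the incoming stream. The stream, destination, and column names below are illustrative assumptions, not taken from the article:

```json
{
  "dataFlows": [
    {
      "streams": [ "Custom-MyAppLogs" ],
      "destinations": [ "centralWorkspace" ],
      "transformKql": "source | where SeverityLevel != 'Verbose' | project TimeGenerated, Message, SeverityLevel",
      "outputStream": "Custom-MyAppLogs_CL"
    }
  ]
}
```

The query must start from `source` and return the shape of the target table; rows filtered out by the `where` clause are never ingested.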
## Transformation structure
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
azure-resource-manager Child Resource Name Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/child-resource-name-type.md
The following example shows the child resource outside of the parent resource. Y
] ```
-When defined outside of the parent resource, you format the type and with slashes to include the parent type and name.
+When defined outside of the parent resource, you format the type and name values with slashes to include the parent type and name.
```json "type": "{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}",
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/call-logs-azure-monitor.md
The Call Summary Log contains data to help you identify key properties of all Ca
| operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
| correlationIdentifier | `correlationIdentifier` is the unique ID for a Call. The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call, and it can be used to join data from different logs. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
-| identifier | This is the unique ID for the user, matching the identity assigned by the Communications Authentication service. You can use this ID to correlate user events across different logs. This ID can also be used to identify Microsoft Teams "Interoperability" scenarios described later in this document. |
+| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
| callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
| callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
| callType | Will contain either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as `"Group"` Call prior to the connection. |
Call Diagnostic Logs provide important information about the Endpoints and the m
| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
| correlationIdentifier | The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationIdentifier` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
| participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
-| identifier | This ID represents the user identity, as defined by the Authentication service. Use this ID to correlate different events across calls and services. |
+| identifier | This is the unique ID for the user. The identity can be an Azure Communications Services user, Azure AD user ID, Teams anonymous user ID or Teams bot ID. You can use this ID to correlate user events across different logs. |
| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationIdentifier`) for native clients but will be unique for every Call when the client is a web browser. |
| endpointType | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
| mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
"jitterAvg": "1", "jitterMax": "4", "packetLossRateAvg": "0",
-```
+```
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers the following types of logs that you can enable:
| URI | The URI of the request. |
| SdkType | The SDK type used in the request. |
| PlatformType | The platform type used in the request. |
-| Identity | The Communication Services identity related to the operation. |
+| Identity | The identity of Azure Communication Services or Teams user related to the operation. |
| Scopes | The Communication Services scopes present in the access token. | ### Network Traversal operational logs
Communication Services offers the following types of logs that you can enable:
| RecipientId | The email address for the targeted recipient. If this is a message-level event, the property will be empty. |
| EngagementType | The type of user engagement being tracked. |
| EngagementContext | The context represents what the user interacted with. |
-| UserAgent | The user agent string from the client. |
+| UserAgent | The user agent string from the client. |
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing.md
Note: USA mixed rates to `+1-425` is $0.013. Refer to the following link for det
**Total cost for the VoIP + escalation call**: $0.16 + $0.13 = $0.29
-
-### Pricing example: A user of the Communication Services JavaScript SDK joins a scheduled Microsoft Teams meeting
-
-Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
-- The call lasts a total of 30 minutes.
-- When Bob joins the meeting, he's placed in the Teams meeting lobby per Teams policy. After one minute, Alice admits him into the meeting.
-- After Bob is admitted to the meeting, Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
-- Alice sends five messages, Bob replies with three messages.
-
-
-**Cost calculations**
-- One Participant (Bob) connected to Teams lobby x 1 minute x $0.004 per participant per minute (lobby charged at regular rate of meetings) = $0.004
-- One participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]
-- One participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.
-- One participant (Bob) x three chat messages x $0.0008 = $0.0024.
-- One participant (Alice) x five chat messages x $0.000 = $0.0*.
-
-*Alice's participation is covered by her Teams license. Your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services Users for your convenience, but those minutes and messages originating from the Teams client won't be charged.
-
-**Total cost for the visit**:
-- User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224
-- User joining on Teams Desktop Application: $0 (covered by Teams license)
-
-### Pricing example: Inbound PSTN call to the Communication Services JavaScript SDK with Teams identity elevated to group call with another Teams user on Teams desktop client
-
-Alice has ordered a product from Contoso and struggles to set it up. Alice calls from her phone (Android) 800-CONTOSO to ask for help with the received product. Bob is a customer support agent in Contoso and sees an incoming call from Alice on the customer support website (Windows, Chrome browser). Bob accepts the incoming call via Communication Services JavaScript SDK initialized with Teams identity. Teams calling plan enables Bob to receive PSTN calls. Bob sees on the website the product ordered by Alice. Bob decides to invite product expert Charlie to the call. Charlie sees an incoming group call from Bob in the Teams Desktop client and accepts the call.
-- The call lasts a total of 30 minutes.
-- Bob accepts the call from Alice.
-- After five minutes, Bob adds Charlie to the call. Charlie has his camera turned off for 10 minutes. Then turns his camera on for the rest of the call.
-- After another 10 minutes, Alice leaves the call.
-- After another five minutes, both Bob and Charlie leave the call
-
-**Cost calculations**
-- One Participant (Alice) called the phone number associated with Teams user Bob using Teams Calling plan x 25 minutes deducted from Bob's tenant Teams minute pool
-- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]
-- One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
-
-*Charlie's participation is covered by his Teams license.
-
-**Total cost of the visit**:
-- Teams cost for a user joining using the Communication Services JavaScript SDK: 25 minutes from Teams minute pool
-- Communication Services cost for a user joining using the Communication Services JavaScript SDK: $0.12
-- User joining on Teams Desktop client: $0 (covered by Teams license)
-

## Call Recording

Azure Communication Services allows customers to record PSTN, WebRTC, Conference, SIP Interface calls. Currently Call Recording supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. Call Recording SDKs are available for Java and C#. Refer to [this page to learn more](../quickstarts/voice-video-calling/call-recording-sample.md).
communication-services Teams Interop Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing/teams-interop-pricing.md
+
+ Title: Pricing for Teams interop scenarios
+
+description: Learn about Communication Services' Pricing Model for Teams interoperability
+
+ Last updated : 08/01/2022
+
+# Teams interoperability pricing
+
+Azure Communication Services and Graph API allow developers to integrate chat and calling capabilities into any product. The pricing depends on the following factors:
+- Identity
+- Product used for real-time communication
+
+The following sections cover communication pricing based on the criteria mentioned above.
+
+## Communication as Teams guest
+
+A Teams guest is a user who doesn't belong to any Azure AD tenant. A Teams administrator regulates Teams guest access via policies targeting `Teams anonymous users`.
+
+### Teams clients
+The Teams meeting organizer's license covers the usage generated by Teams guests joining a Teams meeting via the built-in experience in the Teams web, desktop, and mobile clients. The Teams meeting organizer's license does not cover the usage generated in third-party Teams extensions and Teams apps. The following table shows the price of using Teams clients as a Teams guest:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Teams web, mobile, desktop client | $0|
+| Receive message | Teams web, mobile, desktop client | $0 |
+| Teams guest participates in Teams meeting with audio, video, screen sharing, and TURN services | Teams web, mobile, desktop client | $0 per minute |
+
+### APIs
+External customers joining Teams meeting's audio, video, screen sharing, or chat create usage on Azure Communication Services resource. Teams extensions and Teams apps will use existing APIs to integrate communication, which generates consumption on Azure Communication Services resources. The following table shows the price of using Azure Communication Services as Teams guests:
+
+| Action | Tool | Price|
+|--|| --|
+| Send message | Azure Communication Services | $0.0008|
+| Receive message | Azure Communication Services | $0 |
+| Teams guest participates in Teams meeting with audio, video, screen sharing, and TURN services | Azure Communication Services | $0.004 per minute |
+
A Teams guest in the lobby or on hold generates consumption on the Azure Communication Services resource.
+
+## Communication as Teams user
+
+Teams user is an Azure AD user with appropriate licenses. Teams users can be from the same or different organizations, depending on the Azure AD tenant. Teams administrator regulates the communication of Teams users via policies targeting `people in my organization` and `people in trusted organization`.
+
+### Teams clients
+The Teams meeting organizer's license covers the usage generated by Teams users joining Teams meetings and participating in calls via the built-in experience in the Teams web, desktop, and mobile clients. The Teams license doesn't cover usage generated in third-party Teams extensions and Teams apps. The following table shows the price of using Teams clients as a Teams user:
+
+| Action | Tool | Price|
+|--|--|--|
+| Send message | Teams web, mobile, desktop client | $0|
+| Receive message | Teams web, mobile, desktop client | $0 |
+| Teams user participates in Teams meeting with audio, video, screen sharing, and TURN services | Teams web, mobile, desktop client | $0 per minute |
+
+### APIs
+Teams users participating in Teams meetings and calls generate usage on Azure Communication Services resources and Graph API for audio, video, screen sharing, and chat. Teams extensions and Teams apps will use existing APIs to integrate communication, which generates consumption on Azure Communication Services resources or Graph API. The following table shows the price of using Azure Communication Services as Teams user:
+
+| Action | Tool | Price|
+|--|--|--|
+| Send message | Graph API | $0|
+| Receive message | Graph API | $0 |
+| Teams user participates in Teams meeting with audio, video, screen sharing, and TURN services | Azure Communication Services | $0.004 per minute |
+
+A Teams user in the lobby or on hold generates consumption on the Azure Communication Services resource.
+
+## Pricing scenarios
+
+### Teams guest joins scheduled Microsoft Teams meeting via Azure Communication Services SDK
+
+Alice is a doctor meeting with her patient, Bob. Alice will be joining the visit from the Teams Desktop application. Bob will receive a link to join using the healthcare provider website, which connects to the meeting using the Communication Services JavaScript SDK. Bob will use his mobile phone to enter the meeting using a web browser (iPhone with Safari). Chat will be available during the virtual visit.
+
+- The call lasts a total of 30 minutes.
+- When Bob joins the meeting, he's placed in the Teams meeting lobby per Teams policy. After one minute, Alice admits him into the meeting.
+- After Bob is admitted to the meeting, Alice and Bob participate for the entire call. Alice turns on her video five minutes after the call starts and shares her screen for 13 minutes. Bob has his video on for the whole call.
+- Alice sends five messages, Bob replies with three messages.
+
+**Cost calculations**
+
+- One participant (Bob) connected to the Teams lobby x 1 minute x $0.004 per participant per minute (the lobby is charged at the regular meeting rate) = $0.004
+- One participant (Bob) x 29 minutes x $0.004 per participant per minute = $0.116 [both video and audio are charged at the same rate]
+- One participant (Alice) x 30 minutes x $0.000 per participant per minute = $0.0*.
+- One participant (Bob) x three chat messages x $0.0008 = $0.0024.
+- One participant (Alice) x five chat messages x $0.000 = $0.0*.
+
+*Alice's participation is covered by her Teams license. For your convenience, your Azure invoice will show the minutes and chat messages that Teams users had with Communication Services users, but minutes and messages originating from the Teams client won't be charged.
+
+**Total cost for the visit**:
+- User joining using the Communication Services JavaScript SDK: $0.004 + $0.116 + $0.0024 = $0.1224
+- User joining on Teams Desktop Application: $0 (covered by Teams license)
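+The arithmetic above can be sketched in a few lines of JavaScript (the rates and durations are the ones stated in this scenario; the variable names are illustrative):

```javascript
// Cost of Bob's visit, using the scenario's per-minute and per-message rates.
const ratePerMinute = 0.004;   // Teams guest audio/video/screen sharing, per participant per minute
const ratePerMessage = 0.0008; // chat message sent via Azure Communication Services

const lobbyCost = 1 * ratePerMinute;    // 1 minute in the Teams lobby
const meetingCost = 29 * ratePerMinute; // 29 minutes in the meeting after admission
const chatCost = 3 * ratePerMessage;    // 3 chat messages sent by Bob

const total = lobbyCost + meetingCost + chatCost;
console.log(`$${total.toFixed(4)}`); // "$0.1224"
```

+Alice's minutes and messages are billed at $0 because her Teams license covers them, so they drop out of the total.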
+
+### Inbound phone call to a Teams user using the Azure Communication Services SDK, elevated to a group call with another Teams user on the Teams desktop client
+
+Alice has ordered a product from Contoso and struggles to set it up. Alice calls 800-CONTOSO from her phone (Android) to ask for help with the received product. Bob is a customer support agent at Contoso and sees an incoming call from Alice on the customer support website (Windows, Chrome browser). Bob accepts the incoming call via the Communication Services JavaScript SDK initialized with a Teams identity. A Teams calling plan enables Bob to receive PSTN calls. Bob sees on the website the product ordered by Alice. Bob decides to invite the product expert Charlie to the call. Charlie sees an incoming group call from Bob in the Teams desktop client and accepts the call.
+
+- The call lasts a total of 30 minutes.
+- Bob accepts the call from Alice.
+- After five minutes, Bob adds Charlie to the call. Charlie has his camera turned off for 10 minutes, then turns it on for the rest of the call.
+- After another 10 minutes, Alice leaves the call.
+- After another five minutes, both Bob and Charlie leave the call.
+
+**Cost calculations**
+
+- One participant (Alice) called the phone number associated with Teams user Bob using the Teams calling plan x 25 minutes, deducted from Bob's tenant Teams minute pool
+- One participant (Bob) x 30 minutes x $0.004 per participant per minute = $0.12 [both video and audio are charged at the same rate]
+- One participant (Charlie) x 25 minutes x $0.000 per participant per minute = $0.0*.
+
+*Charlie's participation is covered by his Teams license.
+
+**Total cost of the visit**:
+- Teams cost for a user joining using the Communication Services JavaScript SDK: 25 minutes from Teams minute pool
+- Communication Services cost for a user joining using the Communication Services JavaScript SDK: $0.12
+- User joining on Teams Desktop client: $0 (covered by Teams license)
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
The Calling package supports UWP apps built with .NET Native or C++/WinRT on:
## REST APIs
-Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs).
+Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on the [service limits page](./service-limits.md).
### REST API Throttles
confidential-computing Confidential Node Pool Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-node-pool-aks.md
+
+ Title: Confidential VM node pools support on AKS with AMD SEV-SNP confidential VMs - Preview
+description: Learn about confidential node pool support on AKS with AMD SEV-SNP confidential VMs
+Last updated : 8/1/2022
+# Confidential VM node pool support on AKS with AMD SEV-SNP confidential VMs - Preview
+
+[Azure Kubernetes Service (AKS)](../aks/index.yml) makes it simple to deploy a managed Kubernetes cluster in Azure. In AKS, nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications.
+
+AKS now supports confidential VM node pools with Azure confidential VMs. These confidential VMs are the [generally available DCasv5 and ECasv5 confidential VM-series](https://aka.ms/AMD-ACC-VMs-GA-Inspire-2022) utilizing 3rd Gen AMD EPYC<sup>TM</sup> processors with Secure Encrypted Virtualization-Secure Nested Paging ([SEV-SNP](https://www.amd.com/en/technologies/infinity-guard)) security features. To read more about this offering, head to our [announcement](https://aka.ms/ACC-AKS-AMD-SEV-SNP-Preview-Blog).
+
+## Benefits
+Confidential node pools leverage VMs with a hardware-based Trusted Execution Environment (TEE). AMD SEV-SNP confidential VMs deny the hypervisor and other host management code access to VM memory and state, and add defense in depth protections against operator access.
+
+In addition to the hardened security profile, confidential node pools on AKS also enable:
+
+- Lift and shift with full AKS feature support - to enable a seamless lift-and-shift of Linux container workloads
+- Heterogeneous node pools - to store sensitive data in a VM-level TEE node pool with memory encryption keys generated from the chipset itself
+
+Get started and add confidential node pools to an existing AKS cluster with [this quickstart guide](../aks/use-cvm.md).
+
+## Questions?
+
+If you have questions about container offerings, please reach out to <acconaks@microsoft.com>.
+
+## Next steps
+
+- [Deploy a confidential node pool in your AKS cluster](../aks/use-cvm.md)
+- Learn more about sizes and specs for [general purpose](../virtual-machines/dcasv5-dcadsv5-series.md) and [memory-optimized](../virtual-machines/ecasv5-ecadsv5-series.md) confidential VMs.
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 05/28/2022 Last updated : 07/30/2022 tags: connectors
You can add network security to an Azure storage account by [restricting access
- To access storage accounts behind firewalls using the Azure Blob Storage managed connector in Consumption, Standard, and ISE-based logic apps, review the following documentation:
- - [Access storage accounts in same region with managed identities](#access-blob-storage-in-same-region-with-managed-identities)
+ - [Access storage accounts in same region with system-managed identities](#access-blob-storage-in-same-region-with-system-managed-identities)
- [Access storage accounts in other regions](#access-storage-accounts-in-other-regions)
To add your outbound IP addresses to the storage account firewall, follow these
You don't have to create a private endpoint. You can just permit traffic through the ISE outbound IPs on the storage account.
-### Access Blob Storage in same region with managed identities
+### Access Blob Storage in same region with system-managed identities
To connect to Azure Blob Storage in any region, you can use [managed identities for authentication](../active-directory/managed-identities-azure-resources/overview.md). You can create an exception that gives Microsoft trusted services, such as a managed identity, access to your storage account through a firewall.
To use managed identities in your logic app to access Blob Storage, follow these
> [!NOTE] > Limitations for this solution: >
-> - You must set up a managed identity to authenticate your storage account connection.
+> - To authenticate your storage account connection, you have to set up a system-assigned managed identity.
+> A user-assigned managed identity won't work.
>
-> - For Standard logic apps in the single-tenant Azure Logic Apps environment, only the system-assigned
-> managed identity is available and supported, not the user-assigned managed identity.
#### Configure storage account access
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az containerapp env show --name $CONTAINERAPP
az monitor log-analytics query \ --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" \
- --out table
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | sort by TimeGenerated | take 5" \
+ --out table
``` # [PowerShell](#tab/powershell)
$LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`
az monitor log-analytics query ` --workspace $LOG_ANALYTICS_WORKSPACE_CLIENT_ID `
- --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | take 5" `
+ --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'nodeapp' and (Log_s contains 'persisted' or Log_s contains 'order') | project ContainerAppName_s, Log_s, TimeGenerated | sort by TimeGenerated | take 5" `
--out table ```
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Container Registry description: Lists Azure Policy Regulatory Compliance controls available for Azure Container Registry. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
ms.devlang: javascript Previously updated : 06/23/2022 Last updated : 07/29/2022
-# Use a query in Azure Cosmos DB MongoDB API using JavaScript
+# Query data in Azure Cosmos DB MongoDB API using JavaScript
[!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
-Use queries to find documents in a collection.
+Use [queries](#query-for-documents) and [aggregation pipelines](#aggregation-pipelines) to find and manipulate documents in a collection.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
The preceding code snippet displays the following example console output:
:::code language="console" source="~/samples-cosmosdb-mongodb-javascript/275-find/index.js" id="console_result_findone":::
+## Aggregation pipelines
+
+Aggregation pipelines are useful to isolate expensive query computation, transformations, and other processing on your Cosmos DB server, instead of performing these operations on the client.
+
+For specific **aggregation pipeline support**, refer to the following:
+
+* [Version 4.2](feature-support-42.md#aggregation-pipeline)
+* [Version 4.0](feature-support-40.md#aggregation-pipeline)
+* [Version 3.6](feature-support-36.md#aggregation-pipeline)
+* [Version 3.2](feature-support-32.md#aggregation-pipeline)
+
+### Aggregation pipeline syntax
+
+A pipeline is an array with a series of stages as JSON objects.
+
+```javascript
+const pipeline = [
+ stage1,
+ stage2
+]
+```
+
+### Pipeline stage syntax
+
+A _stage_ defines the operation and the data it's applied to, such as:
+
+* `$match` - find documents
+* `$addFields` - add a field to the cursor, usually from the previous stage
+* `$limit` - limit the number of results returned in the cursor
+* `$project` - pass along new or existing fields, which can be computed fields
+* `$group` - group results by a field or fields in the pipeline
+* `$sort` - sort results
+
+```javascript
+// reduce collection to relative documents
+const matchStage = {
+ '$match': {
+ 'categoryName': { $regex: 'Bikes' },
+ }
+}
+
+// sort documents on field `name`
+const sortStage = {
+    '$sort': {
+        'name': 1
+    }
+}
+```
+
+### Aggregate the pipeline to get iterable cursor
+
+The pipeline is aggregated to produce an iterable cursor.
+
+```javascript
+const databaseName = 'adventureworks';
+const collectionName = 'products';
+
+const aggCursor = client.db(databaseName).collection(collectionName).aggregate(pipeline);
+
+await aggCursor.forEach(product => {
+ console.log(JSON.stringify(product));
+});
+```
+
+## Use an aggregation pipeline in JavaScript
+
+Use a pipeline to keep data processing on the server before returning to the client.
+
+### Example product data
+
+The aggregations below use the [sample products collection](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/252-insert-many/products.json) with data in the shape of:
+
+```json
+[
+ {
+ "_id": "08225A9E-F2B3-4FA3-AB08-8C70ADD6C3C2",
+ "categoryId": "75BF1ACB-168D-469C-9AA3-1FD26BB4EA4C",
+ "categoryName": "Bikes, Touring Bikes",
+ "sku": "BK-T79U-50",
+ "name": "Touring-1000 Blue, 50",
+ "description": "The product called \"Touring-1000 Blue, 50\"",
+ "price": 2384.0700000000002,
+ "tags": [
+ ]
+ },
+ {
+ "_id": "0F124781-C991-48A9-ACF2-249771D44029",
+ "categoryId": "56400CF3-446D-4C3F-B9B2-68286DA3BB99",
+ "categoryName": "Bikes, Mountain Bikes",
+ "sku": "BK-M68B-42",
+ "name": "Mountain-200 Black, 42",
+ "description": "The product called \"Mountain-200 Black, 42\"",
+ "price": 2294.9899999999998,
+ "tags": [
+ ]
+ },
+ {
+ "_id": "3FE1A99E-DE14-4D11-B635-F5D39258A0B9",
+ "categoryId": "26C74104-40BC-4541-8EF5-9892F7F03D72",
+ "categoryName": "Components, Saddles",
+ "sku": "SE-T924",
+ "name": "HL Touring Seat/Saddle",
+ "description": "The product called \"HL Touring Seat/Saddle\"",
+ "price": 52.640000000000001,
+ "tags": [
+ ]
+ },
+]
+```
+
+### Example 1: Product subcategories, count of products, and average price
+
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/average-price-in-each-product-subcategory.js) to report on average price in each product subcategory.
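+To make the grouping in this example concrete, here's a plain JavaScript sketch of what a `$group` stage with `$sum` and `$avg` computes, run client-side over the three sample documents above. In practice the pipeline keeps this work on the server; the field names come from the sample data.

```javascript
// Client-side illustration of $group: count and average price per categoryName.
const products = [
  { categoryName: 'Bikes, Touring Bikes', price: 2384.07 },
  { categoryName: 'Bikes, Mountain Bikes', price: 2294.99 },
  { categoryName: 'Components, Saddles', price: 52.64 }
];

const groups = new Map();
for (const { categoryName, price } of products) {
  const g = groups.get(categoryName) ?? { count: 0, sum: 0 };
  g.count += 1;
  g.sum += price;
  groups.set(categoryName, g);
}

for (const [categoryName, { count, sum }] of groups) {
  console.log(categoryName, count, (sum / count).toFixed(2));
}
```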
+
+### Example 2: Bike types with price range
+
+Use the following [sample code](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples/blob/main/280-aggregation/bike-types-and-price-ranges.js) to report on the `Bikes` subcategory.
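+As a rough client-side sketch of what this example computes (the linked sample may differ), a `$match` stage keeps the `Bikes` documents and a `$group` stage with `$min` and `$max` reports their price range:

```javascript
// Client-side illustration of $match + $min/$max over the sample documents.
const products = [
  { categoryName: 'Bikes, Touring Bikes', price: 2384.07 },
  { categoryName: 'Bikes, Mountain Bikes', price: 2294.99 },
  { categoryName: 'Components, Saddles', price: 52.64 }
];

// $match equivalent: keep documents whose categoryName matches /Bikes/
const bikes = products.filter(p => /Bikes/.test(p.categoryName));

// $group equivalent: a single result with the min and max price
const priceRange = {
  minPrice: Math.min(...bikes.map(p => p.price)),
  maxPrice: Math.max(...bikes.map(p => p.price))
};

console.log(priceRange); // { minPrice: 2294.99, maxPrice: 2384.07 }
```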
+
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
cosmos-db Create Sql Api Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-java.md
>
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Java app using the SQL Java SDK, and then add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Java app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Java app using the SQL Java SDK, and add resources to your Cosmos DB account by using the Java application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
> [!IMPORTANT] > This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sql-api-sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4-sql.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4-sql.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4.
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
cosmos-db Create Sql Api Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-nodejs.md
> * [Spark v3](create-sql-api-spark.md) > * [Go](create-sql-api-go.md) >
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. Without a credit card or an Azure subscription, you can set up a free 30-day [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
## Walkthrough video
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
> * [Go](create-sql-api-go.md) >
-This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
+This tutorial is a quick start guide to show how to use the Cosmos DB Spark Connector to read from or write to Cosmos DB. The Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
Throughout this quick tutorial, we rely on [Azure Databricks Runtime 8.0 with Spark 3.1.1](/azure/databricks/release-notes/runtime/8.0) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector, but you can also use [Azure Databricks Runtime 10.3 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.3).
You can use any other Spark 3.1.1 or 3.2.1 spark offering as well, also you shou
## Prerequisites
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
+* An active Azure account. If you don't have one, you can sign up for a [free account](https://aka.ms/trycosmosdb). Alternatively, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
* [Azure Databricks](/azure/databricks/release-notes/runtime/8.0) runtime 8.0 with Spark 3.1.1 or [Azure Databricks](/azure/databricks/release-notes/runtime/10.3) runtime 10.3 with Spark 3.2.1.
cosmos-db Create Sql Api Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spring-data.md
> * [Go](create-sql-api-go.md) >
-In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal, then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB SQL API account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB SQL API account using the Azure portal (or, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)). You then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and add resources to your Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
> [!IMPORTANT] > These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sql-api-sdk-java-spring-v2.md).
In this quickstart, you create and manage an Azure Cosmos DB SQL API account fro
## Prerequisites -- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://aka.ms/trycosmosdb) without an Azure subscription or credit card. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with a URI of `https://localhost:8081` and the key `C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==`.
- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed. - A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. - [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
cosmos-db Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-template.md
# Quickstart: Create an Azure Cosmos DB and a container by using an ARM template [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Azure Cosmos DB is MicrosoftΓÇÖs fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos database and a container within that database. You can later store data in this container.
+Azure Cosmos DB is MicrosoftΓÇÖs fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying an Azure Resource Manager template (ARM template) to create an Azure Cosmos database and a container within that database. You can later store data in this container.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
> * [Go](create-sql-api-go.md) >
-Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Follow these steps to install the package and try out example code for basic tasks.
+Get started with the Azure Cosmos DB client library for .NET to create databases, containers, and items within your account. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). Follow these steps to install the package and try out example code for basic tasks.
> [!NOTE] > The [example code snippets](https://github.com/Azure-Samples/azure-cosmos-db-dotnet-quickstart) are available on GitHub as a .NET project.
Get started with the Azure Cosmos DB client library for .NET to create databases
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An Azure account with an active subscription. [Create an account for free](https://aka.ms/trycosmosdb).
* [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
cosmos-db Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/samples-dotnet.md
The [cosmos-db-sql-api-dotnet-samples](https://github.com/Azure-Samples/cosmos-d
## Prerequisites
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* An Azure account with an active subscription. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
* Azure Cosmos DB SQL API account. [Create a SQL API account](how-to-create-account.md). * [.NET 6.0 or later](https://dotnet.microsoft.com/download) * [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-application.md
> * [Python](./create-sql-api-python.md) >
-This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. In this article, you will learn:
+This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this article, you will learn:
* How to build a basic JavaServer Pages (JSP) application in Eclipse. * How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos).
This Java application tutorial shows you how to create a web-based task-manageme
Before you begin this application development tutorial, you must have the following:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Sql Api Nodejs Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-get-started.md
> * [Node.js](sql-api-nodejs-get-started.md) >
-As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them.
+As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This tutorial shows you how to build a Node.js console application to create Azure Cosmos DB resources and query them. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
In this tutorial, you will:
In this tutorial, you will:
Make sure you have the following resources:
-* An active Azure account. If you don't have one, you can sign up for a [Free Azure Trial](https://azure.microsoft.com/pricing/free-trial/).
+* An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
tags: billing
Previously updated : 07/18/2022 Last updated : 08/01/2022
As you begin to plan your product transfer, consider the information needed to a
- Previous Azure offer in CSP - New Azure offer in CSP, also referred to as Azure Plan with a Microsoft Partner Agreement (MPA) - Enterprise Agreement (EA)
- - Microsoft Customer Agreement (MCA) in the Enterprise motion when you buy Azure services through a Microsoft representative and individual MCA when you buy Azure services through Azure.com
+ - Microsoft Customer Agreement (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. Also called an MCA enterprise agreement.
+ - Microsoft Customer Agreement (MCA) that you bought through the Azure website. Also called an MCA individual agreement.
- Others like MSDN, BizSpark, EOPEN, Azure Pass, and Free Trial - Do you have the required permissions on the product to accomplish a transfer? Specific permission needed for each transfer type is listed in the following product transfer support table. - Only the billing administrator of an account can transfer subscription ownership.
cost-management-billing Troubleshoot Reservation Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/troubleshoot-reservation-utilization.md
As usage data arrives, the value changes toward the correct percentage. When all
If you find that your utilization values don't match your expectations, review the graph to get the most accurate view of your actual utilization. Any point value older than two days should be accurate. Longer term averages from seven to 30 days should be accurate. ## Other common scenarios
+- If the reservation status is "No Benefit", a warning message appears. To resolve it, follow the recommendations presented on the reservation's page.
- You may have stopped running resource A and started running resource B which is not applicable for the reservation you purchased for. To solve this, you may need to exchange the reservation to match it to the right resource. - You may have moved a resource from one subscription or resource group to another, whereas the scope of the reservation is different from where the resource is being moved to. To resolve this case, you may need to change the scope of the reservation. - You may have purchased another reservation that also applied a benefit to the same scope, and as a result, less of an existing reserved instance applied a benefit. To solve this, you may need to exchange/refund one of the reservations.
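The scenarios above all reduce to the same arithmetic: utilization is the share of purchased reservation hours that actually matched running resources. The sketch below is illustrative only (hypothetical numbers and a hypothetical helper, not the service's internal computation):

```python
# Hedged sketch: reservation utilization as applied hours / purchased hours.
# The figures and the helper name are illustrative, not Azure's actual logic.

def utilization_percent(applied_hours: float, reserved_hours: float) -> float:
    """Percentage of purchased reservation hours that matched running resources."""
    if reserved_hours <= 0:
        raise ValueError("reserved_hours must be positive")
    return round(100.0 * applied_hours / reserved_hours, 2)

# A quantity-1 reservation covers 24 hours/day. If the matching resource ran
# only 18 of those hours (e.g., resource A was stopped for 6 hours), the day's
# utilization is 75%, which is what the graph's per-day points reflect.
day_utilization = utilization_percent(applied_hours=18, reserved_hours=24)
print(day_utilization)
```

Exchanging the reservation, rescoping it, or refunding an overlapping one all work by raising the applied-hours side of this ratio.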
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Amazon Marketplace Web Service connector is supported for the following activities:
+This Amazon Marketplace Web Service connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Amazon Marketplace Web Service to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+ For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-factory Connector Asana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md
This article outlines how to use Data Flow to transform data in Asana (Preview).
## Supported capabilities
-This Asana connector is supported for the following activities:
+This Asana connector is supported for the following capabilities:
-- [Mapping data flow](concepts-data-flow-overview.md)
+| Supported capabilities|IR |
+|---|---|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
## Create an Asana linked service using UI
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Concur connector is supported for the following activities:
+This Concur connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Concur to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
> [!NOTE] > Partner account is currently not supported.
data-factory Connector Dataworld https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md
This article outlines how to use Data Flow to transform data in data.world (Prev
## Supported capabilities
-This data.world connector is supported for the following activities:
+This data.world connector is supported for the following capabilities:
-- [Mapping data flow](concepts-data-flow-overview.md)
+| Supported capabilities|IR |
+|---|---|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
## Create a data.world linked service using UI
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md
This article outlines how to use Copy Activity in Azure Data Factory and Synapse
## Supported capabilities
-This Dynamics AX connector is supported for the following activities:
+This Dynamics AX connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Dynamics AX to any supported sink data store. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources and sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
Specifically, this Dynamics AX connector supports copying data from Dynamics AX using **OData protocol** with **Service Principal authentication**.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
This article outlines how to use a copy activity in Azure Data Factory or Synaps
This connector is supported for the following activities: -- [Copy activity](copy-activity-overview.md) with [supported source and sink matrix](copy-activity-overview.md)-- [Mapping data flow](concepts-data-flow-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Dynamics 365 (Microsoft Dataverse) or Dynamics CRM to any supported sink data store. You also can copy data from any supported source data store to Dynamics 365 (Microsoft Dataverse) or Dynamics CRM. For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+For a list of data stores that a copy activity supports as sources and sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
>[!NOTE] >Effective November 2020, Common Data Service has been renamed to [Microsoft Dataverse](/powerapps/maker/data-platform/data-platform-intro). This article is updated to reflect the latest terminology.
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Google AdWords connector is supported for the following activities:
+This Google AdWords connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|---|---|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-You can copy data from Google AdWords to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 07/12/2022 Last updated : 08/01/2022 # Overview of Microsoft Defender for Containers
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
### View vulnerabilities for running images
-The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry, will appear under the Not applicable tab.
+The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender agent. Images that are deployed from a non-ACR registry, will appear under the Not applicable tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png"::: ## Run-time protection for Kubernetes nodes and clusters
-Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 07/27/2022 Last updated : 08/01/2022
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VM, VMSS | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support ingesting data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the Virtual networks access configurations to **No**.
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+ ### [**AWS (EKS)**](#tab/aws-eks) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
The **tabs** below show the features that are available, by environment, for Mic
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support ingesting data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the Virtual networks access configurations to **No**.
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+ ### [**GCP (GKE)**](#tab/gcp-gke) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
Ensure your Kubernetes node is running on one of the verified supported operatin
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support ingesting data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the Virtual networks access configurations to **No**.
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
+ ### [**On-prem/IaaS (Arc)**](#tab/iaas-arc) | Domain | Feature | Supported Resources | Linux release state <sup>[1](#footnote1)</sup> | Windows release state <sup>[1](#footnote1)</sup> | Agentless/Agent-based | Pricing tier |
Ensure your Kubernetes node is running on one of the verified supported operatin
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Outbound proxy support
-
-Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
- ### Supported host operating systems Defender for Containers relies on the **Defender extension** for several features. The Defender extension is supported on the following host operating systems:
Defender for Containers relies on the **Defender extension** for several feature
Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems will only get partial coverage. Check out the [Supported features by environment](#supported-features-by-environment) for more information.
+### Network restrictions
+
+#### Private link
+
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support ingesting data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workspace except those configured to send traffic through Azure Monitor Private Link. To do so, navigate to **`your workspace`** > **Network Isolation** and set the Virtual networks access configurations to **No**.
+Allowing data ingestion to occur only through Private Link Scope in your workspace's Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
+
+Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
+
+#### Outbound proxy support
+
+Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
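The difference between the two supported proxy modes above is only whether credentials are embedded in the proxy URL. The sketch below illustrates that shape with Python's standard library (hypothetical host, port, and credentials; this is not how the Defender extension itself is configured):

```python
# Hedged sketch: the two supported outbound proxy shapes as proxy URLs.
# Host, port, and credentials are hypothetical placeholders.
from urllib.parse import urlsplit

no_auth_proxy = "http://proxy.internal.example:3128"  # proxy without authentication
basic_auth_proxy = "http://svc-user:s3cret@proxy.internal.example:3128"  # basic authentication

# The credentials travel inside the URL's authority component.
parts = urlsplit(basic_auth_proxy)
print(parts.username, parts.hostname, parts.port)  # svc-user proxy.internal.example 3128
```

A proxy that instead terminates TLS and presents its own trusted certificate is a different mechanism entirely, which is why it falls outside the two supported modes.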
+ ## Next steps
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a Defender for IoT plan for OT networks to a
:::image type="content" source="media/how-to-manage-subscriptions/onboard-plan-2.png" alt-text="Screenshot of adding a plan for OT networks to your subscription.":::
-1. Select **I accept the terms** option, and then select **Save**.
+1. Select the **I accept the terms** option, and then select **Save**.
Your OT networks plan will be shown under the associated subscription in the **Plans** grid.
Continue with one of the following tutorials, depending on whether you're settin
For more information, see: - [Welcome to Microsoft Defender for IoT for organizations](overview.md)-- [Microsoft Defender for IoT architecture](architecture.md)
+- [Microsoft Defender for IoT architecture](architecture.md)
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
The following table lists available integrations for Microsoft Defender for IoT,
|**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) | | **Splunk** | Send Defender for IoT alerts to Splunk | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) | |**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
+|**Skybox** | Import vulnerability occurrence data discovered by Defender for IoT in your Skybox platform. | [Skybox documentation](https://docs.skyboxsecurity.com) <br><br> [Skybox integration page](https://www.skyboxsecurity.com/products/integrations) |
## Next steps
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates | |||
-|**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
+|**Enterprise IoT networks** | - [Enterprise IoT and Defender for Endpoint integration in GA](#enterprise-iot-and-defender-for-endpoint-integration-in-ga) |
|**OT networks** |**Sensor software version 22.2.4**: <br>- [Device inventory enhancements](#device-inventory-enhancements)<br>- [Enhancements for the ServiceNow integration API](#enhancements-for-the-servicenow-integration-api)<br><br>**Sensor software version 22.2.3**:<br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br>- [Sensor names shown in browser tabs](#sensor-names-shown-in-browser-tabs)<br><br>**Sensor software version 22.1.7**: <br>- [Same passwords for *cyberx_host* and *cyberx* users](#same-passwords-for-cyberx_host-and-cyberx-users) <br><br>**To update to version 22.2.x**:<br>- **From version 22.1.x**, update directly to the latest **22.2.x** version<br>- **From version 10.x**, first update to the latest **22.1.x** version, and then update again to the latest **22.2.x** version <br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). | |**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) |
-### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
+### Enterprise IoT and Defender for Endpoint integration in GA
-Defender for IoT's new purchase experience and the Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
+The Enterprise IoT integration with Microsoft Defender for Endpoint is now in General Availability (GA). With this update, we've made the following updates and improvements:
-- An updated **Plans and pricing** page with an enhanced onboarding process, as well as smooth onboarding directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+- Onboard an Enterprise IoT plan directly in Defender for Endpoint. For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
- Seamless integration with Microsoft Defender for Endpoint to view detected Enterprise IoT devices, and their related alerts, vulnerabilities, and recommendations in the Microsoft 365 Security portal. For more information, see the [Enterprise IoT tutorial](tutorial-getting-started-eiot-sensor.md) and the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration). You can continue to view detected Enterprise IoT devices on the Defender for IoT **Device inventory** page in the Azure portal.
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
The common steps to subscribe to events published by any partner, including Grap
### Enable Microsoft Graph API events to flow to your partner topic > [!IMPORTANT]
-> Microsoft Graph API's (MGA) ability to send events to Even Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and[.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my application ID">mailto:ask.graph.and.grid@microsoft.com?subject=Please allow my Azure AD application with ID to send events through Graph API</a> so that the Microsoft Graph API team can add your application ID to allow list to use this new capability.
+> Microsoft Graph API's (MGA) ability to send events to Event Grid (a generally available service) is in private preview. In the following steps, you will follow instructions from the [Node.js](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample) Webhook samples to enable the flow of events from Microsoft Graph API. At some point in the sample, you will have an application registered with Azure AD. Email your application ID to <a href="mailto:ask.graph.and.grid@service.microsoft.com?subject=Please allow my application ID">ask.graph.and.grid@service.microsoft.com</a> so that the Microsoft Graph API team can add your application ID to the allowlist to use this new capability.
You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample:
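The subscription request can be sketched locally before sending it. This is a hedged illustration only: the `EventGrid:` notification URL format and the field values below are modeled on the Graph change-notifications API for Event Grid partner topics, and the resource, dates, and IDs are placeholders, so verify the shape against the current Microsoft Graph documentation.

```python
import json

def build_graph_subscription(azure_sub_id, resource_group, partner_topic, location):
    """Assemble a Graph API subscription body that routes events to an
    Event Grid partner topic. Field values here are illustrative."""
    return {
        "changeType": "updated,deleted",
        # "EventGrid:" scheme tells Graph to deliver via Event Grid.
        "notificationUrl": (
            f"EventGrid:?azuresubscriptionid={azure_sub_id}"
            f"&resourcegroup={resource_group}"
            f"&partnertopic={partner_topic}"
            f"&location={location}"
        ),
        "resource": "users",                          # Graph resource to watch
        "expirationDateTime": "2022-08-28T12:00:00Z",  # placeholder expiry
        "clientState": "secretClientValue",            # opaque validation token
    }

body = build_graph_subscription("aaaa-bbbb", "my-rg", "my-partner-topic", "eastus")
print(json.dumps(body, indent=2))
```

You would POST this body to the Graph `/subscriptions` endpoint once your application ID has been allowed.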
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
external-attack-surface-management Deploying The Defender Easm Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md
+
+ Title: Deploying the Defender EASM Azure resource
+description: This article explains how to deploy the Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure Portal.
+++ Last updated : 07/14/2022+++
+# Deploying the Defender EASM Azure resource
+
+This article explains how to deploy the Microsoft Defender External Attack Surface Management (Defender EASM) Azure resource using the Azure portal.
+
+Deploying the EASM Azure resource involves two steps:
+
+- Create a resource group
+- Deploy the EASM resource to the resource group
+
+## Prerequisites
+
+Before you create a Defender EASM resource group, we recommend that you familiarize yourself with the [Azure portal](https://ms.portal.azure.com/) and read the [Defender EASM Overview article](index.md) for key context on the product. You will need:
+
+- A valid Azure subscription or free Defender EASM trial account. If you don't have an [Azure subscription](/azure/guides/developer/azure-developer-guide#understanding-accounts-subscriptions-and-billing), create a free Azure account before you begin.
+
+- Your Azure account must have a contributor role assigned for you to create a resource. To get this role assigned to your account, follow the steps in the [Assign roles](/azure/role-based-access-control/role-assignments-steps) documentation, or contact your administrator.
+
+## Create a resource group
+
+1. To create a new resource group, first select **Resource groups** in the Azure portal.
+
+ ![Screenshot of resource groups pane highlighted from Azure home page](media/QuickStart-1.png)
+
+2. Under Resource Groups, select **Create**:
+
+    ![Screenshot of "create resource" highlighted in resource group list view](media/QuickStart-2.png)
+
+3. Select or enter the following property values:
+
+ - **Subscription**: Select an Azure subscription.
+ - **Resource Group**: Give the resource group a name.
+ - **Region**: Specify an Azure location. This location is where the resource group stores metadata about the resource. For compliance reasons, you may want to specify where that metadata is stored. In general, we recommend that you specify a location where most of your resources will be. Using the same location can simplify your template.
+
+ ![Screenshot of create resource group basics tab](media/QuickStart-3.png)
+
+4. Select **Review + Create**.
+
+5. Review the values, and then select **Create**.
+
+6. Select **Refresh** to view the new resource group in the list.
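The portal steps above correspond to a single call against the Azure Resource Manager REST API. As a rough sketch (the subscription ID and names below are placeholders; the api-version is the one documented for the resource-group API), the request can be assembled like this:

```python
def resource_group_request(subscription_id, name, location):
    """Build (method, url, body) for creating a resource group via the
    Azure Resource Manager REST API. Nothing is sent here; this only
    shows the shape of the call the portal makes on your behalf."""
    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/resourcegroups/{name}?api-version=2021-04-01"
    )
    return "PUT", url, {"location": location}

method, url, body = resource_group_request(
    "00000000-0000-0000-0000-000000000000", "easm-rg", "eastus"
)
print(method, url, body)
```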
+
+## Deploy resources to a resource group
+
+After you create a resource group, you can deploy resources to the group from the Marketplace. The Marketplace provides all services and pre-defined solutions available in Azure.
+
+1. To start a deployment, select **Create a resource** in the Azure portal.
+
+    ![Screenshot of "create resource" option highlighted from Azure home page](media/QuickStart-4.png)
+
+2. In the search box, type **Microsoft Defender EASM**, and then press Enter.
+
+3. Select the **Create** button to create an EASM resource.
+
+    ![Screenshot of "create" button highlighted from Defender EASM list view](media/QuickStart-5.png)
+
+4. Select or enter the following property values:
+
+ - **Subscription**: Select an Azure subscription.
+ - **Resource Group**: Select the Resource Group created in the earlier step, or you can create a new one as part of the process of creating this resource.
+ - **Name**: give the Defender EASM workspace a name.
+ - **Region**: Select an Azure location.
+
+ ![Screenshot of create EASM resource basics tab](media/QuickStart-6.png)
+
+5. Select **Review + Create**.
+
+6. Review the values, and then select **Create**.
+
+7. Select **Refresh** to see the status of the deployment. Once the deployment is complete, you can go to the resource to get started.
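Like the resource group, the EASM workspace itself is created through a Resource Manager call. The sketch below is a guess at that call's shape: the `Microsoft.Easm/workspaces` resource type and the api-version are assumptions, not confirmed values, so check the Defender EASM ARM reference before relying on them.

```python
def easm_workspace_request(subscription_id, resource_group, name, location):
    """Assemble the (method, url, body) for a hypothetical ARM call that
    creates a Defender EASM workspace. Resource type and api-version are
    illustrative assumptions."""
    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Easm/workspaces/{name}"
        f"?api-version=2023-04-01-preview"
    )
    return "PUT", url, {"location": location}

print(easm_workspace_request(
    "00000000-0000-0000-0000-000000000000", "easm-rg", "contoso-easm", "eastus"
))
```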
+
+## Next steps
+
+- [Using and managing discovery](using-and-managing-discovery.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management Discovering Your Attack Surface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/discovering-your-attack-surface.md
+
+ Title: Discovering your attack surface
+description: Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets.
+++ Last updated : 07/14/2022+++
+# Discovering your attack surface
+
+## Prerequisites
+
+Before completing this tutorial, see the [What is discovery?](what-is-discovery.md) and [Using and managing discovery](using-and-managing-discovery.md) articles to understand key concepts mentioned in this article.
+
+## Accessing your automated attack surface
+
+Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets. It is recommended that all users search for their organization's attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to their Attack Surface.
+
+1. When first accessing your Defender EASM instance, select **Getting Started** in the **General** section to search for your organization in the list of automated attack surfaces.
+
+2. Then select your organization from the list and select **Build my Attack Surface**.
+
+![Screenshot of pre-configured attack surface option](media/Tutorial-1.png)
+
+At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. Read the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+
+If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
+
+## Customizing discovery
+Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. By submitting a larger list of known assets to operate as discovery seeds, the discovery engine will return a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
+
+## Discovery groups
+Custom discoveries are organized into Discovery Groups. They are independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
+
+## Creating a discovery group
+
+1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
+
+ ![Screenshot of EASM instance from overview page with manage section highlighted](media/Tutorial-2.png)
+
+2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, select **Add Discovery Group**.
+
+    ![Screenshot of Discovery screen with "add discovery group" highlighted](media/Tutorial-3.png)
+
+3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs.
+
+ Select **Next: Seeds >**
+
+    ![Screenshot of first page of discovery group setup](media/Tutorial-4.png)
+
+4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
+
+    ![Screenshot of seed selection page of discovery group setup](media/Tutorial-5.png)
+
+ The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
+
+ ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Tutorial-6.png)
+
+    ![Screenshot of pre-baked attack surface selection page](media/Tutorial-7.png)
+
+    Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. This is useful, for example, for organizations with subsidiaries that are likely connected to their central infrastructure but don't belong to the organization itself.
+
+ Once your seeds have been selected, select **Review + Create**.
+
+5. Review your group information and seed list, then select **Create & Run**.
+
+ ![Screenshot of review + create screen](media/Tutorial-8.png)
+
+You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
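Since seeds can take several forms (domains, IP blocks, hosts, email contacts, ASNs, certificate common names, WhoIs organizations), a small local pre-check can help sort a list of candidate seed values before entering them. This sketch is illustrative only; the kind names below are hypothetical labels, not Defender EASM API identifiers.

```python
import ipaddress
import re

def classify_seed(value):
    """Roughly classify a manually entered seed value into one of the
    seed kinds Defender EASM accepts (labels here are illustrative)."""
    try:
        # Accepts both single addresses and CIDR blocks.
        ipaddress.ip_network(value, strict=False)
        return "ipBlock"
    except ValueError:
        pass
    if re.fullmatch(r"AS\d+", value, re.IGNORECASE):
        return "asn"
    if "@" in value:
        return "emailContact"
    if re.fullmatch(r"[a-z0-9.-]+\.[a-z]{2,}", value, re.IGNORECASE):
        return "domainOrHost"
    return "organizationOrCommonName"

for seed in ["contoso.com", "203.0.113.0/24", "AS65000", "admin@contoso.com"]:
    print(seed, "->", classify_seed(seed))
```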
+
+## Next steps
+- [Understanding asset details](understanding-asset-details.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
+
+ Title: Overview
+description: Microsoft Defender External Attack Surface Management (Defender EASM) continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure.
+++ Last updated : 07/14/2022+++
+# Defender EASM Overview
+
+*Microsoft Defender External Attack Surface Management (Defender EASM)* continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
+Defender EASM leverages Microsoft's crawling technology to discover assets that are related to your known online infrastructure, and actively scans these assets to discover new connections over time. Attack Surface Insights are generated by leveraging vulnerability and infrastructure data to showcase the key areas of concern for your organization.
+
+![Screenshot of Overview Dashboard](media/Overview-1.png)
+
+## Discovery and inventory
+
+Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets to make inferences about that infrastructure's relationship to the organization and uncover previously unknown and unmonitored properties. These known legitimate assets are called discovery "seeds"; Defender EASM first discovers strong connections to these selected entities, recursing to unveil more connections and ultimately compile your Attack Surface.
+
+Defender EASM includes the discovery of the following kinds of assets:
+
+- Domains
+- Hostnames
+- Web Pages
+- IP Blocks
+- IP Addresses
+- ASNs
+- SSL Certificates
+- WHOIS Contacts
+
+![Screenshot of Discovery View](media/Overview-2.png)
+
+Discovered assets are indexed and classified in your Defender EASM Inventory, providing a dynamic record of all web infrastructure under the organization's management. Assets are categorized as recent (currently active) or historic, and can include web applications, third party dependencies, and other asset connections.
+
+## Dashboards
+
+Defender EASM provides a series of dashboards that help users quickly understand their online infrastructure and any key risks to their organization. These dashboards are designed to provide insight on specific areas of risk, including vulnerabilities, compliance, and security hygiene. These insights help customers quickly address the components of their attack surface that pose the greatest risk to their organization.
+
+![Screenshot of Dashboard View](media/Overview-3.png)
+
+## Managing assets
+
+Customers can filter their inventory to surface the specific insights they care about most. Filtering offers a level of flexibility and customization that enables users to access a specific subset of assets. This allows you to leverage Defender EASM data according to your specific use case, whether searching for assets that connect to deprecated infrastructure or identifying new cloud resources.
+
+![Screenshot of Inventory View](media/Overview-4.png)
+
+## Data residency, availability and privacy
+
+Microsoft Defender External Attack Surface Management contains both global data and customer-specific data. The underlying internet data is global Microsoft data; labels applied by customers are considered customer data. All customer data is stored in the region of the customer's choosing.
+
+For security purposes, Microsoft collects users' IP addresses when they log in. This data is stored for up to 30 days but may be stored longer if needed to investigate potential fraudulent or malicious use of the product.
+
+In the case of a region-down scenario, customers should see no downtime, as Defender EASM uses technologies that replicate data to a backup region.
+
+Defender EASM processes customer data. By default, customer data is replicated to the paired region.
+
+## Next Steps
+
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Understanding inventory assets](understanding-inventory-assets.md)
+- [What is discovery?](what-is-discovery.md)
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
+
+ Title: Understanding asset details
+description: Understanding asset details - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# Understanding asset details
+
+## Overview
+
+Defender EASM frequently scans all inventory assets, collecting robust contextual metadata that powers Attack Surface Insights and can also be viewed more granularly on the Asset Details page. The provided data changes depending on the asset type. For instance, the platform provides unique WHOIS data for domains, hosts, and IP addresses, and signature algorithm data for SSL certificates.
+
+This article provides guidance on how to view and interpret the expansive data collected by Microsoft for each of your inventory assets. It defines this metadata for each asset type and explains how the insights derived from it can help you manage the security posture of your online infrastructure.
+
+*For more information, see [understanding inventory assets](understanding-inventory-assets.md) to familiarize yourself with the key concepts mentioned in this article.*
+
+## Asset details summary view
+
+You can view the Asset Details page for any asset by clicking on its name from your inventory list. On the left pane of this page, you can view an asset summary that provides key information about that particular asset. This section is primarily composed of data that applies to all asset types, although additional fields will be available in some cases. See the chart below for more information on the metadata provided for each asset type in the summary section.
+
+![Screenshot of asset details, left-hand summary pane highlighted](media/Inventory_1.png)
+
+### General information
+
+This section is comprised of high-level information that is key to understanding your assets at a glance. Most of these fields are applicable to all assets, although this section can also include information that is specific to one or more asset types.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Asset Name | The name of an asset. | All |
+| UUID | This 128-bit label represents the universally unique identifier (UUID) for the asset. | All |
+| Status | The status of the asset within the RiskIQ system. Options include Approved Inventory, Candidate, Dependencies, or Requires Investigation. | All |
+| First seen | This column displays the date that the asset was first observed by crawling infrastructure. | All |
+| Last seen | This column displays the date that the asset was last observed by crawling infrastructure. | All |
+| Discovered on | The date that the asset was found in a discovery run scanning for assets related to an organizationΓÇÖs known infrastructure. | All |
+| Last updated | This column displays the date that the asset was last updated in the system after new data was found in a scan. | All |
+| Country | The country of origin detected for this asset. | All |
+| State/Province | The state or province of origin detected for this asset. | All |
+| City | The city of origin detected for this asset. | All |
+| WhoIs name | The name listed in a WhoIs record. | Host |
+| WhoIs email | The primary contact email in a WhoIs record. | Host |
+| WhoIS organization | The listed organization in a WhoIs record. | Host |
+| WhoIs registrar | The listed registrar in a WhoIs record. | Host |
+| WhoIs name servers | The listed name servers in a WhoIs record. | Host |
+| Certificate issued | The date when a certificate was issued. | SSL certificate |
+| Certificate expires | The date when a certificate will expire. | SSL certificate |
+| Serial number | The serial number associated with an SSL certificate. | SSL certificate |
+| SSL version | The version of SSL that the certificate was registered to. | SSL certificate |
+| Certificate key algorithm | The key algorithm used to encrypt the SSL certificate. | SSL certificate |
+| Certificate key size | The number of bits within an SSL certificate key. | SSL certificate |
+| Signature algorithm oid | The OID identifying the hash algorithm used to sign the certificate request. | SSL certificate |
+| Self-signed | Indicates whether the SSL certificate was self-signed.| SSL certificate |
+
+### Network
+
+IP address information that provides additional context about the usage of the IP.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Name server record | Any name servers detected on the asset. | IP address |
+| Mail server record | Any mail servers detected on the asset. | IP address |
+| IP Blocks | The IP block that contains the IP address asset. | IP address |
+| ASNs | The ASN associated with an asset. | IP address |
+
+### Block info
+
+Data specific to IP blocks that provides contextual information about its use.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| CIDR | The Classless Inter-Domain Routing (CIDR) for an IP Block. | IP block |
+| Network name | The network name associated to the IP block. | IP block |
+| Organization name | The organization name found in the registration information for the IP block. | IP block |
+| Org ID | The organization ID found in the registration information for the IP block. | IP block |
+| ASNs | The ASN associated with the IP block. | IP block |
+| Country | The country of origin as detected in the WhoIs registration information for the IP block. | IP block |
+
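The CIDR and ASN fields above describe how individual IP address assets roll up into IP blocks. The relationship can be checked locally with Python's standard `ipaddress` module (the example addresses below come from the TEST-NET-3 documentation range, not from any real inventory):

```python
import ipaddress

# An IP block asset, as described by its CIDR, and an IP address asset.
block = ipaddress.ip_network("203.0.113.0/24")
addr = ipaddress.ip_address("203.0.113.7")

print(addr in block)        # prints True: the address sits inside the block
print(block.num_addresses)  # prints 256: the size of a /24 block
```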
+### Subject
+
+Data specific to the subject (that is, the protected entity) associated with an SSL certificate.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Common name | The common name of the subject of the SSL certificate. | SSL certificate |
+| Alternate names | Any alternative common names for the subject of the SSL certificate.| SSL certificate |
+| Organization name | The organization linked to the subject of the SSL certificate. | SSL certificate |
+| Organization unit | Optional metadata that indicates the department within an organization that is responsible for the certificate. | SSL certificate |
+| Locality | Denotes the city where the organization is located. | SSL certificate |
+| Country | Denotes the country where the organization is located. | SSL certificate |
+| State/Province | Denotes the state or province where the organization is located. | SSL certificate |
+
+### Issuer
+
+Data specific to the issuer of an SSL Certificate.
+
+| Name | Definition | Asset Types |
+|--|--|--|
+| Common name | The common name of the issuer of the certificate. | SSL certificate |
+| Alternate names | Any additional names of the issuer. | SSL certificate |
+| Organization name | The name of the organization that orchestrated the issue of a certificate. | SSL certificate |
+| Organization unit | Additional information about the organization issuing the certificate. | SSL certificate |
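
The subject and issuer fields above also explain the **Self-signed** field in the General information table: a certificate is self-signed when its subject and issuer are the same entity. A minimal sketch (the dictionary field names are illustrative, not Defender EASM API fields):

```python
def is_self_signed(cert):
    """A certificate is self-signed when its issuer matches its subject."""
    return cert["subject"] == cert["issuer"]

cert = {
    "subject": {"commonName": "internal.contoso.com"},
    "issuer": {"commonName": "internal.contoso.com"},
}
print(is_self_signed(cert))  # prints True
```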
+
+## Data tabs
+
+In the right-hand pane of the Asset Details page, users can access more expansive data related to the selected asset. This data is organized in a series of categorized tabs. The available metadata tabs will change depending on the type of asset you're viewing.
+
+### Overview
+
+The Overview tab provides key additional context to ensure that significant insights are quickly identifiable when viewing the details of an asset. This section will include key discovery data for all asset types, providing insight about how Microsoft maps the asset to your known infrastructure. This section can also include dashboard widgets that visualize insights that are particularly relevant to the asset type in question.
+
+![Screenshot of asset details, right-hand overview pane highlighted](media/Inventory_2.png)
+
+### Discovery chain
+
+The discovery chain outlines the observed connections between a discovery seed and the asset. This information helps users visualize these connections and better understand why an asset was determined to belong to their organization.
+
+In the example below, we see that the seed domain is tied to this asset through the contact email in its WhoIs record. That same contact email was used to register the IP block that includes this particular IP address asset.
+
+![Screenshot of discovery chain](media/Inventory_3.png)
+
+### Discovery information
+
+This section provides information about the process used to detect the asset. It includes information about the discovery seed that connects to the asset, as well as the approval process. Options include "Approved Inventory", which indicates that the relationship between the seed and the discovered asset was strong enough to warrant automatic approval by the Defender EASM system. Otherwise, the process will be listed as "Candidate", indicating that the asset required manual approval to be incorporated into your inventory. This section also provides the date that the asset was added to your inventory, as well as the date that it was last scanned in a discovery run.
+
+### IP reputation
+
+The IP reputation tab displays a list of potential threats related to a given IP address. This section outlines any detected malicious or suspicious activity that relates to the IP address. This is key to understanding the trustworthiness of your own attack surface; these threats can help organizations uncover past or present vulnerabilities in their infrastructure.
+
+Defender EASM's IP reputation data displays instances when the IP address was detected on a threat list. For instance, the recent detection in the example below shows that the IP address relates to a host known to be running a cryptocurrency miner. This data was derived from a suspicious host list supplied by CoinBlockers. Results are organized by the "last seen" date, surfacing the most relevant detections first. In this example, the IP address is present on an abnormally high number of threat feeds, indicating that the asset should be thoroughly investigated to prevent malicious activity in the future.
+
+![Screenshot of asset details, IP reputation tab](media/Inventory_4.png)
+
+### Services
+
+The "Services" tab is available for IP address, domain and host assets. This section provides information on services observed to be running on the asset, and includes IP addresses, name and mail servers, and open ports that correspond with additional types of infrastructure (e.g. remote access services). Defender EASM's Services data is key to understanding the infrastructure powering your asset. It can also alert you of resources that are exposed on the open internet that should be protected.
+
+![Screenshot of asset details, services tab](media/Inventory_5.png)
+
+### IP Addresses
+
+This section provides insight on any IP addresses that are running on the asset's infrastructure. On the Services tab, Defender EASM provides the name of the IP address, the first and last seen dates, and a recency column which indicates whether the IP address was observed during our most recent scan of the asset. If there is no checkbox in this column, the IP address has been seen in prior scans but is not currently running on the asset.
+
+![Screenshot of asset details, IP address section of services tab](media/Inventory_6.png)
+
+### Mail Servers
+
+This section provides a list of any mail servers running on the asset, indicating that the asset is capable of sending emails. In this section, Defender EASM provides the name of the mail server, the first and last seen dates, and a recency column that indicates whether the mail server was detected during our most recent scan of the asset.
+
+![Screenshot of asset details, mail server section of services tab](media/Inventory_7.png)
+
+### Name Servers
+
+This section displays any name servers running on the asset, providing resolution for a host. In this section, we provide the name of the name server, the first and last seen dates, and a recency column that indicates whether the name server was detected during our most recent scan of the asset.
+
+![Screenshot of asset details, name server section of services tab](media/Inventory_8.png)
+
+### Open Ports
+
+This section lists any open ports detected on the asset. Microsoft scans around 230 distinct ports on a regular basis. This data is useful to identify any unsecured services that shouldn't be accessible from the open internet, including databases, IoT devices, and network services like routers and switches. It's also helpful in identifying shadow IT infrastructure or insecure remote access services.
+
+In this section, Defender EASM provides the open port number, a description of the port, the last state it was observed in, the first and last seen dates, and a recency column that indicates whether the port was observed as open during Microsoft's most recent scan.
+
+![Screenshot of asset details, open ports section of services tab](media/Inventory_9.png)
+
+### Trackers
+
+Trackers are unique codes or values found within web pages and often are used to track user interaction. These codes can be used to correlate a disparate group of websites to a central entity. Microsoft's tracker dataset includes IDs from providers like Google, Yandex, Mixpanel, New Relic, and Clicky, and continues to grow on a regular basis.
+
+In this section, Defender EASM provides the tracker type (e.g. GoogleAnalyticsID), the unique identifier value, and the first and last seen dates.
+
+### Web components & CVEs
+
+Web components are details describing the infrastructure of an asset as observed through a Microsoft scan. These components provide a high-level understanding of the technologies leveraged on the asset. Microsoft categorizes the specific components and includes version numbers when possible.
+
+![Screenshot of top of Web components & CVEs tab](media/Inventory_10.png)
+
+The Web components section provides the category, name and version of the component, as well as a list of any applicable CVEs that should be remediated. Defender EASM also provides a first and last seen date as well as a recency indicator; a checked box indicates that this infrastructure was observed during our most recent scan of the asset.
+
+Web components are categorized based on their function. Options include:
+
+| Web Component | Examples |
+|--|--|
+| Hosting Provider | hostingprovider.com |
+| Server | Apache |
+| DNS Server | ISC BIND |
+| Data stores | MySQL, ElasticSearch, MongoDB |
+| Remote access | OpenSSH, Microsoft Admin Center, Netscaler Gateway |
+| Data Exchange | Pure-FTPd |
+| Internet of things (IoT) | HP Deskjet, Linksys Camera, Sonos |
+| Email server | ArmorX, Lotus Domino, Symantec Messaging Gateway |
+| Network device | Cisco Router, Motorola WAP, ZyXEL Modem |
+| Building control | Linear eMerge, ASI Controls Weblink, Optergy |
+
+Below the Web components section, users can view a list of all CVEs applicable to the list of web components. This provides a more granular view of the CVEs themselves, along with a CVSS score indicating the level of risk each poses to your organization.
+
+![Screenshot of CVEs section of tab](media/Inventory_11.png)
+
+### Resources
+
+The Resources tab provides insight on any JavaScript resources running on any page or host assets. When applicable to a host, these resources are aggregated to represent the JavaScript running on all pages on that host. This section provides an inventory of the JavaScript detected on each asset so that your organization has full visibility into these resources and can detect any changes. Defender EASM provides the resource URL and host, MD5 value, and first and last seen dates to help organizations effectively monitor the use of JavaScript resources across their inventory.
+
+![Screenshot of resources tab](media/Inventory_12.png)
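The change detection described above reduces to comparing content hashes between scans. A minimal sketch in Python (illustrative only; `fingerprint` is a hypothetical helper, not a Defender EASM API):

```python
import hashlib

def fingerprint(resource_body: bytes) -> str:
    """Return the MD5 hex digest used as a fingerprint for a script body."""
    return hashlib.md5(resource_body).hexdigest()

# Comparing digests across scans flags a modified (possibly tampered) resource.
baseline = fingerprint(b"console.log('telemetry v1');")
latest = fingerprint(b"console.log('telemetry v2');")
changed = baseline != latest  # True whenever the script body differs
```

MD5 here serves only as a change indicator, as in the inventory's MD5 column; it provides no security guarantee on its own.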
+
+### SSL certificates
+
+Certificates are used to secure communications between a browser and a web server via Secure Sockets Layer (SSL). This ensures that sensitive data in transit cannot be read, tampered with, or forged. This section of Defender EASM lists any SSL certificates detected on the asset, including key data like the issue and expiry dates.
+
+![Screenshot of SSL certificates tab](media/Inventory_13.png)
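A quick way to act on the expiry dates surfaced here is to compare a certificate's `notAfter` timestamp against the current time. A sketch using only the Python standard library (`is_expired` is a hypothetical helper; the date format shown is OpenSSL's text form, which may differ from how Defender EASM exports the field):

```python
import ssl
import time

def is_expired(not_after, now=None):
    """not_after is an OpenSSL-style timestamp, e.g. 'Jun 09 12:00:00 2025 GMT'."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry < (time.time() if now is None else now)

is_expired("Jan 01 00:00:00 2020 GMT")  # an already-expired certificate
is_expired("Jan 01 00:00:00 2099 GMT")  # still valid
```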
+
+### WhoIs
+
+WhoIs is a protocol that is leveraged to query and respond to the databases that store data related to the registration and ownership of Internet resources. WhoIs contains key registration data that can apply to domains, hosts, IP addresses and IP blocks in Defender EASM. In the WhoIs data tab, Microsoft provides a robust amount of information associated with the registry of the asset.
+
+![Screenshot of WhoIs values tab](media/Inventory_14.png)
+
+Fields include:
+
+| Field | Description |
+|--|--|
+| WhoIs server | A server set up by an ICANN-accredited registrar to acquire up-to-date information about entities that are registered with it. |
+| Registrar | The company whose service was used to register an asset. Popular registrars include GoDaddy, Namecheap, and HostGator. |
+| Domain status | Any status for a domain as set by the registry. These statuses can indicate that a domain is pending delete or transfer by the registrar or is simply active on the internet. This field can also denote the limitations of an asset; in the below example, "client delete prohibited" indicates that the registrar is unable to delete the asset. |
+| Email | Any contact email addresses provided by the registrant. WhoIs allows registrants to specify the contact type; options include administrative, technical, registrant and registrar contacts. |
+| Name | The name of a registrant, if provided. |
+| Organization | The organization responsible for the registered entity. |
+| Street | The street address for the registrant, if provided. |
+| City | The city listed in the street address for the registrant if provided. |
+| State | The state listed in the street address for the registrant if provided. |
+| Postal Code | The postal code listed in the street address for the registrant if provided. |
+| Country | The country listed in the street address for the registrant if provided. |
+| Phone | The phone number associated with a registrant contact if provided. |
+| Name Servers | Any name servers associated with the registered entity. |
+
+It's important to note that many organizations opt to obfuscate their registry information. In the example above, you can see that some of the contact email addresses end in "@anonymised.email", which is a placeholder in lieu of the real contact address. Furthermore, many of these fields are optional when configuring a registration, so any field with an empty value was not included by the registrant.
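Raw WhoIs responses are plain text, mostly `Key: Value` lines, and fields such as Domain Status or Name Server can repeat. A simplified parser sketch (real registry output varies widely, and the network query itself, a plain TCP exchange on port 43, is omitted):

```python
def parse_whois(raw: str) -> dict:
    """Collect 'Key: Value' lines from a WhoIs response into a dict of lists."""
    records = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith(("%", "#", ">>>")):
            continue  # skip comments and the trailing 'last update' marker
        key, sep, value = line.partition(":")
        if sep and value.strip():
            records.setdefault(key.strip(), []).append(value.strip())
    return records

sample = """\
Registrar: Example Registrar, LLC
Domain Status: clientDeleteProhibited
Domain Status: clientUpdateProhibited
Name Server: ns1.example.com
"""
parse_whois(sample)["Domain Status"]  # both status codes are preserved
```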
+
+## Next steps
+
+- [Understanding dashboards](understanding-dashboards.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
+
+Title: Understanding dashboards
+description: Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Attack Surface inventory.
+Last updated: 07/14/2022
+# Understanding dashboards
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Attack Surface inventory. These dashboards help organizations prioritize the vulnerabilities, risks and compliance issues that pose the greatest threat to their Attack Surface, making it easy to quickly mitigate key issues.
+
+Defender EASM provides four dashboards:
+
+- **Attack Surface Summary**: this dashboard summarizes the key observations derived from your inventory. It provides a high-level overview of your Attack Surface and the asset types that comprise it, and surfaces potential vulnerabilities by severity (high, medium, low). This dashboard also provides key context on the infrastructure that comprises your Attack Surface, providing insight into cloud hosting, sensitive services, SSL certificate and domain expiry, and IP reputation.
+- **Security Posture**: this dashboard helps organizations understand the maturity and complexity of their security program based on the metadata derived from assets in your Confirmed Inventory. It comprises technical and non-technical policies, processes and controls that mitigate the risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
+- **GDPR Compliance**: this dashboard surfaces key areas of compliance risk based on the General Data Protection Regulation (GDPR) requirements for online infrastructure that's accessible to European nations. This dashboard provides insight on the status of your websites, SSL certificate issues, exposed personally identifiable information (PII), login protocols, and cookie compliance.
+- **OWASP Top 10**: this dashboard surfaces any assets that are vulnerable according to OWASP's list of the most critical web application security risks. On this dashboard, organizations can quickly identify assets with broken access control, cryptographic failures, injections, insecure designs, security misconfigurations and other critical risks as defined by OWASP.
+
+## Accessing dashboards
+
+To access your Defender EASM dashboards, first navigate to your Defender EASM instance. In the left-hand navigation column, select the dashboard you'd like to view. You can access these dashboards from many pages in your Defender EASM instance from this navigation pane.
+
+![Screenshot of dashboard screen with dashboard navigation section highlighted](media/Dashboards-1.png)
+
+## Attack surface summary
+
+The Attack Surface summary dashboard is designed to provide a high-level summary of the composition of your Attack Surface, surfacing the key observations that should be addressed to improve your security posture. This dashboard identifies and prioritizes risks within an organization's assets by High, Medium, and Low severity and enables users to drill down into each section, accessing the list of impacted assets. Additionally, the dashboard reveals key details about your Attack Surface composition, cloud infrastructure, sensitive services, SSL and domain expiry timelines, and IP reputation.
+
+Microsoft identifies organizations' attack surfaces through proprietary technology that discovers Internet-facing assets that belong to an organization based on infrastructure connections to some set of initially known assets. Data in the dashboard is updated daily based on new observations.
+
+### Attack surface priorities
+
+At the top of this dashboard, Defender EASM provides a list of security priorities organized by severity (high, medium, low). Large organizations' attack surfaces can be incredibly broad, so prioritizing the key findings derived from our expansive data helps users quickly and efficiently address the most important exposed elements of their attack surface. These priorities can include critical CVEs, known associations to compromised infrastructure, use of deprecated technology, infrastructure best practice violations, or compliance issues.
+
+Insight Priorities are determined by Microsoft's assessment of the potential impact of each insight. For instance, high severity insights may include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low Severity Insights may include use of deprecated technology that is no longer supported, infrastructure that will soon expire, or compliance issues that do not align with security best practices. Each Insight contains suggested remediation actions to protect against potential exploits.
+
+![Screenshot of attack surface priorities with clickable options highlighted](media/Dashboards-2.png)
+
+Based on the Attack Surface Priorities chart displayed above, a user would want to first investigate the two Medium Severity Observations. You can click the top-listed observation ("Hosts with Expired SSL Certificates") to be directly routed to a list of applicable assets, or instead select "View All 91 Insights" to see a comprehensive, expandable list of all potential observations that Defender EASM categorizes as "medium severity".
+
+The Medium Severity Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
+
+![Screenshot of attack surface drilldown for medium severity priorities](media/Dashboards-3.png)
+
+This detailed view for any observation will include the title of the issue, a description, and remediation guidance from the Defender EASM team. In this example, the description explains how expired SSL certificates can lead to critical business functions becoming unavailable, preventing customers or employees from accessing web content and thus damaging your organization's brand. The Remediation section provides advice on how to swiftly fix the issue; in this example, Microsoft recommends that you review the certificates associated with the impacted host assets, update the coinciding SSL certificate(s), and update your internal procedures to ensure that SSL certificates are updated in a timely manner.
+
+Finally, the Asset section lists any entities that have been impacted by this specific security concern. In this example, a user will want to investigate the impacted assets to learn more about the expired SSL Certificate. You can click on any asset name from this list to view the Asset Details page.
+
+From the Asset Details page, we'll then click on the "SSL certificates" tab to view more information about the expired certificate. In this example, the listed certificate shows an "Expires" date in the past, indicating that the certificate is currently expired and therefore likely inactive. This section also provides the name of the SSL certificate which you can then send to the appropriate team within your organization for swift remediation.
+
+![Screenshot of impacted asset list from drilldown view showing an expired SSL certificate](media/Dashboards-4.png)
+
+### Attack surface composition
+
+The following section provides a high-level summary of the composition of your Attack Surface. This chart provides counts of each asset type, helping users understand how their infrastructure is spread across domains, hosts, pages, SSL certificates, ASNs, IP blocks, IP addresses and email contacts.
+
+![Screenshot of asset details view of same SSL certificate showing expiration highlighted](media/Dashboards-5.png)
+
+Each value is clickable, routing users to their inventory list filtered to display only assets of the designated type. From this page, you can click on any asset to view more details, or you can add additional filters to narrow down the list according to your needs.
+
+### Securing the cloud
+
+This section of the Attack Surface Summary dashboard provides insight on the cloud technologies used across your infrastructure. As most organizations adapt to the cloud gradually, the hybrid nature of your online infrastructure can be difficult to monitor and manage. Defender EASM helps organizations understand the usage of specific cloud technologies across your Attack Surface, mapping cloud host providers to your confirmed assets to inform your cloud adoption program and ensure compliance with your organization's processes.
+
+![Screenshot of cloud chart](media/Dashboards-6.png)
+
+For instance, your organization may have recently decided to migrate all cloud infrastructure to a single provider to simplify and consolidate its Attack Surface. This chart can help you identify assets that still need to be migrated. Each bar of the chart is clickable, routing users to a filtered list that displays the assets that comprise the chart value.
+
+### Sensitive services
+
+This section displays sensitive services detected on your Attack Surface that should be assessed and potentially adjusted to ensure the security of your organization. This chart highlights any services that have historically been vulnerable to attack or are common vectors of information leakage to malicious actors. Any assets in this section should be investigated, and Microsoft recommends that organizations consider alternative services with a better security posture to mitigate risk.
+
+![Screenshot of sensitive services chart](media/Dashboards-7.png)
+
+The chart is organized by the name of each service; clicking on any individual bar will return a list of assets that are running that particular service. The chart below is empty, indicating that the organization is not currently running any services that are especially susceptible to attack.
+
+### SSL and domain expirations
+
+These two expiration charts display upcoming SSL Certificate and Domain expirations, ensuring that an organization has ample visibility into upcoming renewals of key infrastructure. An expired domain can suddenly make key content inaccessible, and the domain could even be swiftly purchased by a malicious actor who intends to target your organization. An expired SSL Certificate leaves corresponding assets susceptible to attack.
+
+![Screenshot of SSL charts](media/Dashboards-8.png)
+
+Both charts are organized by the expiration timeframe, ranging from "greater than 90 days" to already expired. Microsoft recommends that organizations immediately renew any expired SSL certificates or domains, and proactively arrange the renewal of assets due to expire in 30-60 days.
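The timeframe grouping in these charts can be reproduced with simple date arithmetic. A sketch assuming bucket labels that follow the range described above (the exact labels Defender EASM displays are not specified here):

```python
from datetime import date

def expiry_bucket(expires, today):
    """Place an expiration date into a dashboard-style timeframe bucket."""
    days_left = (expires - today).days
    if days_left < 0:
        return "Expired"
    if days_left <= 30:
        return "0-30 days"
    if days_left <= 60:
        return "31-60 days"
    if days_left <= 90:
        return "61-90 days"
    return "Greater than 90 days"

expiry_bucket(date(2022, 7, 1), date(2022, 8, 1))  # "Expired"
```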
+
+### IP reputation
+
+IP reputation data helps users understand the trustworthiness of your attack surface and identify potentially compromised hosts. Microsoft develops IP reputation scores based on our proprietary data as well as IP information collected from external sources. We recommend further investigation of any IP addresses identified here, as a suspicious or malicious score associated with an owned asset indicates that the asset is susceptible to attack or has already been leveraged by malicious actors.
+
+![Screenshot of IP reputation chart](media/Dashboards-9.png)
+
+This chart is organized by the detection policy that triggered a negative reputation score. For instance, the DDOS value indicates that the IP address has been involved in a Distributed Denial-Of-Service attack. Users can click on any bar value to access a list of assets that comprise it. In the example below, the chart is empty, which indicates that all IP addresses in your inventory have satisfactory reputation scores.
+
+## Security posture dashboard
+
+The Security Posture dashboard helps organizations measure the maturity of their security program based on the status of assets in your Confirmed Inventory. It comprises technical and non-technical policies, processes and controls that mitigate the risk of external threats. This dashboard provides insight on CVE exposure, domain administration and configuration, hosting and networking, open ports, and SSL certificate configuration.
+
+![Screenshot of security posture chart](media/Dashboards-10.png)
+
+### CVE exposure
+
+The first chart in the Security Posture dashboard relates to the management of an organization's website portfolio. Microsoft analyzes website components such as frameworks, server software, and 3rd party plugins and then matches them to a current list of Common Vulnerabilities and Exposures (CVEs) to identify vulnerability risks to your organization. The web components that comprise each website are inspected daily to ensure recency and accuracy.
+
+![Screenshot of CVE exposure chart](media/Dashboards-11.png)
+
+It is recommended that users immediately address any CVE-related vulnerabilities, mitigating risk by updating your web components or following the remediation guidance for each CVE. Each bar on the chart is clickable, displaying a list of any impacted assets.
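Conceptually, the matching step pairs each detected component and version against a CVE catalog. A toy sketch (the two Apache httpd CVEs are real, but the flat lookup table is a stand-in for real CPE-based, version-range matching):

```python
# Hypothetical catalog; production matching uses CPE identifiers and version ranges.
KNOWN_CVES = {
    ("Apache httpd", "2.4.49"): ["CVE-2021-41773"],
    ("Apache httpd", "2.4.50"): ["CVE-2021-42013"],
}

def cves_for(component, version):
    """Return the CVE IDs recorded for an exact component/version pair."""
    return KNOWN_CVES.get((component, version), [])

cves_for("Apache httpd", "2.4.49")  # ["CVE-2021-41773"]
```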
+
+### Domains administration
+
+This chart provides insight on how an organization manages their domains. Companies with a decentralized domain portfolio management program are susceptible to unnecessary threats, including domain hijacking, domain shadowing, email spoofing, phishing, and illegal domain transfers. A cohesive domain registration process mitigates this risk. For instance, organizations should use the same registrars and registrant contact information for their domains to ensure that all domains are mappable to the same entities. This helps ensure that domains don't slip through the cracks as you update and maintain them.
+
+![Screenshot of domain administration chart](media/Dashboards-12.png)
+
+Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
+
+### Hosting and networking
+
+This chart provides insight on the security posture related to where an organization's hosts are located. Risk associated with ownership of Autonomous Systems depends on the size and maturity of an organization's IT department.
+
+![Screenshot of hosting and networking chart](media/Dashboards-13.png)
+
+Each bar of the chart is clickable, routing to a list of all assets that comprise the value.
+
+### Domains configuration
+
+This section helps organizations understand the configuration of their domain names, surfacing any domains that may be susceptible to unnecessary risk. Extensible Provisioning Protocol (EPP) domain status codes indicate the status of a domain name registration. All domains have at least one code, although multiple codes can apply to a single domain. This section is useful for understanding the policies in place to manage your domains, or missing policies that leave domains vulnerable.
+
+![Screenshot of domain config chart](media/Dashboards-14.png)
+
+For instance, the "clientUpdateProhibited" status code prevents unauthorized updates to your domain name; an organization must contact their registrar to lift this code and make any updates. The chart below searches for domain assets that do not have this status code, indicating that the domain is currently open to updates, which can potentially result in fraud. Users should click any bar on this chart to view a list of assets that do not have the appropriate status codes applied to them so they can update their domain configurations accordingly.
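Auditing for missing status codes is a set-difference check over each domain's EPP codes. A sketch (which lock codes count as "required" is a policy choice; the three client locks below are an assumption, not a Defender EASM rule):

```python
# Assumed policy: every domain should carry these registrar locks.
REQUIRED_LOCKS = {
    "clientUpdateProhibited",
    "clientTransferProhibited",
    "clientDeleteProhibited",
}

def missing_locks(status_codes):
    """Return the lock codes a domain does not have applied."""
    return REQUIRED_LOCKS - set(status_codes)

missing_locks(["clientTransferProhibited"])  # update and delete locks absent
```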
+
+### Open Ports
+
+This section helps users understand how their IP space is managed, detecting services that are exposed on the open internet. Attackers commonly scan ports across the internet to look for known exploits related to service vulnerabilities or misconfigurations. Microsoft identifies these open ports to complement vulnerability assessment tools, flagging observations for review to ensure they are properly managed by your information technology team.
+
+![Screenshot of open ports chart](media/Dashboards-15.png)
+
+By performing basic TCP SYN/ACK scans across all open ports on the addresses in an IP space, Microsoft detects ports that may need to be restricted from direct access to the open internet. Examples include databases, DNS servers, IoT devices, routers and switches. This data can also be used to detect shadow IT assets or insecure remote access services. All bars on this chart are clickable, opening a list of assets that comprise the value so your organization can investigate the open port in question and remediate any risk.
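For intuition, the simplest form of such a check is a TCP connect probe. The sketch below is not equivalent to Microsoft's SYN/ACK scanning (half-open SYN scans require raw sockets and elevated privileges); it merely shows what "open port" means at the socket level:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a full TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```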
+
+### SSL configuration and organization
+
+The SSL configuration and organization charts display common SSL-related issues that may impact functions of your online infrastructure.
+
+![Screenshot of SSL configuration and organization charts](media/Dashboards-16.png)
+
+For instance, the SSL configuration chart displays any detected configuration issues that can disrupt your online services. This includes expired SSL certificates and certificates using outdated signature algorithms like SHA1 and MD5, resulting in unnecessary security risk to your organization.
+
+The SSL organization chart provides insight on the registration of your SSL certificates, indicating the organization and business units associated with each certificate. This can help users understand the designated ownership of these certificates; it is recommended that companies consolidate their organization and unit list when possible to help ensure proper management moving forward.
+
+## GDPR compliance dashboard
+
+The GDPR compliance dashboard presents an analysis of assets in your Confirmed Inventory as they relate to the requirements outlined in the General Data Protection Regulation (GDPR). GDPR is a regulation in European Union (EU) law that enforces data protection and privacy standards for any online entities accessible to the EU. These regulations have become a model for similar laws outside of the EU, so the GDPR serves as an excellent guide on how to handle data privacy worldwide.
+
+This dashboard analyzes an organizationΓÇÖs public-facing web properties to surface any assets that are potentially non-compliant with GDPR.
+
+### Websites by status
+
+This chart organizes your website assets by HTTP response status code. These codes indicate whether a specific HTTP request was successfully completed and provide context as to why a site is inaccessible. HTTP codes can also alert you of redirects, server error responses, and client errors. The HTTP response "451" indicates that a website is unavailable for legal reasons; this may indicate that a site has been blocked for people in the EU because it does not comply with GDPR.
+
+Chart values include Active, Inactive, Requires Authorization, Broken, and Browser Error; users can click any component on the bar graph to view a comprehensive list of assets that comprise the value.
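A rough version of this grouping is a pure function over the response code. The mapping below is an illustrative guess (Defender EASM's exact category rules are not public), covering broad code classes plus the legal-block code:

```python
def site_status(code):
    """Map an HTTP status code to a coarse dashboard-style category."""
    if code == 451:
        return "Unavailable for legal reasons"
    if code in (401, 403):
        return "Requires Authorization"
    if 200 <= code < 300:
        return "Active"
    if 300 <= code < 400:
        return "Redirect"
    if 400 <= code < 500:
        return "Client error"
    return "Server error"

site_status(451)  # "Unavailable for legal reasons"
```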
+
+### SSL certificate posture
+
+An organization's security posture for SSL/TLS Certificates is a critical component of security for web-based communication. SSL certificates are leveraged by websites to ensure secure communication between a website and its users. Decentralized or complex management of SSL certificates heightens the risk of SSL certificates expiring, use of weak ciphers, and potential exposure to fraudulent SSL Registration. The GDPR compliance dashboard provides charts on live sites with certificate issues, certificate expiration time frames, and sites by certificate posture.
+
+### Live sites with cert issues
+
+This chart displays pages that are actively serving content and present users with a warning that the site is insecure. The user must manually accept the warning to view the content on these pages. This can occur for a variety of reasons; this chart organizes results by the specific reason for easy mitigation. Options include broken certificates, active certificate issues, requires authorization and browser certificate errors.
+
+### SSL certificate expiration
+
+This chart displays upcoming SSL Certificate expirations, ensuring that an organization has ample visibility into any upcoming renewals. An expired SSL Certificate leaves corresponding assets susceptible to attack and can make the content of a page inaccessible to the internet.
+
+This chart is organized by the detected expiry window, ranging from already expired to expiring in over 90 days. Users can click any component in the bar graph to access a list of applicable assets, making it easy to send a list of certificate names to your IT Department for remediation.
+
+### Sites by certificate posture
+
+This section analyzes the signature algorithms that power an SSL certificate. SSL certificates can be secured with a variety of cryptographic algorithms; certain newer algorithms are considered more reputable and secure than older algorithms, so companies are advised to retire older algorithms like SHA-1.
+
+Users can click any segment of the pie chart to view a list of assets that comprise the selected value. SHA256 is considered secure, whereas organizations should update any certificates using the SHA1 algorithm.
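Screening certificate metadata for deprecated hashes can be as simple as a prefix check on the signature algorithm name. A sketch, assuming OpenSSL-style algorithm strings such as `sha1WithRSAEncryption`:

```python
WEAK_HASHES = ("md5", "sha1")

def uses_weak_signature(signature_algorithm):
    """True when the certificate's signature hash is MD5 or SHA-1."""
    alg = signature_algorithm.lower()
    return alg.startswith(WEAK_HASHES)

uses_weak_signature("sha1WithRSAEncryption")    # True: schedule for replacement
uses_weak_signature("sha256WithRSAEncryption")  # False
```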
+
+### Personally identifiable information (PII) posture
+
+The protection of personally identifiable information (PII) is a critical component of the General Data Protection Regulation. PII is defined as any data that can identify an individual, including names, addresses, birthdays, or email addresses. Any website that accepts this data through a form must be thoroughly secured according to GDPR guidelines. By analyzing the Document Object Model (DOM) of your pages, Microsoft identifies forms and login pages that may accept PII and should therefore be assessed according to European Union law.
+
+### Login posture
+
+A login page is a page on a website where a user has the option to enter a username and password to gain access to services hosted on that site. Login pages have specific requirements under GDPR, so Defender EASM references the DOM of all scanned pages to search for code that correlates to a login. For instance, login pages must be secure to be compliant.
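A DOM-based heuristic like the one described can be approximated by looking for password inputs in a page's markup. A simplified stand-alone sketch using Python's built-in HTML parser (Defender EASM's actual detection logic is richer than this):

```python
from html.parser import HTMLParser

class LoginFormDetector(HTMLParser):
    """Flags markup containing a password input, a rough proxy for a login page."""

    def __init__(self):
        super().__init__()
        self.has_password_field = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.has_password_field = True

detector = LoginFormDetector()
detector.feed('<form action="/login"><input type="text"><input type="password"></form>')
detector.has_password_field  # True: treat this page as a login page
```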
+
+### Cookie posture
+
+A cookie is information in the form of a very small text file that is placed on the hard drive of the computer running a web browser when browsing a site. Each time a website is visited, the browser sends the cookie back to the server to notify the website of your previous activity. GDPR has specific requirements for obtaining consent to issue a cookie, and different storage regulations for first- versus third-party cookies.
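One small, mechanical piece of a cookie audit is inspecting the attributes set on each cookie. The sketch below flags cookies served without the `Secure` attribute using Python's standard `http.cookies` module; it illustrates attribute inspection only and is not itself a GDPR consent check:

```python
from http.cookies import SimpleCookie

def insecure_cookies(set_cookie_header):
    """Names of cookies in a Set-Cookie header missing the Secure attribute."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return [name for name, morsel in jar.items() if not morsel["secure"]]

insecure_cookies("sessionid=abc123; HttpOnly")          # ["sessionid"]
insecure_cookies("sessionid=abc123; Secure; HttpOnly")  # []
```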
+
+## OWASP top 10 dashboard
+
+The OWASP Top 10 dashboard is designed to provide insight on the most critical security recommendations as designated by OWASP, a reputable open-source foundation for web application security. This list is globally recognized as a critical resource for developers who want to ensure their code is secure. OWASP provides key information about their top 10 security risks, as well as guidance on how to avoid or remediate the issue. This Defender EASM dashboard looks for evidence of these security risks within your Attack Surface and surfaces them, listing any applicable assets and how to remediate the risk.
+
+![Screenshot of OWASP dashboard](media/Dashboards-17.png)
+
+The current OWASP Top 10 list of critical security risks includes:
+
+1. **Broken access control**: the failure of access control infrastructure that enforces policies such that users cannot act outside of their intended permissions.
+2. **Cryptographic failure**: failures related to cryptography (or lack thereof) which often lead to the exposure of sensitive data.
+3. **Injection**: applications vulnerable to injection attacks due to improper handling of data and other compliance-related issues.
+4. **Insecure design**: missing or ineffective security measures that result in weaknesses to your application.
+5. **Security misconfiguration**: missing or incorrect security configurations that are often the result of an insufficiently defined configuration process.
+6. **Vulnerable and outdated components**: outdated components that run the risk of added exposures in comparison to up-to-date software.
+7. **Identification and authentication failures**: failure to properly confirm a user's identity, authentication or session management to protect against authentication-related attacks.
+8. **Software and data integrity failures**: code and infrastructure that does not protect against integrity violations, such as plugins from untrusted sources.
+9. **Security logging and monitoring**: lack of proper security logging and alerting, or related misconfigurations, that can impact an organization's visibility and subsequent accountability over their security posture.
+10. **Server-side request forgery**: web applications that fetch a remote resource without validating the user-supplied URL.
+
+This dashboard provides a description of each critical risk, information on why it matters, and remediation guidance alongside a list of any assets that are potentially impacted. For more information, see the [OWASP website](https://owasp.org/www-project-top-ten/).
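As a concrete illustration of risk 10 above, a common server-side request forgery defense is to validate the user-supplied URL before fetching it. The sketch below is illustrative only; the allowlisted hosts and the `is_safe_url` helper are hypothetical and not part of Defender EASM or OWASP tooling.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts this application is permitted to fetch from.
ALLOWED_HOSTS = {"images.contoso.com", "cdn.contoso.com"}

def is_safe_url(url: str) -> bool:
    """Reject user-supplied URLs that point outside the allowlist (SSRF defense)."""
    parsed = urlparse(url)
    # Only plain HTTP(S) to an explicitly allowed host is accepted; schemes
    # like file:// and internal hosts (e.g. cloud metadata endpoints) are refused.
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS

print(is_safe_url("https://images.contoso.com/logo.png"))      # allowed host
print(is_safe_url("http://169.254.169.254/latest/meta-data"))  # metadata endpoint: blocked
```

A deny-by-default allowlist like this is generally preferred over trying to blocklist known-bad addresses, since internal address ranges are easy to miss.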
+
+## Next steps
+
+- [Understanding asset details](understanding-asset-details.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
+
+ Title: Understanding inventory assets
+description: Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets.
+++ Last updated : 07/14/2022+++
+# Understanding inventory assets
+
+## Overview
+
+Microsoft's proprietary discovery technology recursively searches for infrastructure with observed connections to known legitimate assets (i.e. discovery "seeds") to make inferences about that infrastructure's relationship to the organization and uncover previously unknown and unmonitored properties.
+
+Defender EASM includes the discovery of the following kinds of assets:
+
+- Domains
+- Hosts
+- Pages
+- IP Blocks
+- IP Addresses
+- Autonomous System Numbers (ASNs)
+- SSL Certificates
+- WHOIS Contacts
+
+These asset types comprise your attack surface inventory in Defender EASM. This solution discovers externally facing assets that are exposed to the open internet outside of traditional firewall protection; they need to be monitored and maintained to minimize risk and improve an organization's security posture. Microsoft Defender External Attack Surface Management (Defender EASM) actively discovers and monitors these assets, then surfaces key insights that help customers efficiently address any vulnerabilities to their organization.
+
+![Screenshot of Inventory screen](media/Inventory-1.png)
+
+## Asset states
+
+All assets are labeled as one of the following states:
+
+| State name | Description |
+|--|--|
+| Approved Inventory | A part of your owned attack surface; an item that you are directly responsible for. |
+| Dependency | Infrastructure that is owned by a third party but is part of your attack surface because it directly supports the operation of your owned assets. For example, you might depend on an IT provider to host your web content. While the domain, hostname, and pages would be part of your "Approved Inventory," you may wish to treat the IP address running the host as a "Dependency." |
+| Monitor Only | An asset that is relevant to your attack surface but is neither directly controlled nor a technical dependency. For example, independent franchisees or assets belonging to related companies might be labeled as "Monitor Only" rather than "Approved Inventory" to separate the groups for reporting purposes. |
+| Candidate | An asset that has some relationship to your organization's known seed assets but does not have a strong enough connection to immediately label it as "Approved Inventory." These candidate assets must be manually reviewed to determine ownership. |
+| Requires Investigation | A state similar to "Candidate," but this value is applied to assets that require manual investigation to validate. This is determined based on our internally generated confidence scores that assess the strength of detected connections between assets. It does not indicate the infrastructure's exact relationship to the organization as much as it denotes that this asset has been flagged as requiring additional review to determine how it should be categorized. |
+
+## Handling of different asset states
+
+These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, "Approved Inventory" assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, "Candidate" assets are only scanned during the discovery process; it's important to review these assets and change their state to "Approved Inventory" if they are owned by your organization.
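The default dashboard behavior described above amounts to a filter on asset state. A minimal sketch, using the state names from the table; the inventory records themselves are hypothetical, not a Defender EASM data model:

```python
from enum import Enum

class AssetState(Enum):
    APPROVED_INVENTORY = "Approved Inventory"
    DEPENDENCY = "Dependency"
    MONITOR_ONLY = "Monitor Only"
    CANDIDATE = "Candidate"
    REQUIRES_INVESTIGATION = "Requires Investigation"

# Hypothetical inventory records for illustration.
assets = [
    {"name": "contoso.com", "state": AssetState.APPROVED_INVENTORY},
    {"name": "203.0.113.10", "state": AssetState.DEPENDENCY},
    {"name": "contoso-events.com", "state": AssetState.CANDIDATE},
]

def dashboard_assets(inventory):
    """Only 'Approved Inventory' assets appear in dashboard charts by default."""
    return [a["name"] for a in inventory if a["state"] is AssetState.APPROVED_INVENTORY]

print(dashboard_assets(assets))
```

Adjusting inventory filters in the portal corresponds to widening the predicate above to include other states.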
+
+## Next steps
+
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Understanding asset details](understanding-asset-details.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
+
+ Title: Using and managing discovery
+description: Using and managing discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# Using and managing discovery
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface. Discovery scans the internet for assets owned by your organization to uncover previously unknown and unmonitored properties. Discovered assets are indexed in a customer's inventory, providing a dynamic system of record of web applications, third-party dependencies, and web infrastructure under the organization's management through a single pane of glass.
+
+Before you run a custom discovery, see the [What is discovery?](what-is-discovery.md) article to understand key concepts mentioned in this article.
+
+## Accessing your automated attack surface
+
+Microsoft has preemptively configured the attack surfaces of many organizations, mapping their initial attack surface by discovering infrastructure that's connected to known assets. It is recommended that all users search for their organization's attack surface before creating a custom attack surface and running additional discoveries. This enables users to quickly access their inventory as Defender EASM refreshes the data, adding additional assets and recent context to your Attack Surface.
+
+When first accessing your Defender EASM instance, select "Getting Started" in the "General" section to search for your organization in the list of automated attack surfaces. Then select your organization from the list and click "Build my Attack Surface".
+
+![Screenshot of pre-configured attack surface selection screen](media/Discovery_1.png)
+
+At this point, the discovery will be running in the background. If you selected a pre-configured Attack Surface from the list of available organizations, you will be redirected to the Dashboard Overview screen where you can view insights into your organization's infrastructure in Preview Mode. Review these dashboard insights to become familiar with your Attack Surface as you wait for additional assets to be discovered and populated in your inventory. See the [Understanding dashboards](understanding-dashboards.md) article for more information on how to derive insights from these dashboards.
+
+If you notice any missing assets or have other entities to manage that may not be discovered through infrastructure clearly linked to your organization, you can elect to run customized discoveries to detect these outlier assets.
+
+## Customizing discovery
+
+Custom discoveries are ideal for organizations that require deeper visibility into infrastructure that may not be immediately linked to their primary seed assets. By submitting a larger list of known assets to operate as discovery seeds, the discovery engine will return a wider pool of assets. Custom discovery can also help organizations find disparate infrastructure that may relate to independent business units and acquired companies.
+
+### Discovery groups
+
+Custom discoveries are organized into Discovery Groups. They are independent seed clusters that comprise a single discovery run and operate on their own recurrence schedules. Users can elect to organize their Discovery Groups to delineate assets in whatever way best benefits their company and workflows. Common options include organizing by responsible team/business unit, brands or subsidiaries.
+
+### Creating a discovery group
+
+1. Select the **Discovery** panel under the **Manage** section in the left-hand navigation column.
+
+ ![Screenshot of EASM instance from overview page with manage section highlighted](media/Discovery_2.png)
+
+2. This Discovery page shows your list of Discovery Groups by default. This list will be empty when you first access the platform. To run your first discovery, click **Add Discovery Group**.
+
+   ![Screenshot of Discovery screen with "add disco group" highlighted](media/Discovery_3.png)
+
+3. First, name your new discovery group and add a description. The **Recurring Frequency** field allows you to schedule discovery runs for this group, scanning for new assets related to the designated seeds on a continuous basis. The default recurrence selection is **Weekly**; Microsoft recommends this cadence to ensure that your organization's assets are routinely monitored and updated. For a single, one-time discovery run, select **Never**. However, we recommend that users keep the **Weekly** default cadence and instead turn off historical monitoring within their Discovery Group settings if they later decide to discontinue recurrent discovery runs.
+
+ Select **Next: Seeds >**
+
+ ![Screenshot of first page of disco group setup](media/Discovery_4.png)
+
+4. Next, select the seeds that you'd like to use for this Discovery Group. Seeds are known assets that belong to your organization; the Defender EASM platform scans these entities, mapping their connections to other online infrastructure to create your Attack Surface.
+
+ ![Screenshot of seed selection page of disco group setup](media/Discovery_5.png)
+
+ The **Quick Start** option lets you search for your organization in a list of pre-populated Attack Surfaces. You can quickly create a Discovery Group based on the known assets belonging to your organization.
+
+ ![Screenshot of pre-baked attack surface selection page, then output in seed list](media/Discovery_6.png)
+
+ ![Screenshot of pre-baked attack surface selection page.](media/Discovery_7.png)
+
+   Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations with subsidiaries that are likely connected to their central infrastructure but don't belong to the organization.
+
+ Once your seeds have been selected, select **Review + Create**.
+
+5. Review your group information and seed list, then select **Create & Run**.
+
+ ![Screenshot of review + create screen](media/Discovery_8.png)
+
+ You will then be taken back to the main Discovery page that displays your Discovery Groups. Once your discovery run is complete, you will see new assets added to your Confirmed Inventory.
+
+### Viewing and editing discovery groups
+
+Users can manage their discovery groups from the main "Discovery" page. The default view displays a list of all your discovery groups and some key data about each one. From the list view, you can see the number of seeds, recurrence schedule, last run date, and created date for each group.
+
+![Screenshot of discovery groups screen](media/Discovery_9.png)
+
+Click on any discovery group to view more information, edit the group, or immediately kickstart a new discovery process.
+
+### Run history
+
+The discovery group details page contains the run history for the group. Once expanded, this section displays key information about each discovery run that has been performed on the specific group of seeds. The Status column indicates whether the run is "In Progress", "Complete", or "Failed". This section also includes "started" and "completed" timestamps and counts of the total number of assets versus new assets discovered.
+
+Run history is organized by the seed assets scanned during the discovery run. To see a list of the applicable seeds, click "Details". This opens a right-hand pane that lists all the seeds and exclusions by kind and name.
+
+![Screenshot of run history for disco group screen](media/Discovery_10.png)
+
+### Viewing seeds and exclusions
+
+The Discovery page defaults to a list view of Discovery Groups, but users can also view lists of all seeds and excluded entities from this page. Simply click either tab to view a list of all the seeds or exclusions that power your discovery groups.
+
+### Seeds
+
+The seed list view displays seed values with three columns: type, source name, and discovery group. The "type" field displays the category of the seed asset; the most common seeds are domains, hosts, and IP blocks, but you can also use email contacts, ASNs, certificate common names, or WhoIs organizations. The source name is simply the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
+
+![Screenshot of seeds view of discovery page](media/Discovery_11.png)
+
+### Exclusions
+
+Similarly, you can click the "Exclusions" tab to see a list of entities that have been excluded from the discovery group. This means that these assets will not be used as discovery seeds or added to your inventory. It is important to note that exclusions only impact future discovery runs for an individual discovery group. The "type" field displays the category of the excluded entity. The source name is the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups where this exclusion is present; each value is clickable, taking you to the details page for that discovery group.
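Conceptually, exclusions act as a filter applied on each future run of a discovery group: excluded entities are never used as seeds, and matches found during the run stay out of inventory. A hedged sketch with hypothetical seed and exclusion values:

```python
# Hypothetical seed and exclusion lists for a single discovery group.
seeds = ["contoso.com", "fabrikam.com", "203.0.113.0/24"]
exclusions = {"fabrikam.com"}  # e.g. a brand owned by an independent subsidiary

# Excluded entities are removed from the seed set before the run starts.
active_seeds = [s for s in seeds if s not in exclusions]

def keep(discovered_asset: str) -> bool:
    """Drop any discovered asset that appears in the exclusion list."""
    return discovered_asset not in exclusions

print(active_seeds)
```

Because the filter is per discovery group, the same entity can be excluded in one group and remain an active seed in another.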
+
+## Next steps
+
+- [Discovering your attack surface](discovering-your-attack-surface.md)
+- [Understanding asset details](understanding-asset-details.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
+
+ Title: What is Discovery?
+description: What is Discovery - Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface.
+++ Last updated : 07/14/2022+++
+# What is Discovery?
+
+## Overview
+
+Microsoft Defender External Attack Surface Management (Defender EASM) relies on our proprietary discovery technology to continuously define your organization's unique Internet-exposed attack surface. Discovery scans known assets owned by your organization to uncover previously unknown and unmonitored properties. Discovered assets are indexed in a customer's inventory, providing a dynamic system of record of web applications, third-party dependencies, and web infrastructure under the organization's management through a single pane of glass.
+
+![Screenshot of Discovery configuration screen](media/Discovery-1.png)
+
+Through this process, Microsoft enables organizations to proactively monitor their constantly shifting digital attack surface and identify emerging risks and policy violations as they arise. Many vulnerability programs lack visibility outside their firewall, leaving them unaware of external risks and threats, the primary source of data breaches. At the same time, digital growth continues to outpace an enterprise security team's ability to protect it. Digital initiatives and overly common "shadow IT" lead to an expanding attack surface outside the firewall. At this pace, it is nearly impossible to validate controls, protections, and compliance requirements. Without Defender EASM, it is nearly impossible to identify and remove vulnerabilities, and scanners cannot reach beyond the firewall to assess the full attack surface.
+
+## How it works
+
+To create a comprehensive mapping of your organization's attack surface, the system first intakes known assets (i.e. "seeds") that are recursively scanned to discover additional entities through their connections to a seed. An initial seed may be any of the following kinds of web infrastructure indexed by Microsoft:
+
+- Pages
+- Host Name
+- Domain
+- SSL Cert
+- Contact Email Address
+- IP Block
+- IP Address
+- ASN
+
+![Screenshot of Seed list view on discovery screen](media/Discovery-2.png)
+
+Starting with a seed, the system then discovers associations to other online infrastructure to discover other assets owned by your organization; this process ultimately creates your attack surface inventory. The discovery process uses the seeds as the central nodes and spiders outward towards the periphery of your attack surface by identifying all the infrastructure directly connected to the seed, then identifying everything related to each asset in that first set of connections, and so on. This process continues until we reach the edge of what your organization is responsible for managing.
+
+For example, to discover Contoso's infrastructure, you might use the domain, contoso.com, as the initial keystone seed. Starting with this seed, we could consult the following sources and derive the following relationships:
+
+| Data source | Example |
+|--|--|
+| WhoIs records | Other domain names registered to the same contact email or registrant organization used to register contoso.com likely also belong to Contoso |
+| WhoIs records | All domain names registered to any @contoso.com email address likely also belong to Contoso |
+| WhoIs records | Other domains associated with the same name server as contoso.com may also belong to Contoso |
+| DNS records | We can assume that Contoso also owns all observed hosts on the domains it owns and any websites that are associated with those hosts |
+| DNS records | Domains with other hosts resolving to the same IP blocks might also belong to Contoso if the organization owns the IP block |
+| DNS records | Mail servers associated with Contoso-owned domain names would also belong to Contoso |
+| SSL certificates | Contoso probably also owns all SSL certificates connected to each of those hosts and any other hosts using the same SSL certs |
+| ASN records | Other IP blocks associated with the same ASN as the IP blocks to which hosts on Contoso's domain names are connected may also belong to Contoso, as would all the hosts and domains that resolve to them |
+
+Using this set of first-level connections, we can quickly derive an entirely new set of assets to investigate. Before performing additional recursions, Microsoft determines whether a connection is strong enough for a discovered entity to be automatically added to your Confirmed Inventory. For each of these assets, the discovery system runs automated, recursive searches based on all available attributes to find second-level and third-level connections. This repetitive process provides more information on an organizationΓÇÖs online infrastructure and therefore discovers disparate assets that may not have been discovered and subsequently monitored otherwise.
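The recursive expansion described above can be sketched as a bounded breadth-first traversal over an asset graph. The connection data and depth limit below are illustrative assumptions for the Contoso example, not Defender EASM internals:

```python
from collections import deque

# Illustrative connection graph: asset -> directly connected infrastructure.
# Real discovery derives these edges from WhoIs, DNS, SSL, and ASN data.
CONNECTIONS = {
    "contoso.com": ["mail.contoso.com", "www.contoso.com"],
    "www.contoso.com": ["203.0.113.10"],
    "mail.contoso.com": ["203.0.113.11"],
    "203.0.113.10": ["203.0.113.0/24"],
}

def discover(seeds, max_depth=3):
    """Breadth-first expansion from seed assets, to a bounded recursion depth."""
    inventory = set(seeds)
    queue = deque((seed, 0) for seed in seeds)
    while queue:
        asset, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't recurse past the configured depth
        for neighbor in CONNECTIONS.get(asset, []):
            if neighbor not in inventory:
                inventory.add(neighbor)
                queue.append((neighbor, depth + 1))
    return inventory

print(sorted(discover(["contoso.com"])))
```

In the real system, each newly found connection is additionally weighed by a confidence score before an asset is promoted to Confirmed Inventory; this sketch omits that step.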
+
+## Automated versus customized attack surfaces
+
+When first using Defender EASM, you can access a pre-built inventory for your organization to quickly kick-start your workflows. From the "Getting Started" page, users can search for their organization to quickly populate their inventory based on asset connections already identified by Microsoft. It is recommended that all users search for their organization's pre-built Attack Surface before creating a custom inventory.
+
+To build a customized inventory, users create Discovery Groups to organize and manage the seeds they use when running discoveries. Separate Discovery groups allow users to automate the discovery process, configuring the seed list and recurrent run schedule.
+
+![Screenshot of Automated attack surface selection screen](media/Discovery-3.png)
+
+## Confirmed inventory vs. candidate assets
+
+If the discovery engine detects a strong connection between a potential asset and the initial seed, the system will automatically include that asset in an organization's "Confirmed Inventory." As the connections to this seed are iteratively scanned, discovering third- or fourth-level connections, the system's confidence in the ownership of any newly detected assets is lower. Similarly, the system may detect assets that are relevant to your organization but may not be directly owned by it.
+For these reasons, newly discovered assets are labeled as one of the following states:
+
+| State name | Description |
+|--|--|
+| Approved Inventory | A part of your owned attack surface; an item that you are directly responsible for. |
+| Dependency | Infrastructure that is owned by a third party but is part of your attack surface because it directly supports the operation of your owned assets. For example, you might depend on an IT provider to host your web content. While the domain, hostname, and pages would be part of your "Approved Inventory," you may wish to treat the IP address running the host as a "Dependency." |
+| Monitor Only | An asset that is relevant to your attack surface but is neither directly controlled nor a technical dependency. For example, independent franchisees or assets belonging to related companies might be labeled as "Monitor Only" rather than "Approved Inventory" to separate the groups for reporting purposes. |
+| Candidate | An asset that has some relationship to your organization's known seed assets but does not have a strong enough connection to immediately label it as "Approved Inventory." These candidate assets must be manually reviewed to determine ownership. |
+| Requires Investigation | A state similar to "Candidate," but this value is applied to assets that require manual investigation to validate. This is determined based on our internally generated confidence scores that assess the strength of detected connections between assets. It does not indicate the infrastructure's exact relationship to the organization as much as it denotes that this asset has been flagged as requiring additional review to determine how it should be categorized. |
+
+Asset details are continuously refreshed and updated over time to maintain an accurate map of asset states and relationships, as well as to uncover newly created assets as they emerge. The discovery process is managed by placing seeds in Discovery Groups that can be scheduled to rerun on a recurrent basis. Once an inventory is populated, the Defender EASM system continuously scans your assets with Microsoft's virtual user technology to uncover fresh, detailed data about each one. This process examines the content and behavior of each page within applicable sites to provide robust information that can be used to identify vulnerabilities, compliance issues, and other potential risks to your organization.
+
+## Next steps
+- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
+- [Understanding asset details](understanding-asset-details.md)
firewall Threat Intel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/threat-intel.md
Previously updated : 11/04/2021 Last updated : 08/01/2022 # Azure Firewall threat intelligence-based filtering
-Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed, which includes multiple sources including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.<br>
+Threat intelligence-based filtering can be enabled for your firewall to alert and deny traffic from/to known malicious IP addresses, FQDNs, and URLs. The IP addresses, domains, and URLs are sourced from the Microsoft Threat Intelligence feed, which includes multiple sources including the Microsoft Cyber Security team. [Intelligent Security Graph](https://www.microsoft.com/security/operations/intelligence) powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.<br>
<br> :::image type="content" source="media/threat-intel/firewall-threat.png" alt-text="Firewall threat intelligence" border="false":::
firewall Tutorial Firewall Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/tutorial-firewall-deploy-portal.md
Previously updated : 05/25/2022 Last updated : 08/01/2022 #Customer intent: As an administrator new to this service, I want to control outbound network access from resources located in an Azure subnet.
First, create a resource group to contain the resources needed to deploy the fir
The resource group contains all the resources used in this procedure. 1. Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Add**.
+2. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups* from any page. Then select **Create**.
4. For **Subscription**, select your subscription.
-1. For **Resource group name**, enter *Test-FW-RG*.
+1. For **Resource group name**, type **Test-FW-RG**.
1. For **Resource group location**, select a location. All other resources that you create must be in the same location. 1. Select **Review + create**. 1. Select **Create**. ### Create a VNet
-This VNet will have three subnets.
+This VNet will have two subnets.
> [!NOTE] > The size of the AzureFirewallSubnet subnet is /26. For more information about the subnet size, see [Azure Firewall FAQ](firewall-faq.yml#why-does-azure-firewall-need-a--26-subnet-size). 1. On the Azure portal menu or from the **Home** page, select **Create a resource**. 1. Select **Networking** > **Virtual network**.
-1. Select **Create**.
1. For **Subscription**, select your subscription. 1. For **Resource group**, select **Test-FW-RG**. 1. For **Name**, type **Test-FW-VN**. 1. For **Region**, select the same location that you used previously. 1. Select **Next: IP addresses**.
-1. For **IPv4 Address space**, type **10.0.0.0/16**.
-1. Under **Subnet**, select **default**.
-1. For **Subnet name** type **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
-1. For **Address range**, type **10.0.1.0/26**.
+1. For **IPv4 Address space**, accept the default **10.0.0.0/16**.
+1. Under **Subnet name**, select **default**.
+1. For **Subnet name** change it to **AzureFirewallSubnet**. The firewall will be in this subnet, and the subnet name **must** be AzureFirewallSubnet.
+1. For **Address range**, change it to **10.0.1.0/26**.
1. Select **Save**. Next, create a subnet for the workload server.
This VNet will have three subnets.
Now create the workload virtual machine, and place it in the **Workload-SN** subnet. 1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
-2. Select **Windows Server 2016 Datacenter**.
+2. Select **Windows Server 2019 Datacenter**.
4. Enter these values for the virtual machine: |Setting |Value |
Now create the workload virtual machine, and place it in the **Workload-SN** sub
|Resource group |**Test-FW-RG**| |Virtual machine name |**Srv-Work**| |Region |Same as previous|
- |Image|Windows Server 2016 Datacenter|
+ |Image|Windows Server 2019 Datacenter|
|Administrator user name |Type a user name| |Password |Type a password|
Now create the workload virtual machine, and place it in the **Workload-SN** sub
8. Make sure that **Test-FW-VN** is selected for the virtual network and the subnet is **Workload-SN**. 9. For **Public IP**, select **None**. 11. Accept the other defaults and select **Next: Management**.
-12. Select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
+12. For **Boot diagnostics**, select **Disable** to disable boot diagnostics. Accept the other defaults and select **Review + create**.
13. Review the settings on the summary page, and then select **Create**.
+1. After the deployment is complete, select **Srv-Work** and note the private IP address that you'll need to use later.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)] + ## Deploy the firewall Deploy the firewall into the VNet.
Deploy the firewall into the VNet.
|Resource group |**Test-FW-RG** | |Name |**Test-FW01**| |Region |Select the same location that you used previously|
+ |Firewall tier|**Standard**|
|Firewall management|**Use Firewall rules (classic) to manage this firewall**| |Choose a virtual network |**Use existing**: **Test-FW-VN**| |Public IP address |**Add new**<br>**Name**: **fw-pip**|
As a result, there is no need create an additional UDR to include the AzureFirew
For the **Workload-SN** subnet, configure the outbound default route to go through the firewall.
-1. On the Azure portal menu, select **All services** or search for and select *All services* from any page.
-2. Under **Networking**, select **Route tables**.
-3. Select **Add**.
+1. On the Azure portal menu, select **Create a resource**.
+2. Under **Networking**, select **Route table**.
5. For **Subscription**, select your subscription. 6. For **Resource group**, select **Test-FW-RG**. 7. For **Region**, select the same location that you used previously.
For the **Workload-SN** subnet, configure the outbound default route to go throu
After deployment completes, select **Go to resource**.
-1. On the Firewall-route page, select **Subnets** and then select **Associate**.
+1. On the **Firewall-route** page, select **Subnets** and then select **Associate**.
1. Select **Virtual network** > **Test-FW-VN**. 1. For **Subnet**, select **Workload-SN**. Make sure that you select only the **Workload-SN** subnet for this route, otherwise your firewall won't work correctly. 13. Select **OK**. 14. Select **Routes** and then select **Add**. 15. For **Route name**, type **fw-dg**.
-16. For **Address prefix**, type **0.0.0.0/0**.
-17. For **Next hop type**, select **Virtual appliance**.
+1. For **Address prefix destination**, select **IP Addresses**.
+1. For **Destination IP addresses/CIDR ranges**, type **0.0.0.0/0**.
+1. For **Next hop type**, select **Virtual appliance**.
Azure Firewall is actually a managed service, but virtual appliance works in this situation. 18. For **Next hop address**, type the private IP address for the firewall that you noted previously.
This is the network rule that allows outbound access to two IP addresses at port
2. For **Destination type** select **IP address**. 3. For **Destination address**, type **209.244.0.3,209.244.0.4**
- These are public DNS servers operated by CenturyLink.
+ These are public DNS servers operated by Level3.
1. For **Destination Ports**, type **53**. 2. Select **Add**.
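The network rule above can be summarized as the following sketch of its payload. The rule name `Allow-DNS` is hypothetical; the field names mirror the general shape of an Azure Firewall network rule, simplified for illustration.

```python
def build_dns_network_rule() -> dict:
    """Model the rule above: allow UDP traffic on port 53 from any
    source to the two Level3 public DNS servers."""
    return {
        "name": "Allow-DNS",  # hypothetical rule name
        "protocols": ["UDP"],
        "sourceAddresses": ["*"],
        "destinationAddresses": ["209.244.0.3", "209.244.0.4"],
        "destinationPorts": ["53"],
    }
```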
This rule allows you to connect a remote desktop to the Srv-Work virtual machine through the firewall.
8. For **Source**, type **\***. 9. For **Destination address**, type the firewall public IP address. 10. For **Destination Ports**, type **3389**.
-11. For **Translated address**, type the **Srv-work** private IP address.
+11. For **Translated address**, type the Srv-work private IP address.
12. For **Translated port**, type **3389**. 13. Select **Add**.
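The DNAT rule above can be sketched the same way. `rdp-nat` is a hypothetical rule name, and both IP addresses are parameters you supply from your own deployment; the structure is a simplified illustration of a firewall NAT rule, not a verbatim API payload.

```python
def build_rdp_dnat_rule(firewall_public_ip: str, srv_work_private_ip: str) -> dict:
    """Model the DNAT rule above: RDP (TCP 3389) arriving at the
    firewall's public IP is translated to the Srv-Work private address."""
    return {
        "name": "rdp-nat",  # hypothetical rule name
        "protocols": ["TCP"],
        "sourceAddresses": ["*"],
        "destinationAddresses": [firewall_public_ip],
        "destinationPorts": ["3389"],
        "translatedAddress": srv_work_private_ip,
        "translatedPort": "3389",
    }
```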
For testing purposes, configure the server's primary and secondary DNS addresses
Now, test the firewall to confirm that it works as expected.
-1. Connect a remote desktop to firewall public IP address and sign in to the **Srv-Work** virtual machine.
-3. Open Internet Explorer and browse to `https://www.google.com`.
+1. Connect a remote desktop to the firewall public IP address and sign in to the Srv-Work virtual machine.
+1. Open Internet Explorer and browse to `https://www.google.com`.
4. Select **OK** > **Close** on the Internet Explorer security alerts. You should see the Google home page.
So now you've verified that the firewall rules are working:
+* You can connect to the virtual machine using RDP.
* You can browse to the one allowed FQDN, but not to any others. * You can resolve DNS names using the configured external DNS server.
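Before opening the remote desktop session in the test above, a quick TCP reachability probe can confirm the DNAT rule is forwarding port 3389. This is an optional helper sketch, not part of the tutorial steps; the host and port are whatever your deployment uses.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within
    the timeout, e.g. RDP (3389) against the firewall's public IP."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```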
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark description: Details of the Azure Security Benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
initiative definition.
|[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security, isolation and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway, can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) |
-|[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
+|[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](/azure/azure-cache-for-redis/cache-private-link). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) |
|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | |[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. 
The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) |
-|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
-|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [/azure/key-vault/general/network-security](/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F40cec1dd-a100-4920-b15b-3024fe8901ab) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. Learn more about private links at: [/azure/machine-learning/how-to-configure-private-link](/azure/machine-learning/how-to-configure-private-link). |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit.json) |
|[Azure SignalR Service should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2393d2cf-a342-44cd-a2e2-fe0188fd1234) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Azure SignalR Service resource instead of the entire service, you'll reduce your data leakage risks. Learn more about private links at: [https://aka.ms/asrs/privatelink](https://aka.ms/asrs/privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SignalR/SignalR_PrivateEndpointEnabled_Audit_v2.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. 
|Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |Disabling public network access improves security by ensuring that Cognitive Services account isn't exposed on the public internet. Creating private endpoints can limit exposure of Cognitive Services account. Learn more at: [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) |
initiative definition.
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | |[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | |[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. 
Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) |
-|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
+|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |
### Deploy firewall at the edge of enterprise network
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[Cosmos DB database accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5450f5bd-9c72-4390-a9c4-a7aba4edfdd2) |Disabling local authentication methods improves security by ensuring that Cosmos DB database accounts exclusively require Azure Active Directory identities for authentication. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_DisableLocalAuth_AuditDeny.json) |
+|[Cosmos DB database accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5450f5bd-9c72-4390-a9c4-a7aba4edfdd2) |Disabling local authentication methods improves security by ensuring that Cosmos DB database accounts exclusively require Azure Active Directory identities for authentication. Learn more at: [https://docs.microsoft.com/azure/cosmos-db/how-to-setup-rbac#disable-local-auth](/azure/cosmos-db/how-to-setup-rbac#disable-local-auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_DisableLocalAuth_AuditDeny.json) |
|[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | ### Manage application identities securely and automatically
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](/azure/virtual-machines/linux/create-ssh-keys-detailed). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[MFA should be enabled accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) | |[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) | |[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
initiative definition.
||||| |[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) | |[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
-|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) | |[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) | |[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
+|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
-|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e42c1f2-a2ab-49bc-994a-12bcd0dc4ac2) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented here - [https://docs.microsoft.com/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions](/azure/security-center/security-center-services?tabs=features-windows#supported-endpoint-protection-solutions). Endpoint protection assessment is documented here - [https://docs.microsoft.com/azure/security-center/security-center-endpoint-protection](/azure/security-center/security-center-endpoint-protection). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionHealthIssuesShouldBeResolvedOnYourMachines_Audit.json) |
|[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f7c564c-0a90-4d44-b7e1-9d456cffaee8) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EndpointProtectionShouldBeInstalledOnYourMachines_Audit.json) |
|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) |
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
governance Azure Security Benchmarkv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md
Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Azure Security Benchmark (Azure Government) description: Details of the Azure Security Benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
This built-in initiative is deployed as part of the
|||||
|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
-### Ensure ASC Default policy setting "Monitor Endpoint Protection" is not "Disabled"
+### Ensure ASC Default policy setting "Monitor Disk Encryption" is not "Disabled"
-**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.5
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.6
**Ownership**: Customer

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
governance Rbi_Itf_Nbfc_V2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi_itf_nbfc_v2017.md
+
+ Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC
+description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
+Last updated : 08/01/2022
+# Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative
+
+The following article details how the Azure Policy Regulatory Compliance built-in initiative
+definition maps to **compliance domains** and **controls** in Reserve Bank of India - IT Framework for NBFC.
+For more information about this compliance standard, see
+[Reserve Bank of India - IT Framework for NBFC](https://www.rbi.org.in/Scripts/NotificationUser.aspx?Id=10999&Mode=0#C1). To understand
+_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
+[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+The following mappings are to the **Reserve Bank of India - IT Framework for NBFC** controls. Use the
+navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
+initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
+Then, find and select the **[Preview]: Reserve Bank of India - IT Framework for NBFC** Regulatory Compliance built-in
+initiative definition.
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/RBI_ITF_NBFC_v2017.json).
+
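The GitHub commit history linked above tracks the initiative's JSON source, which follows the standard `policySetDefinition` schema: each entry under `properties.policyDefinitions` names a `policyDefinitionId` and the `groupNames` (controls) it is associated with. A minimal sketch of reading that mapping — the fragment below is illustrative only (the group names and second GUID are placeholders, not taken from the real initiative file):

```python
import json

# Illustrative fragment following the policySetDefinition schema; the
# groupNames and IDs here are hypothetical placeholders for demonstration.
initiative_json = """
{
  "properties": {
    "policyDefinitions": [
      {
        "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/501541f7-f7e7-4cd6-868c-4190fdad3ac9",
        "groupNames": ["Control_1", "Control_6"]
      },
      {
        "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/00000000-0000-0000-0000-000000000000",
        "groupNames": ["Control_1.1"]
      }
    ]
  }
}
"""

def controls_per_policy(doc: str) -> dict:
    """Map each policy definition GUID to the control groups that reference it."""
    data = json.loads(doc)
    mapping = {}
    for entry in data["properties"]["policyDefinitions"]:
        guid = entry["policyDefinitionId"].rsplit("/", 1)[-1]
        mapping[guid] = entry.get("groupNames", [])
    return mapping

print(controls_per_policy(initiative_json))
```

Because a single policy definition can carry several group names, the same policy row can appear under multiple controls in the tables below.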
+## IT Governance
+
+### IT Governance-1
+
+**ID**: RBI IT Framework 1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+
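Each portal link in the table above follows one fixed pattern: the `PolicyDetailBlade` deep link followed by the percent-encoded resource ID of the policy definition. A small sketch of building such a link for any definition GUID (checked here against the first row's GUID):

```python
from urllib.parse import quote

PORTAL_BLADE = ("https://portal.azure.com/#blade/"
                "Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/")

def portal_link(definition_guid: str) -> str:
    """Build the Azure portal deep link for a built-in policy definition."""
    resource_id = f"/providers/Microsoft.Authorization/policyDefinitions/{definition_guid}"
    # The blade expects the full resource ID percent-encoded, slashes included.
    return PORTAL_BLADE + quote(resource_id, safe="")

# GUID of "A vulnerability assessment solution should be enabled on your virtual machines"
print(portal_link("501541f7-f7e7-4cd6-868c-4190fdad3ac9"))
```

The same pattern applies to every row in the remaining tables; only the trailing GUID changes.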
+### IT Governance-1.1
+
+**ID**: RBI IT Framework 1.1
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[Management ports should be closed on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22730e10-96f6-4aac-ad84-9383d35b5917) |Open remote management ports are exposing your VM to a high level of risk from Internet-based attacks. These attacks attempt to brute force credentials to gain admin access to the machine. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OpenManagementPortsOnVirtualMachines_Audit.json) |
+
+## IT Policy
+
+### IT Policy-2
+
+**ID**: RBI IT Framework 2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
+|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) |
+
+## Information and Cyber Security
+
+### Information Security-3
+
+**ID**: RBI IT Framework 3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+
+### Identification and Classification of Information Assets-3.1
+
+**ID**: RBI IT Framework 3.1.a
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+
+### Segregation of Functions-3.1
+
+**ID**: RBI IT Framework 3.1.b
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment only applies to trusted launch enabled Windows virtual machines. |Audit, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) |
+|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
+|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
+|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |
+|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
+
+### Role based Access Control-3.1
+
+**ID**: RBI IT Framework 3.1.c
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy checks that a log profile is enabled for exporting activity logs. It audits if no log profile has been created to export the logs to either a storage account or an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |
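
Most rows above list `AuditIfNotExists, Disabled` in the Effect(s) column. As a rough illustration only (not the real Azure Policy engine, whose evaluation logic lives in the linked definition JSON), the two effects behave roughly like this toy model: `AuditIfNotExists` marks a matching resource non-compliant when a required related resource or setting is missing, while `Disabled` skips evaluation entirely.

```python
def evaluate_audit_if_not_exists(resource_matches: bool,
                                 related_resource_exists: bool,
                                 effect: str = "AuditIfNotExists") -> str:
    """Toy sketch of the AuditIfNotExists/Disabled effects from the
    Effect(s) column; the real engine evaluates the 'if'/'then' blocks
    in each definition's JSON."""
    if effect == "Disabled":
        # The assignment's effect parameter turns evaluation off.
        return "NotEvaluated"
    if not resource_matches:
        # The 'if' condition doesn't match this resource at all.
        return "NotApplicable"
    # Matching resource: compliant only if the required related
    # resource/configuration (e.g. MFA, a security contact) exists.
    return "Compliant" if related_resource_exists else "NonCompliant"

print(evaluate_audit_if_not_exists(True, False))   # NonCompliant
print(evaluate_audit_if_not_exists(True, True))    # Compliant
```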
+
+### Maker-checker-3.1
+
+**ID**: RBI IT Framework 3.1.f
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
+|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7246b6381474) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccounts_Audit.json) |
+|[Deprecated accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb62a0c-3560-49e1-89ed-27e074e9f8ad) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveDeprecatedAccountsWithOwnerPermissions_Audit.json) |
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
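
Each Azure-portal link in these tables follows the same pattern: the policy definition's full resource ID (`/providers/Microsoft.Authorization/policyDefinitions/<guid>`) is percent-encoded into the `PolicyDetailBlade` URL. A small helper (the function name is ours, not part of any SDK) reproduces the links in the tables:

```python
from urllib.parse import quote

def policy_portal_link(definition_guid: str) -> str:
    # Build the portal deep link by percent-encoding the definition's
    # resource ID (slashes become %2F) into the PolicyDetailBlade URL.
    resource_id = ("/providers/Microsoft.Authorization/policyDefinitions/"
                   + definition_guid)
    return ("https://portal.azure.com/#blade/Microsoft_Azure_Policy/"
            "PolicyDetailBlade/definitionId/" + quote(resource_id, safe=""))

# GUID of 'A maximum of 3 owners should be designated for your subscription':
print(policy_portal_link("4f11b553-d42e-4e3a-89be-32ca364cad4c"))
```

The output matches the link used for that policy in the tables above.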
+
+### Trails-3.1
+
+**ID**: RBI IT Framework 3.1.g
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
+|[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) |
+|[Application Insights components should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc02227-0cb6-4e11-8f53-eb0b22eab7e8) |Improve Application Insights security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs of this component. Learn more at [https://aka.ms/AzMonPrivateLink#configure-application-insights](https://aka.ms/AzMonPrivateLink#configure-application-insights). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponents_NetworkAccessEnabled_Deny.json) |
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |
+|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
+|[Azure Defender for Resource Manager should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3d20c29-b36d-48fe-808b-99a87530ad99) |Azure Defender for Resource Manager automatically monitors the resource management operations in your organization. Azure Defender detects threats and alerts you about suspicious activity. Learn more about the capabilities of Azure Defender for Resource Manager at [https://aka.ms/defender-for-resource-manager](https://aka.ms/defender-for-resource-manager) . Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) . |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnResourceManager_Audit.json) |
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks) |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
+|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
+|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported at the region, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits any Azure Monitor log profile that does not export activities from all Azure-supported regions, including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
+|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures that a log profile is enabled for exporting activity logs. It audits whether a log profile is created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
+|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit network security groups to verify whether flow logs are configured. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit flow log resources to verify whether flow log status is enabled. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) |
+|[Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3a6ea0c-e018-4933-9ef0-5aaa1501449b) |Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVmss.json) |
+|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
+|[Log Analytics workspaces should block log ingestion and querying from public networks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6c53d030-cc64-46f0-906d-2bc061cd1334) |Improve workspace security by blocking log ingestion and querying from public networks. Only private-link connected networks will be able to ingest and query logs on this workspace. Learn more at [https://aka.ms/AzMonPrivateLink#configure-log-analytics](https://aka.ms/AzMonPrivateLink#configure-log-analytics). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_NetworkAccessEnabled_Deny.json) |
+|[Log Analytics Workspaces should block non-Azure Active Directory based ingestion.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe15effd4-2278-4c65-a0da-4d6f6d1890e2) |Enforcing log ingestion to require Azure Active Directory authentication prevents unauthenticated logs from an attacker which could lead to incorrect status, false alerts, and incorrect logs stored in the system. |Deny, Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_DisableLocalAuth_Deny.json) |
+|[Log checkpoints should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e43d) |This policy helps audit any PostgreSQL databases in your environment without log_checkpoints setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogCheckpoint_Audit.json) |
+|[Log connections should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e442) |This policy helps audit any PostgreSQL databases in your environment without log_connections setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogConnections_Audit.json) |
+|[Log duration should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e8f3) |This policy helps audit any PostgreSQL databases in your environment without log_duration setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDuration_Audit.json) |
+|[Logic Apps Integration Service Environment should be encrypted with customer-managed keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fafeaf6-7927-4059-a50a-8eb2a7a6f2b5) |Deploy into Integration Service Environment to manage encryption at rest of Logic Apps data using customer-managed keys. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_ISEWithCustomerManagedKey_AuditDeny.json) |
+|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_TrafficAnalytics_Audit.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
+|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
+|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
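Every row in the table above links to the definition's JSON source in the azure-policy GitHub repo. Those files share a common shape: an `effect` parameter whose `allowedValues` correspond to the Effect(s) column, and a `policyRule` with an `if` condition and a `then` effect. The following is a minimal illustrative sketch of that shape, not a copy of any one definition; the resource types and values are placeholders:

```json
{
  "properties": {
    "displayName": "Example: extension should be installed on virtual machines",
    "policyType": "BuiltIn",
    "mode": "Indexed",
    "parameters": {
      "effect": {
        "type": "String",
        "allowedValues": [ "AuditIfNotExists", "Disabled" ],
        "defaultValue": "AuditIfNotExists"
      }
    },
    "policyRule": {
      "if": {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      "then": {
        "effect": "[parameters('effect')]",
        "details": {
          "type": "Microsoft.Compute/virtualMachines/extensions"
        }
      }
    }
  }
}
```

Because the effect is parameterized, assigning a policy with `Disabled` turns off evaluation without removing the assignment.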
+
+### Public Key Infrastructure (PKI)-3.1
+
+**ID**: RBI IT Framework 3.1.h
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[\[Preview\]: Private endpoint should be configured for Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) |Private link provides a way to connect Key Vault to your Azure resources without sending traffic over the public internet. Private link provides defense in depth protection against data exfiltration. |Audit, Deny, Disabled |[1.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultPrivateEndpointEnabled_Audit.json) |
+|[App Configuration should use a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F967a4b4b-2da9-43c1-b7d0-f98d0d74d0b1) |Customer-managed keys provide enhanced data protection by allowing you to manage your encryption keys. This is often required to meet compliance requirements. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/CustomerManagedKey_Audit.json) |
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[App Service Environment should have internal encryption enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb74e86f-d351-4b8d-b034-93da7391c01f) |Setting InternalEncryption to true encrypts the pagefile, worker disks, and internal network traffic between the front ends and workers in an App Service Environment. To learn more, refer to [https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-custom-settings#enable-internal-encryption](https://docs.microsoft.com/azure/app-service/environment/app-service-app-service-environment-custom-settings#enable-internal-encryption). |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_HostingEnvironment_InternalEncryption_Audit.json) |
+|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
+|[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) |
+|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. You can then configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](https://docs.microsoft.com/azure/key-vault/general/network-security) |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
+|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported at the region, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) |
+|[Disk encryption should be enabled on Azure Data Explorer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff4b53539-8df9-40e4-86c6-6b607703bd4e) |Enabling disk encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Data%20Explorer/ADX_disk_encrypted.json) |
+|[Enforce SSL connection should be enabled for MySQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe802a67a-daf5-4436-9ea6-f6d821dd0c5d) |Azure Database for MySQL supports connecting your Azure Database for MySQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableSSL_Audit.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Function apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6d555dd1-86f2-4f1c-8ed7-5abae7c6cbab) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceFunctionApp_AuditHTTP_Audit.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Upgrade to the latest TLS version. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Infrastructure encryption should be enabled for Azure Database for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a58212a-c829-4f13-9872-6371df2fd0b4) |Enable infrastructure encryption for Azure Database for MySQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_InfrastructureEncryption_Audit.json) |
+|[Infrastructure encryption should be enabled for Azure Database for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F24fba194-95d6-48c0-aea7-f65bf859c598) |Enable infrastructure encryption for Azure Database for PostgreSQL servers to have a higher level of assurance that the data is secure. When infrastructure encryption is enabled, the data at rest is encrypted twice using FIPS 140-2 compliant Microsoft managed keys. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_InfrastructureEncryption_Audit.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Key vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) |Deleting a key vault without soft delete enabled permanently deletes all secrets, keys, and certificates stored in the key vault. Accidental deletion of a key vault can lead to permanent data loss. Soft delete allows you to recover an accidentally deleted key vault for a configurable retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_SoftDeleteMustBeEnabled_Audit.json) |
+|[Managed disks should use a specific set of disk encryption sets for the customer-managed key encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd461a302-a187-421a-89ac-84acdb4edc04) |Requiring a specific set of disk encryption sets to be used with managed disks gives you control over the keys used for encryption at rest. You are able to select the allowed encryption sets and all others are rejected when attached to a disk. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ManagedDiskEncryptionSetsAllowed_Deny.json) |
+|[MySQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83cef61d-dbd1-4b20-a4fc-5fbc7da10833) |Use customer-managed keys to manage the encryption at rest of your MySQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnableByok_Audit.json) |
+|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
+|[Storage account encryption scopes should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb5ec538c-daa0-4006-8596-35468b9148e8) |Use customer-managed keys to manage the encryption at rest of your storage account encryption scopes. Customer-managed keys enable the data to be encrypted with an Azure key-vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more about storage account encryption scopes at [https://aka.ms/encryption-scopes-overview](https://aka.ms/encryption-scopes-overview). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_EncryptionScopesShouldUseCMK_Audit.json) |
+|[Storage account encryption scopes should use double encryption for data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbfecdea6-31c4-4045-ad42-71b9dc87247d) |Enable infrastructure encryption for encryption at rest of your storage account encryption scopes for added security. Infrastructure encryption ensures that your data is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageEncryptionScopesShouldUseDoubleEncryption_Audit.json) |
+|[Storage accounts should have infrastructure encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4733ea7b-a883-42fe-8cac-97454c2a9e4a) |Enable infrastructure encryption for higher level of assurance that the data is secure. When infrastructure encryption is enabled, data in a storage account is encrypted twice. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountInfrastructureEncryptionEnabled_Audit.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data-at-rest and meet compliance requirements |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
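+
+The effect listed for each definition (for example **Audit**, **Deny**, or **Disabled**) is chosen when the built-in definition is assigned to a scope. As a sketch only (the assignment name and scope below are placeholders, and the `effect` parameter name should be verified against the definition JSON linked in the table), the key-expiration policy from this table could be assigned with the **Deny** effect through the Azure CLI:
+
+```azurecli
+# Hypothetical example: assign the built-in definition
+# "Key Vault keys should have an expiration date" (152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0)
+# with the Deny effect. Replace the scope with your own subscription and resource group.
+az policy assignment create \
+  --name 'deny-keys-without-expiry' \
+  --display-name 'Key Vault keys should have an expiration date' \
+  --policy '152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0' \
+  --scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' \
+  --params '{ "effect": { "value": "Deny" } }'
+```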
+
+### Vulnerability Management-3.3
+
+**ID**: RBI IT Framework 3.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Kubernetes Services should be upgraded to a non-vulnerable Kubernetes version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffb893a29-21bb-418c-a157-e99480ec364c) |Upgrade your Kubernetes service cluster to a later Kubernetes version to protect against known vulnerabilities in your current Kubernetes version. Vulnerability CVE-2019-9946 has been patched in Kubernetes versions 1.11.9+, 1.12.7+, 1.13.5+, and 1.14.0+ |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UpgradeVersion_KubernetesService_Audit.json) |
+|[Running container images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fc39691-5a3f-4e3e-94ee-2e6447309ad9) |Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_KuberenetesRuningImagesVulnerabilityAssessment_Audit.json) |
+|[SQL databases should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffeedbf84-6b99-488c-acc2-71c829aa5ffc) |Monitor vulnerability assessment scan results and recommendations for how to remediate database vulnerabilities. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_SQLDbVulnerabilities_Audit.json) |
+|[SQL servers on machines should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6ba6d016-e7c3-4842-b8f2-4992ebc0d72d) |SQL vulnerability assessment scans your database for security vulnerabilities, and exposes any deviations from best practices such as misconfigurations, excessive permissions, and unprotected sensitive data. Resolving the vulnerabilities found can greatly improve your database security posture. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSQLVulnerabilityAssessment_Audit.json) |
+|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
+|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) |
+|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) |
+|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) |
+|[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
+|[Vulnerability assessment should be enabled on your Synapse workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0049a6b3-a662-4f3e-8635-39cf44ace45a) |Discover, track, and remediate potential vulnerabilities by configuring recurring SQL vulnerability assessment scans on your Synapse workspaces. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/ASC_SQLVulnerabilityAssessmentOnSynapse_Audit.json) |
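+
+Compliance against the definitions in this control can be reviewed from Azure Policy's compliance data. A minimal sketch, assuming Policy compliance data has already been populated for the subscription (the definition name below is the GUID of the first row in this table):
+
+```azurecli
+# Summarize compliance state for "A vulnerability assessment solution
+# should be enabled on your virtual machines" across the current subscription.
+az policy state summarize \
+  --filter "policyDefinitionName eq '501541f7-f7e7-4cd6-868c-4190fdad3ac9'"
+```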
+
+### Digital Signatures-3.8
+
+**ID**: RBI IT Framework 3.8
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) |Manage your organizational compliance requirements by specifying the maximum amount of time that a certificate can be valid within your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.2.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_ValidityPeriod.json) |
+|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[Certificates should be issued by the specified integrated certificate authority](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e826246-c976-48f6-b03e-619bb92b3d82) |Manage your organizational compliance requirements by specifying the Azure integrated certificate authorities that can issue certificates in your key vault such as Digicert or GlobalSign. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_Issuers_SupportedCAs.json) |
+|[Certificates should use allowed key types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1151cede-290b-4ba0-8b38-0ad145ac888f) |Manage your organizational compliance requirements by restricting the key types allowed for certificates. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_AllowedKeyTypes.json) |
+|[Certificates using elliptic curve cryptography should have allowed curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd78111f-4953-4367-9fd5-7e08808b54bf) |Manage the allowed elliptic curve names for ECC Certificates stored in key vault. More information can be found at [https://aka.ms/akvpolicy](https://aka.ms/akvpolicy). |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_EC_AllowedCurveNames.json) |
+|[Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) |Manage your organizational compliance requirements by specifying a minimum key size for RSA certificates stored in your key vault. |audit, Audit, deny, Deny, disabled, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Certificates_RSA_MinimumKeySize.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
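+
+Several of the certificate definitions above audit properties such as the key type, RSA key size, and validity period of certificates in a key vault. A sketch of creating a certificate whose policy would satisfy those checks (the vault and certificate names are placeholders, and the 2048-bit / 12-month values are illustrative, not taken from these definitions' defaults):
+
+```azurecli
+# Start from the default certificate policy, then edit policy.json so that
+# "keySize" is 2048 (or higher) and "validityInMonths" is within your limit.
+az keyvault certificate get-default-policy > policy.json
+
+# Create the certificate using the adjusted policy.
+az keyvault certificate create \
+  --vault-name '<your-key-vault>' \
+  --name 'signing-cert' \
+  --policy @policy.json
+```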
+
+## IT Operations
+
+### IT Operations-4.2
+
+**ID**: RBI IT Framework 4.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
+
+### IT Operations-4.4
+
+**ID**: RBI IT Framework 4.4.a
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+
+### MIS For Top Management-4.4
+
+**ID**: RBI IT Framework 4.4.b
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
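+
+The policies in this table all use the AuditIfNotExists effect: the rule matches a resource (here, the subscription) and then checks that a related `Microsoft.Security/pricings` setting exists in the required state. As a simplified, illustrative sketch (the exact rules are in the linked GitHub definitions; the `name` value and hard-coded effect below are assumptions for the App Service case), such a policy rule looks roughly like:
+
+```json
+{
+  "if": {
+    "field": "type",
+    "equals": "Microsoft.Resources/subscriptions"
+  },
+  "then": {
+    "effect": "auditIfNotExists",
+    "details": {
+      "type": "Microsoft.Security/pricings",
+      "name": "AppServices",
+      "existenceCondition": {
+        "field": "Microsoft.Security/pricings/pricingTier",
+        "equals": "Standard"
+      }
+    }
+  }
+}
+```
+
+If no matching `Microsoft.Security/pricings` resource satisfies the existence condition, the subscription is flagged as non-compliant rather than blocked, which is why these rows list AuditIfNotExists and Disabled as the only effects.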
+
+## IS Audit
+
+### Policy for Information System Audit (IS Audit)-5
+
+**ID**: RBI IT Framework 5
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
+|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
+|[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) |
+|[Azure Web Application Firewall should be enabled for Azure Front Door entry-points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055aa869-bc98-4af8-bafc-23f1ab6ffe2c) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Enabled_Audit.json) |
+|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. Flow logs can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) |
+|[Flow logs should be enabled for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows you to log information about IP traffic flowing through a network security group. Flow logs can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) |
+|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
+|[IP firewall rules on Azure Synapse workspaces should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F56fd377d-098c-4f02-8406-81eb055902b8) |Removing all IP firewall rules improves security by ensuring your Azure Synapse workspace can only be accessed from a private endpoint. This configuration audits creation of firewall rules that allow public network access on the workspace. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Synapse/SynapseWorkspaceFirewallRules_Audit.json) |
+|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
+|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
+|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
+|[Web Application Firewall (WAF) should be enabled for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F564feb30-bf6a-4854-b4bb-0d2d2d1e6c66) |Deploy Azure Web Application Firewall (WAF) in front of public facing web applications for additional inspection of incoming traffic. Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities such as SQL injections, Cross-Site Scripting, local and remote file executions. You can also restrict access to your web applications by countries, IP address ranges, and other http(s) parameters via custom rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayEnabled_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Application Gateway](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12430be1-6cc8-4527-a9a8-e3d38f250096) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Application Gateway. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AppGatewayMode_Audit.json) |
+|[Web Application Firewall (WAF) should use the specified mode for Azure Front Door Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F425bea59-a659-4cbb-8d31-34499bd030b8) |Mandates the use of 'Detection' or 'Prevention' mode to be active on all Web Application Firewall policies for Azure Front Door Service. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/WAF_AFD_Mode_Audit.json) |
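+
+Several rows above list Audit, Deny, and Disabled as allowed effects; the choice is made when the definition is assigned to a scope. A hedged sketch of an assignment body (the display name, the `effect` parameter name, and the Deny choice are illustrative assumptions; the definition ID is the Application Gateway WAF policy from the table):
+
+```json
+{
+  "properties": {
+    "displayName": "Require WAF on Application Gateway",
+    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/564feb30-bf6a-4854-b4bb-0d2d2d1e6c66",
+    "parameters": {
+      "effect": {
+        "value": "Deny"
+      }
+    }
+  }
+}
+```
+
+With Audit, non-compliant resources are only reported; with Deny, creating or updating a non-compliant resource is rejected at deployment time.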
+
+### Coverage-5.2
+
+**ID**: RBI IT Framework 5.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+
+## Business Continuity Planning
+
+### Business Continuity Planning (BCP) and Disaster Recovery-6
+
+**ID**: RBI IT Framework 6
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines that do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
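+
+The geo-redundant backup audits in this table follow a common pattern: flag any database server whose backup storage is not geo-redundant. A simplified sketch for the PostgreSQL case (the field alias is taken from the resource type named in the table, and the effect is hard-coded here for illustration; the published definition parameterizes it):
+
+```json
+{
+  "if": {
+    "allOf": [
+      {
+        "field": "type",
+        "equals": "Microsoft.DBforPostgreSQL/servers"
+      },
+      {
+        "field": "Microsoft.DBforPostgreSQL/servers/storageProfile.geoRedundantBackup",
+        "notEquals": "Enabled"
+      }
+    ]
+  },
+  "then": {
+    "effect": "audit"
+  }
+}
+```
+
+Because geo-redundant backup can only be chosen at server creation, an Audit effect (rather than a remediation task) is the natural fit for these definitions.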
+
+### Recovery strategy / Contingency Plan-6.2
+
+**ID**: RBI IT Framework 6.2
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines that do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+
+### Recovery strategy / Contingency Plan-6.3
+
+**ID**: RBI IT Framework 6.3
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MariaDB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ec47710-77ff-4a3d-9181-6aa50af424d0) |Azure Database for MariaDB allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMariaDB_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
+|[Geo-redundant backup should be enabled for Azure Database for PostgreSQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48af4db5-9b8b-401c-8e74-076be876a430) |Azure Database for PostgreSQL allows you to choose the redundancy option for your database server. It can be set to geo-redundant backup storage, in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide a recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server creation. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForPostgreSQL_Audit.json) |
+|[Long-term geo-redundant backup should be enabled for Azure SQL Databases](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd38fc420-0735-4ef3-ac11-c806f651a570) |This policy audits any Azure SQL Database with long-term geo-redundant backup not enabled. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_SQLDatabase_AuditIfNotExists.json) |
+
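Each row's portal link in the tables above encodes the policy definition's full Azure resource ID after the `definitionId/` segment, URL-encoded (`%2F` is `/`). As a small illustration (the helper function name here is ours, not part of any Azure SDK), the ID can be recovered with Python's standard `urllib.parse`:

```python
from urllib.parse import unquote

def definition_id_from_portal_link(url: str) -> str:
    """Extract the URL-encoded policy definition resource ID from an
    Azure portal deep link of the form .../definitionId/<encoded-id>."""
    marker = "definitionId/"
    # Everything after the marker is the percent-encoded resource ID.
    encoded = url.split(marker, 1)[1]
    return unquote(encoded)

# Link taken from the "Azure Backup should be enabled for Virtual Machines" row.
link = ("https://portal.azure.com/#blade/Microsoft_Azure_Policy/"
        "PolicyDetailBlade/definitionId/"
        "%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions"
        "%2F013e242c-8828-4970-87b3-ab247555486d")
print(definition_id_from_portal_link(link))
# /providers/Microsoft.Authorization/policyDefinitions/013e242c-8828-4970-87b3-ab247555486d
```

The trailing GUID is the built-in definition's name, which is also what the definition's GitHub source file (linked in the Version column) is registered under.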
+### Recovery strategy / Contingency Plan-6.4
+
+**ID**: RBI IT Framework 6.4
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Azure Recovery Services vaults should use customer-managed keys for encrypting backup data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2e94d99a-8a36-4563-bc77-810d8893b671) |Use customer-managed keys to manage the encryption at rest of your backup data. By default, customer data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/AB-CmkEncryption](https://aka.ms/AB-CmkEncryption). |Audit, Deny, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/AzBackupRSVault_CMKEnabled_Audit.json) |
+|[\[Preview\]: Azure Recovery Services vaults should use private link for backup](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdeeddb44-9f94-4903-9fa0-081d524406e3) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/AB-PrivateEndpoints](https://aka.ms/AB-PrivateEndpoints). |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/RecoveryServices_PrivateEndpoint_Audit.json) |
+|[\[Preview\]: Recovery Services vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F11e3da8c-1d68-4392-badd-0ff3c43ab5b0) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Recovery Services vaults, data leakage risks are reduced. Learn more about private links for Azure Site Recovery at: [https://aka.ms/HybridScenarios-PrivateLink](https://aka.ms/HybridScenarios-PrivateLink) and [https://aka.ms/AzureToAzure-PrivateLink](https://aka.ms/AzureToAzure-PrivateLink). |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Site%20Recovery/RecoveryServices_SiteRecovery_PrivateEndpoint_Audit.json) |
+|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) |
+
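The Effect(s) column lists the effects each definition supports, such as `Audit`, `AuditIfNotExists`, `Deny`, and `Disabled`. As a rough, hypothetical sketch only — not any of the actual definitions, which are linked in the Version column — an `auditIfNotExists` policy rule generally has this shape: a condition in `if` selecting the resources to evaluate, and a related resource type in `then.details` whose absence marks the resource non-compliant:

```json
{
  "policyRule": {
    "if": {
      "field": "type",
      "equals": "Microsoft.Compute/virtualMachines"
    },
    "then": {
      "effect": "auditIfNotExists",
      "details": {
        "type": "Microsoft.RecoveryServices/backupprotecteditems"
      }
    }
  }
}
```

Policies that list several effects expose the choice as an `effect` parameter on the definition, so the assignment can select, for example, `Disabled` instead of `Audit`.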
+## Next steps
+
+Additional articles about Azure Policy:
+
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) |
-|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
## Network Resilience
initiative definition.
|||||
|[App Configuration should use a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F967a4b4b-2da9-43c1-b7d0-f98d0d74d0b1) |Customer-managed keys provide enhanced data protection by allowing you to manage your encryption keys. This is often required to meet compliance requirements. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/CustomerManagedKey_Audit.json) |
|[Azure Container Instance container group should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0aa61e00-0a01-4a3c-9945-e93cffedf0e6) |Secure your containers with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled, Deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Instance/ContainerInstance_CMK_Audit.json) |
-|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to your data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
|[Container registries should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b9159ae-1701-4a6f-9a7a-aa9c8ddd0580) |Use customer-managed keys to manage the encryption at rest of the contents of your registries. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/acr/CMK](https://aka.ms/acr/CMK). |Audit, Deny, Disabled |[1.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_CMKEncryptionEnabled_Audit.json) |
|[Event Hub namespaces should use a customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1ad735a-e96f-45d2-a7b2-9a4932cab7ec) |Azure Event Hubs supports the option of encrypting data at rest with either Microsoft-managed keys (default) or customer-managed keys. Choosing to encrypt data using customer-managed keys enables you to assign, rotate, disable, and revoke access to the keys that Event Hub will use to encrypt data in your namespace. Note that Event Hub only supports encryption with customer-managed keys for namespaces in dedicated clusters. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_CustomerManagedKeyEnabled_Audit.json) |
|[Geo-redundant backup should be enabled for Azure Database for MySQL](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82339799-d096-41ae-8538-b108becf0970) |Azure Database for MySQL allows you to choose the redundancy option for your database server. It can be set to a geo-redundant backup storage in which the data is not only stored within the region in which your server is hosted, but is also replicated to a paired region to provide recovery option in case of a region failure. Configuring geo-redundant storage for backup is only allowed during server create. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/GeoRedundant_DBForMySQL_Audit.json) |
initiative definition.
|[Managed disks should use a specific set of disk encryption sets for the customer-managed key encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd461a302-a187-421a-89ac-84acdb4edc04) |Requiring a specific set of disk encryption sets to be used with managed disks give you control over the keys used for encryption at rest. You are able to select the allowed encrypted sets and all others are rejected when attached to a disk. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ManagedDiskEncryptionSetsAllowed_Deny.json) |
|[OS and data disks should be encrypted with a customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F702dd420-7fcc-42c5-afe8-4026edd20fe0) |Use customer-managed keys to manage the encryption at rest of the contents of your managed disks. By default, the data is encrypted at rest with platform-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/disks-cmk](https://aka.ms/disks-cmk). |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/OSAndDataDiskCMKRequired_Deny.json) |
|[PostgreSQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18adea5e-f416-4d0f-8aa8-d24321e3e274) |Use customer-managed keys to manage the encryption at rest of your PostgreSQL servers. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableByok_Audit.json) |
-|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
+|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link storage account to Log Analytics workspace to protect saved-queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your saved-queries in Azure Monitor. For more details on the above, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) |
|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
initiative definition.
|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
|[Audit diagnostic setting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7f89b1eb-583c-429a-8828-af049802c1d9) |Audit diagnostic setting for selected resource types |AuditIfNotExists |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/DiagnosticSettingsForTypes_Audit.json) |
|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) |
-|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
+|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) |
|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) |
|[Azure Monitor solution 'Security and Audit' must be deployed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3e596b57-105f-48a6-be97-03e9243bad6e) |This policy ensures that Security and Audit is deployed. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Security_Audit_MustBeDeployed.json) |
|[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy ensures if a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) |
initiative definition.
|[Deploy Diagnostic Settings for Stream Analytics to Event Hub](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fedf3780c-3d70-40fe-b17e-ab72013dafca) |Deploys the diagnostic settings for Stream Analytics to stream to a regional Event Hub when any Stream Analytics which is missing this diagnostic settings is created or updated. |DeployIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/StreamAnalytics_DeployDiagnosticLog_Deploy_EventHub.json) |
|[Deploy Diagnostic Settings for Stream Analytics to Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F237e0f7e-b0e8-4ec4-ad46-8c12cb66d673) |Deploys the diagnostic settings for Stream Analytics to stream to a regional Log Analytics workspace when any Stream Analytics which is missing this diagnostic settings is created or updated. |DeployIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/StreamAnalytics_DeployDiagnosticLog_Deploy_LogAnalytics.json) |
|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) |
-|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](https://docs.microsoft.com/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Key Vault Managed HSM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) |To recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised, you may want to audit by enabling resource logs on Managed HSMs. Please follow the instructions here: [https://docs.microsoft.com/azure/key-vault/managed-hsm/logging](/azure/key-vault/managed-hsm/logging). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) |
|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) |
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Azure Key Vault Managed HSM should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) |Malicious deletion of an Azure Key Vault Managed HSM can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge Azure Key Vault Managed HSM. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted Azure Key Vault Managed HSM. No one inside your organization or Microsoft will be able to purge your Azure Key Vault Managed HSM during the soft delete retention period. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/ManagedHsm_Recoverable_Audit.json) |
-|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
+|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](/azure/azure-monitor/platform/customer-managed-keys). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) |
|[Configure App Configuration to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73290fa2-dfa7-4bbb-945d-a5e23b75df2c) |Disable public network access for App Configuration so that it isn't accessible over the public internet. This configuration helps protect them against data leakage risks. You can limit exposure of the your resources by creating private endpoints instead. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_PublicNetworkAccess_Modify.json) | |[Configure Azure SQL Server to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F28b0b1e5-17ba-4963-a7a4-5a1ab4400a0b) |Disabling the public network access property shuts down public connectivity such that Azure SQL Server can only be accessed from a private endpoint. This configuration disables the public network access for all databases under the Azure SQL Server. |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Modify.json) | |[Configure Container registries to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3701552-92ea-433e-9d17-33b7f1208fc9) |Disable public network access for your Container Registry resource so that it's not accessible over the public internet. This can reduce data leakage risks. 
Learn more at [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link). |Modify, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PublicNetworkAccess_Modify.json) |
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 07/26/2022 Last updated : 08/01/2022
hdinsight Hdinsight Hadoop Oms Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial.md
description: Learn how to use Azure Monitor logs to monitor jobs running in an H
Previously updated : 09/21/2021 Last updated : 08/01/2022 # Use Azure Monitor logs to monitor HDInsight clusters
az hdinsight monitor disable --name $cluster --resource-group $resourceGroup
``` ## <a name="oms-with-firewall"></a>Prerequisites for clusters behind a firewall
-To be able to successfully setup Azure Monitor integration with HDInsight, behind a firewall, some customers may need to enable the following endpoints:
+To successfully set up Azure Monitor integration with HDInsight behind a firewall, some customers may need to enable the following endpoints:
|Agent Resource | Ports | Direction | Bypass HTTPS inspection | |||||
Once the setup is successful, enabling necessary endpoints for data ingestion is
## Install HDInsight cluster management solutions
-HDInsight provides cluster-specific management solutions that you can add for Azure Monitor logs. [Management solutions](../azure-monitor/insights/solutions.md) add functionality to Azure Monitor logs, providing more data and analysis tools. These solutions collect important performance metrics from your HDInsight clusters. And provide the tools to search the metrics. These solutions also provide visualizations and dashboards for most cluster types supported in HDInsight. By using the metrics that you collect with the solution, you can create custom monitoring rules and alerts.
+HDInsight provides cluster-specific management solutions that you can add for Azure Monitor Logs. [Management solutions](../azure-monitor/insights/solutions.md) add functionality to Azure Monitor Logs, providing more data and analysis tools. These solutions collect important performance metrics from your HDInsight clusters and provide the tools to search those metrics. These solutions also provide visualizations and dashboards for most cluster types supported in HDInsight. By using the metrics that you collect with the solution, you can create custom monitoring rules and alerts.
Available HDInsight solutions:
If you have Azure Monitor Integration enabled on a cluster, updating the OMS age
``` ## Next steps
+* [Selective logging analysis](selective-logging-analysis.md)
* [Query Azure Monitor logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md) * [How to monitor cluster availability with Apache Ambari and Azure Monitor logs](./hdinsight-cluster-availability.md)
hdinsight Hdinsight Hadoop Provision Linux Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-provision-linux-clusters.md
Title: Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more
-description: Set up Hadoop, Kafka, Spark, HBase, or Storm clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK.
+description: Set up Hadoop, Kafka, Spark, or HBase clusters for HDInsight from a browser, the Azure classic CLI, Azure PowerShell, REST, or SDK.
Previously updated : 03/30/2022 Last updated : 07/22/2022 # Set up clusters in HDInsight with Apache Hadoop, Apache Spark, Apache Kafka, and more [!INCLUDE [selector](includes/hdinsight-create-linux-cluster-selector.md)]
-Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Interactive Query, Apache HBase, or Apache Storm in HDInsight. Also, learn how to customize clusters and add security by joining them to a domain.
+Learn how to set up and configure Apache Hadoop, Apache Spark, Apache Kafka, Interactive Query, or Apache HBase in HDInsight. Also, learn how to customize clusters and add security by joining them to a domain.
A Hadoop cluster consists of several virtual machines (nodes) that are used for distributed processing of tasks. Azure HDInsight handles implementation details of installation and configuration of individual nodes, so you only have to provide general configuration information.
-> [!IMPORTANT]
+> [!IMPORTANT]
> HDInsight cluster billing starts once a cluster is created and stops when the cluster is deleted. Billing is pro-rated per minute, so you should always delete your cluster when it is no longer in use. Learn how to [delete a cluster.](hdinsight-delete-cluster.md) If you're using multiple clusters together, you'll want to create a virtual network, and if you're using a Spark cluster you'll also want to use the Hive Warehouse Connector. For more information, see [Plan a virtual network for Azure HDInsight](./hdinsight-plan-virtual-network-deployment.md) and [Integrate Apache Spark and Apache Hive with the Hive Warehouse Connector](interactive-query/apache-hive-warehouse-connector.md).
You don't need to specify the cluster location explicitly: The cluster is in the
Azure HDInsight currently provides the following cluster types, each with a set of components to provide certain functionalities.
-> [!IMPORTANT]
-> HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such as Storm and HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
+> [!IMPORTANT]
+> HDInsight clusters are available in various types, each for a single workload or technology. There is no supported method to create a cluster that combines multiple types, such as HBase on one cluster. If your solution requires technologies that are spread across multiple HDInsight cluster types, an [Azure virtual network](../virtual-network/index.yml) can connect the required cluster types.
| Cluster type | Functionality | | | |
Azure HDInsight currently provides the following cluster types, each with a set
| [Interactive Query](./interactive-query/apache-interactive-query-get-started.md) |In-memory caching for interactive and faster Hive queries | | [Kafka](kafk) | A distributed streaming platform that can be used to build real-time streaming data pipelines and applications | | [Spark](spark/apache-spark-overview.md) |In-memory processing, interactive queries, micro-batch stream processing |
-| [Storm](storm/apache-storm-overview.md) |Real-time event processing |
#### Version
With HDInsight clusters, you can configure two user accounts during cluster crea
The HTTP username has the following restrictions: * Allowed special characters: `_` and `@`
-* Characters not allowed: #;."',\/:`!*?$(){}[]<>|&--=+%~^space
+* Characters not allowed: #;."',/:`!*?$(){}[]<>|&--=+%~^space
* Max length: 20 The SSH username has the following restrictions: * Allowed special characters:`_` and `@`
-* Characters not allowed: #;."',\/:`!*?$(){}[]<>|&--=+%~^space
+* Characters not allowed: #;."',/:`!*?$(){}[]<>|&--=+%~^space
* Max length: 64
-* Reserved names: hadoop, users, oozie, hive, mapred, ambari-qa, zookeeper, tez, hdfs, sqoop, yarn, hcat, ams, hbase, storm, administrator, admin, user, user1, test, user2, test1, user3, admin1, 1, 123, a, actuser, adm, admin2, aspnet, backup, console, david, guest, john, owner, root, server, sql, support, support_388945a0, sys, test2, test3, user4, user5, spark
+* Reserved names: hadoop, users, oozie, hive, mapred, ambari-qa, zookeeper, tez, hdfs, sqoop, yarn, hcat, ams, hbase, administrator, admin, user, user1, test, user2, test1, user3, admin1, 1, 123, a, actuser, adm, admin2, aspnet, backup, console, david, guest, john, owner, root, server, sql, support, support_388945a0, sys, test2, test3, user4, user5, spark
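The restrictions above can be expressed as a small validation sketch. This is an illustrative interpretation only (the disallowed set and reserved-name list are abridged from the lists above, and `--` is read as a forbidden sequence), not an official validator:

```python
# Illustrative only: abridged from the documented username restrictions above.
DISALLOWED = set('#;."\',\\/:`!*?$(){}[]<>|&=+%~^ ')
RESERVED = {"hadoop", "users", "oozie", "hive", "admin", "root", "spark"}  # subset of the full list

def is_valid_username(name: str, max_len: int) -> bool:
    """Check length, disallowed characters, the '--' sequence,
    and (for SSH usernames) reserved names."""
    if not name or len(name) > max_len:
        return False
    if name.lower() in RESERVED:
        return False
    if "--" in name:  # double hyphen is rejected; single hyphens pass
        return False
    return not any(ch in DISALLOWED for ch in name)

print(is_valid_username("sshuser_1@contoso", 20))  # True
print(is_valid_username("bad:name", 20))           # False
print(is_valid_username("hadoop", 64))             # False (reserved)
```

Pass `max_len=20` for HTTP usernames and `max_len=64` for SSH usernames, per the limits above.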
## Storage
HDInsight clusters can use the following storage options:
For more information on storage options with HDInsight, see [Compare storage options for use with Azure HDInsight clusters](hdinsight-hadoop-compare-storage-options.md).
-> [!WARNING]
+> [!WARNING]
> Using an additional storage account in a different location from the HDInsight cluster is not supported. During configuration, for the default storage endpoint you specify a blob container of an Azure Storage account or Data Lake Storage. The default storage contains application and system logs. Optionally, you can specify additional linked Azure Storage accounts and Data Lake Storage accounts that the cluster can access. The HDInsight cluster and the dependent storage accounts must be in the same Azure location.
During configuration, for the default storage endpoint you specify a blob contai
> [!IMPORTANT] > Enabling secure storage transfer after creating a cluster can result in errors using your storage account and is not recommended. It is better to create a new cluster using a storage account with secure transfer already enabled.
-> [!Note]
+> [!Note]
> Azure HDInsight does not automatically transfer, move or copy your data stored in Azure Storage from one region to another. ### Metastore settings
You can create optional Hive or Apache Oozie metastores. However, not all cluste
For more information, see [Use external metadata stores in Azure HDInsight](./hdinsight-use-external-metadata-stores.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> When you create a custom metastore, don't use dashes, hyphens, or spaces in the database name. This can cause the cluster creation process to fail. #### SQL database for Hive
To increase performance when using Oozie, use a custom metastore. A metastore ca
Ambari is used to monitor HDInsight clusters, make configuration changes, and store cluster management information as well as job history. The custom Ambari DB feature allows you to deploy a new cluster and setup Ambari in an external database that you manage. For more information, see [Custom Ambari DB](./hdinsight-custom-ambari-db.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> You cannot reuse a custom Oozie metastore. To use a custom Oozie metastore, you must provide an empty Azure SQL Database when creating the HDInsight cluster. ## Security + networking
For more information, see [Managed identities in Azure HDInsight](./hdinsight-ma
## Configuration + pricing You're billed for node usage for as long as the cluster exists. Billing starts when a cluster is created and stops when the cluster is deleted. Clusters can't be de-allocated or put on hold.
Each cluster type has its own number of nodes, terminology for nodes, and defaul
| | | | | Hadoop |Head node (2), Worker node (1+) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-hadoop-cluster-type-nodes.png" alt-text="HDInsight Hadoop cluster nodes" border="false"::: | | HBase |Head server (2), region server (1+), master/ZooKeeper node (3) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-hbase-cluster-type-setup.png" alt-text="HDInsight HBase cluster type setup" border="false"::: |
-| Storm |Nimbus node (2), supervisor server (1+), ZooKeeper node (3) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-storm-cluster-type-setup.png" alt-text="HDInsight storm cluster type setup" border="false"::: |
| Spark |Head node (2), Worker node (1+), ZooKeeper node (3) (free for A1 ZooKeeper VM size) |:::image type="content" source="./media/hdinsight-hadoop-provision-linux-clusters/hdinsight-spark-cluster-type-setup.png" alt-text="HDInsight spark cluster type setup" border="false"::: | For more information, see [Default node configuration and virtual machine sizes for clusters](hdinsight-supported-node-configuration.md) in "What are the Hadoop components and versions in HDInsight?"
For more information, see [Default node configuration and virtual machine sizes
The cost of HDInsight clusters is determined by the number of nodes and the virtual machines sizes for the nodes. Different cluster types have different node types, numbers of nodes, and node sizes:+ * Hadoop cluster type default:
- * Two *head nodes*
- * Four *Worker nodes*
-* Storm cluster type default:
- * Two *Nimbus nodes*
- * Three *ZooKeeper nodes*
- * Four *supervisor nodes*
+  * Two *head nodes*
+  * Four *Worker nodes*
If you're just trying out HDInsight, we recommend you use one Worker node. For more information about HDInsight pricing, see [HDInsight pricing](https://go.microsoft.com/fwLink/?LinkID=282635&clcid=0x409).
-> [!NOTE]
+> [!NOTE]
> The cluster size limit varies among Azure subscriptions. Contact [Azure billing support](../azure-portal/supportability/how-to-create-azure-support-request.md) to increase the limit. When you use the Azure portal to configure the cluster, the node size is available through the **Configuration + pricing** tab. In the portal, you can also see the cost associated with the different node sizes.
When you deploy clusters, choose compute resources based on the solution you pla
To find out what value you should use to specify a VM size while creating a cluster using the different SDKs or while using Azure PowerShell, see [VM sizes to use for HDInsight clusters](../cloud-services/cloud-services-sizes-specs.md#size-tables). From this linked article, use the value in the **Size** column of the tables.
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you need more than 32 Worker nodes in a cluster, you must select a head node size with at least 8 cores and 14 GB of RAM. For more information, see [Sizes for virtual machines](../virtual-machines/sizes.md). For information about pricing of the various sizes, see [HDInsight pricing](https://azure.microsoft.com/pricing/details/hdinsight).
+### Disk attachment
+
+On each of the **NodeManager** machines, **LocalResources** are ultimately localized in the target directories.
+
+By default, only the default disk is added as the local disk in NodeManager. For large applications, this disk space may not be enough, which can result in job failures.
+
+If the cluster is expected to run large data applications, you can choose to add extra disks to the **NodeManager**.
+
+You can add a number of disks per VM, and each disk is 1 TB in size.
+
+1. Go to the **Configuration + pricing** tab.
+1. Select the **Enable managed disk** option.
+1. Under **Standard disks**, enter the **Number of disks**.
+1. Choose your **Worker node**.
+
+You can verify the number of disks from the **Review + create** tab, under **Cluster configuration**.
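As a quick sanity check on the sizing arithmetic (assuming the 1 TB-per-disk figure above), the extra NodeManager local storage is simply worker nodes × disks per node:

```python
def extra_local_storage_tb(worker_nodes: int, disks_per_node: int,
                           disk_size_tb: int = 1) -> int:
    """Total additional local storage (TB) added across the worker nodes."""
    return worker_nodes * disks_per_node * disk_size_tb

# For example, 4 worker nodes with 2 extra 1 TB disks each:
print(extra_local_storage_tb(4, 2))  # 8
```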
+ ### Add application
-An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft, third parties, or that you develop yourself. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
+An HDInsight application is an application that users can install on a Linux-based HDInsight cluster. You can use applications provided by Microsoft, by third parties, or that you develop yourself. For more information, see [Install third-party Apache Hadoop applications on Azure HDInsight](hdinsight-apps-install-applications.md).
Most of the HDInsight applications are installed on an empty edge node. An empty edge node is a Linux virtual machine with the same client tools installed and configured as in the head node. You can use the edge node for accessing the cluster, testing your client applications, and hosting your client applications. For more information, see [Use empty edge nodes in HDInsight](hdinsight-apps-use-edge-node.md).
You can install additional components or customize cluster configuration by usin
Some native Java components, like Apache Mahout and Cascading, can be run on the cluster as Java Archive (JAR) files. These JAR files can be distributed to Azure Storage and submitted to HDInsight clusters with Hadoop job submission mechanisms. For more information, see [Submit Apache Hadoop jobs programmatically](hadoop/submit-apache-hadoop-jobs-programmatically.md).
-> [!NOTE]
+> [!NOTE]
> If you have issues deploying JAR files to HDInsight clusters, or calling JAR files on HDInsight clusters, contact [Microsoft Support](https://azure.microsoft.com/support/options/).
->
+>
> Cascading is not supported by HDInsight and is not eligible for Microsoft Support. For lists of supported components, see [What's new in the cluster versions provided by HDInsight](hdinsight-component-versioning.md). Sometimes, you want to configure the following configuration files during the creation process:
-* clusterIdentity.xml
-* core-site.xml
-* gateway.xml
-* hbase-env.xml
-* hbase-site.xml
-* hdfs-site.xml
-* hive-env.xml
-* hive-site.xml
-* mapred-site
-* oozie-site.xml
-* oozie-env.xml
-* storm-site.xml
-* tez-site.xml
-* webhcat-site.xml
-* yarn-site.xml
+ * clusterIdentity.xml
+ * core-site.xml
+ * gateway.xml
+ * hbase-env.xml
+ * hbase-site.xml
+ * hdfs-site.xml
+ * hive-env.xml
+ * hive-site.xml
+ * mapred-site
+ * oozie-site.xml
+ * oozie-env.xml
+ * tez-site.xml
+ * webhcat-site.xml
+ * yarn-site.xml
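Overrides to these files are usually expressed as a nested map of file name to property settings. The sketch below only illustrates that shape; the property names are hypothetical examples, and the exact accepted format is defined in the bootstrap customization article:

```python
import json

# Hypothetical overrides; the property names are examples only.
configurations = {
    "core-site": {"fs.trash.interval": "60"},
    "hive-site": {"hive.metastore.client.socket.timeout": "90s"},
}

# Serialize for use in a cluster-creation request or template.
payload = json.dumps(configurations, sort_keys=True)
print(payload)
```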
For more information, see [Customize HDInsight clusters using Bootstrap](hdinsight-hadoop-customize-cluster-bootstrap.md).
hdinsight Log Analytics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/log-analytics-migration.md
Considering customer feedback, the Azure HDInsight team invested in integration
> [!NOTE]
-> New Azure Montitor integration is in Public Preview. It is only available in East US and West Europe regions.
+> New Azure Monitor integration is in Public Preview across all regions where HDInsight is available.
## Benefits of the new Azure Monitor integration
This document outlines the changes to the Azure Monitor integration and provides
**Redesigned schemas**: The schema formatting for the new Azure Monitor integration is better organized and easy to understand. There are two-thirds fewer schemas, removing as much of the ambiguity in the legacy schemas as possible.
-**Selective Logging (releasing soon)**: There are logs and metrics available through Log Analytics. To help you save on monitoring costs, we'll be releasing a new selective logging feature. Use this feature to turn on and off different logs and metric sources. With this feature, you'll only have to pay for what you use.
+**Selective Logging**: There are logs and metrics available through Log Analytics. To help you save on monitoring costs, you can use the selective logging feature to turn different logs and metric sources on and off. With this feature, you only pay for what you use. For more details, see [Selective logging analysis](selective-logging-analysis.md).
**Logs cluster portal integration**: The **Logs** pane is new to the HDInsight Cluster portal. Anyone with access to the cluster can go to this pane to query any table that the cluster resource sends records to. Users don't need access to the Log Analytics workspace anymore to see the records for a specific cluster resource.
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
+
+ Title: Use selective logging feature with script action in Azure HDInsight clusters
+description: Learn how to use the selective logging feature with a script action to monitor logs.
+ Last updated : 07/31/2022
+# Learn how to use the selective logging feature with script action in Azure HDInsight
+
+[Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) is an Azure Monitor service that monitors your cloud and on-premises environments to maintain their availability and performance. It collects data generated by resources in your cloud and on-premises environments and from other monitoring tools, and provides analysis across multiple sources. In HDInsight, you enable the selective logging feature by using a script action in the Azure portal.
+
+## About selective logging
+
+Selective logging is part of the overall Azure monitoring system. You can connect your cluster to a Log Analytics workspace. Once enabled, you can see logs and metrics such as HDInsight security logs, YARN Resource Manager, and system metrics. You can monitor workloads and see how they're affecting cluster stability.
+Selective logging allows you to disable all the tables, or enable only selected tables, in the Log Analytics workspace. You can also adjust the source type for each table, because in the new version of Geneva monitoring, one table has multiple sources.
+
+> [!NOTE]
+> If log analytics is reinstalled in a cluster, you'll have to disable all the tables/log types again, because the reinstallation resets all the configuration files to their original state.
+
+## Using script action
+
+* The Geneva monitoring system uses mdsd (MDS daemon), a monitoring agent, and Fluentd for collecting logs through a unified logging layer.
+* Selective logging uses a script action to disable/enable tables and their log types. Because it doesn't open any new ports or change any existing security settings, there are no security changes.
+* The script action runs in parallel on all specified nodes and changes the configuration files for disabling/enabling tables and their log types.
+
+## Prerequisites
+
+* A Log Analytics workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Log Analytics workspace](../azure-monitor/vm/monitor-virtual-machine.md).
+* An Azure HDInsight cluster. Currently, you can use selective logging feature with the following HDInsight cluster types:
+ * Hadoop
+ * HBase
+ * Interactive Query
+ * Spark
+
+For the instructions on how to create an HDInsight cluster, see [Get started with Azure HDInsight](hadoop/apache-hadoop-linux-tutorial-get-started.md).
+
+## Enable/disable logs using script action for multiple tables and log types
+
+1. Go to script actions in your cluster and create a new script action for disabling/enabling tables and log types.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-submit-script-action.png" alt-text="Screenshot showing select submit script action.":::
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/submit-script-action-window.png" alt-text="Screenshot showing submit script action window.":::
+
+1. In the script type, select **custom**.
+1. Name the script. For example, **Disable two tables and two sources**.
+1. The Bash script URL must be the link to the [selectiveLoggingScript.sh](https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh) script.
+1. Select all the nodes of the cluster. For example, Head node, Worker node, and Zookeeper node.
+1. Define the parameters in the parameter box. For example:
+ - Spark: `spark HDInsightSparkLogs:SparkExecutorLog --disable`
+ - Interactivehive: `interactivehive HDInsightSparkLogs:SparkExecutorLog --enable`
+ - Hadoop: `hadoop HDInsightSparkLogs:SparkExecutorLog --disable`
+ - HBase: `hbase HDInsightSparkLogs: HDInsightHBaseLogs --enable`
+
+ For more details, see the [Parameters](#parameters-syntax) section.
+
+1. Select **Create**.
+1. After a few minutes, you'll see a green tick next to your script action history, which means the script has run successfully.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/enable-table-and-log-types.png" alt-text="Screenshot showing enable table and log types.":::
+
+You'll see the desired changes in the Log Analytics workspace.
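The parameter strings in step 6 follow a simple pattern: a cluster type, one or more `Table:LogType` pairs, and an `--enable` or `--disable` flag. The helper below is a hypothetical sketch of that pattern (it is not part of selectiveLoggingScript.sh):

```python
def build_selective_logging_params(cluster_type, table_log_pairs, enable):
    """Assemble a parameter string such as
    'spark HDInsightSparkLogs:SparkExecutorLog --disable'."""
    allowed = {"spark", "interactivehive", "hadoop", "hbase"}
    if cluster_type not in allowed:
        raise ValueError(f"unsupported cluster type: {cluster_type}")
    sources = " ".join(f"{table}:{log}" for table, log in table_log_pairs)
    flag = "--enable" if enable else "--disable"
    return f"{cluster_type} {sources} {flag}"

print(build_selective_logging_params(
    "spark", [("HDInsightSparkLogs", "SparkExecutorLog")], enable=False))
# spark HDInsightSparkLogs:SparkExecutorLog --disable
```

Paste the resulting string into the script action's parameter box, as in the examples above.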
+
+## Troubleshooting
+
+### Scenario 1
+
+The script action is submitted, but no changes appear in the Log Analytics workspace.
+
+1. Go to Ambari Home and check debug information.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-dashboard-ambari-home.png" alt-text="Screenshot showing select dashboard ambari home.":::
+
+1. Select the settings button.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/ambari-dash-board.png" alt-text="Screenshot showing ambari dash board.":::
+
+1. Your latest script run appears at the top of the list.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations.png" alt-text="Screenshot showing background operations.":::
+
+1. Verify the script run status in all the nodes individually.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/background-operations-all.png" alt-text="Screenshot showing background operations all.":::
+
+1. Check that the parameter syntax from the parameter syntax section is correct.
+1. Check that the Log Analytics workspace is connected to the cluster and that Log Analytics monitoring is turned on.
+1. Check whether the script that you ran from the script action was marked as persisted.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/script-action-persists.png" alt-text="Screenshot showing script action persists.":::
+
+1. It's possible that a new node has been added to the cluster recently.
+
+   > [!NOTE]
+   > For the script to run on newly added nodes, the script must be marked as persisted.
+
+1. Make sure all the node types are selected while running the script action.
+
+ :::image type="content" source="./media/hdinsight-hadoop-oms-selective-log-analytics-tutorial/select-node-types.png" alt-text="Screenshot showing select node types.":::
+
+### Scenario 2
+
+The script action shows a **Failed** status in the script action history.
+
+1. Make sure the parameter syntax is correct, using the parameter syntax section as a reference.
+1. Check that the script link is correct.
+1. The correct link for the script is: https://hdiconfigactions.blob.core.windows.net/log-analytics-patch/selectiveLoggingScripts/selectiveLoggingScript.sh
+
+## Table names
+
+### Spark cluster
+
+Different log types (sources) inside **Spark** tables
+
+| S.no | Table Name | Log Types | Description |
+| | | | |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | AmbariAuditLog, AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 5. | HDInsightSparkLogs | Head Node: JupyterLog, LivyLog, SparkThriftDriverLog Worker Node: SparkExecutorLog, SparkDriverLog | This table contains all logs related to Spark and its related component: Livy and Jupyter. |
+| 6. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 7. | HDInsightOozieLogs | Oozie | This table contains all logs generated from the Oozie framework. |
+
+### Interactive query cluster
+
+Different log types (sources) inside **interactive query** tables
+
+| S.no | Table Name | Log Types | Description |
+| | | | |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node** : MRJobSummary, Resource Manager, TimelineServer **WorkerNode:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | Head Node: InteractiveHiveHSILog, InteractiveHiveMetastoreLog, ZeppelinLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 6. | HDInsightHiveAndLLAPmetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
+| 7. | HDInsightHiveTezAppStats | No log types | |
+| 8. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Zookeeper Node, Worker Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+
+### HBase cluster
+
+Different log types (sources) inside **HBase** tables
+
+| S.no | Table Name | Log Types | Description |
+| | | | |
+| 1. | HDInsightAmbariClusterAlerts | No other log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No other log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node** : MRJobSummary, Resource Manager, TimelineServer **WorkerNode:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Worker Node:** AuthLog **ZooKeeper Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+| 5. | HDInsightHBaseLogs | **Head Node** : HDFSGarbageCollectorLog, HDFSNameNodeLog **WorkerNode:** PhoenixServerLog, HBaseRegionServerLog, HBaseRestServerLog **Zookeeper Node:** HBaseMasterLog | This table contains logs from HBase and its related components: Phoenix and HDFS. |
+| 6. | HDInsightHBaseMetrics | No log types | This table contains JMX metrics from HBase. It contains all the same JMX metrics from the tables listed in the Old Schema column. In contrast from the old tables, each row contains one metric. |
+| 7. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+
+### Hadoop cluster
+
+Different log types (sources) inside **Hadoop** tables
+
+| S.no | Table Name | Log Types | Description |
+| | | | |
+| 1. | HDInsightAmbariClusterAlerts | No log types | This table contains Ambari Cluster Alerts from each node in the cluster (except for edge nodes). Each alert is a record in this table. |
+| 2. | HDInsightAmbariSystemMetrics | No log types | This table contains system metrics collected from Ambari. The metrics now come from each node in the cluster (except for edge nodes) instead of just the two headnodes. Each metric is now a column and each metric is reported once per record. |
+| 3. | HDInsightHadoopAndYarnLogs | **Head Node:** MRJobSummary, Resource Manager, TimelineServer **Worker Node:** NodeManager | This table contains all logs generated from the Hadoop and YARN frameworks. |
+| 4. | HDInsightHadoopAndYarnMetrics | No log types | This table contains JMX metrics from the Hadoop and YARN frameworks. It contains all the same JMX metrics as the old Custom Logs tables, plus more metrics we considered important. We added Timeline Server, Node Manager, and Job History Server metrics. It contains one metric per record. |
+| 5. | HDInsightHiveAndLLAPLogs | **Head Node:** HiveMetastoreLog, HiveServer2Log, WebHcatLog | This table contains logs generated from Hive, LLAP, and their related components: WebHCat and Zeppelin. |
+| 6. | HDInsightHiveAndLLAPMetrics | No log types | This table contains JMX metrics from the Hive and LLAP frameworks. It contains all the same JMX metrics as the old Custom Logs tables. It contains one metric per record. |
+| 7. | HDInsightSecurityLogs | **Head Node:** AmbariAuditLog, AuthLog **Zookeeper Node:** AuthLog | This table contains records from the Ambari Audit and Auth Logs. |
+
+## Parameters syntax
+
+Parameters define the cluster type, table names, source names, and the action.
+
+A parameter consists of three parts:
+
+- Cluster type
+- Tables and log types
+- Action (either `--disable` or `--enable`)
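To make the three-part structure concrete, here is a small, hypothetical Python helper (not part of the selective-logging script itself) that splits a parameter string into its cluster type, table/log-type specification, and action:

```python
def parse_parameter(parameter: str):
    """Split a selective-logging parameter string into its three parts:
    cluster type, table/log-type specification, and action."""
    tokens = parameter.split()
    cluster_type = tokens[0]   # e.g. "spark", "hbase", "interactivehive"
    action = tokens[-1]        # the action is always the last token
    if action not in ("--enable", "--disable"):
        raise ValueError("action must be --enable or --disable")
    # Everything between the first and last tokens is the table/log-type spec.
    tables_spec = " ".join(tokens[1:-1])
    return cluster_type, tables_spec, action

cluster, spec, action = parse_parameter(
    "spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable")
# cluster == "spark", action == "--disable"
```

This mirrors the structure only; the script itself is the authority on which cluster types and table names are accepted.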
+
+* Multiple tables syntax
+Rule: The tables are separated with a comma (,).
+
+For example,
+
+`spark HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --disable`
+
+`hbase HDInsightSecurityLogs, HDInsightAmbariSystemMetrics --enable`
+
+> [!NOTE]
+> The tables are separated with a comma.
+
+* Multiple source types/log types
+Rule: The source types/log types are separated with a space.
+Rule: To disable a source, write the name of the table that contains the log type, followed by a colon, and then the log type name.
+`TableName: LogTypeName`
+
+For example,
+
+in a Spark cluster, HDInsightSecurityLogs is a table that has two log types: AmbariAuditLog and AuthLog.
+To disable both log types, the correct syntax would be:
+`spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable`
+
+> [!NOTE]
+>* The source/log types are separated by a space.
+>* Table and its source types are separated by a colon.
+
+* Multiple tables and source types
+Suppose there are two tables and two source types that need to be disabled:
+
+- Spark: InteractiveHiveMetastoreLog logtype in HDInsightHiveAndLLAPLogs table
+- Hbase: InteractiveHiveHSILog logtype in HDInsightHiveAndLLAPLogs table
+- Hadoop: HDInsightHiveAndLLAPMetrics table
+- Hadoop: HDInsightHiveTezAppStats table
+
+The correct parameter syntax for such cases would be:
+
+```
+interactivehive HDInsightHiveAndLLAPLogs: InteractiveHiveMetastoreLog, HDInsightHiveAndLLAPMetrics, HDInsightHiveTezAppStats, HDInsightHiveAndLLAPLogs: InteractiveHiveHSILog --enable
+```
+
+> [!NOTE]
+>* Different tables are separated with a comma (,).
+>* Sources are denoted with a colon (:) after the table name in which they reside.
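The separator rules above (commas between tables, a colon between a table and its log types, spaces between log types) can be sketched as a small Python helper. The `build_parameter` function below is hypothetical, for illustration only:

```python
def build_parameter(cluster_type, tables, action):
    """Assemble a selective-logging parameter string.

    tables: list of (table_name, log_types) pairs, where log_types is a
    (possibly empty) list. Tables are joined with commas; a table's log
    types follow a colon and are separated by spaces.
    """
    parts = []
    for name, log_types in tables:
        if log_types:
            parts.append(f"{name}: {' '.join(log_types)}")
        else:
            parts.append(name)
    return f"{cluster_type} {', '.join(parts)} {action}"

print(build_parameter(
    "spark",
    [("HDInsightSecurityLogs", ["AmbariAuditLog", "AuthLog"])],
    "--disable"))
# spark HDInsightSecurityLogs: AmbariAuditLog AuthLog --disable
```

Generating the string programmatically avoids hand-editing mistakes with the comma and colon separators when many tables are involved.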
+
+## Next steps
+
+* [Query Azure Monitor logs to monitor HDInsight clusters](hdinsight-hadoop-oms-log-analytics-use-queries.md)
+* [How to monitor cluster availability with Apache Ambari and Azure Monitor logs](./hdinsight-cluster-availability.md)
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/overview.md
Title: What is FHIR service?
-description: The FHIR service enables rapid exchange of data through FHIR APIs. Ingest, manage, and persist Protected Health Information PHI with a managed cloud service.
+ Title: What is the FHIR service in Azure Health Data Services?
+description: The FHIR service enables rapid exchange of health data through FHIR APIs. Ingest, manage, and persist Protected Health Information (PHI) with a managed cloud service.
Previously updated : 06/06/2022 Last updated : 08/01/2022
-# What is FHIR&reg; service?
+# What is the FHIR service in Azure Health Data Services?
-FHIR service in Azure Health Data Services (hereby called the FHIR service) enables rapid exchange of data through Fast Healthcare Interoperability Resources (FHIR®) APIs, backed by a managed Platform-as-a Service (PaaS) offering in the cloud. It makes it easier for anyone working with health data to ingest, manage, and persist Protected Health Information [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html) in the cloud:
+The FHIR service in Azure Health Data Services enables rapid exchange of health data using the Fast Healthcare Interoperability Resources (FHIR®) data standard. Offered as a managed Platform-as-a-Service (PaaS) for the storage and exchange of FHIR data, the FHIR service makes it easy for anyone working with health data to securely manage Protected Health Information ([PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html)) in the cloud.
-- Managed FHIR service, provisioned in the cloud in minutes
-- Enterprise-grade, FHIR-based endpoint in Azure for data access, and storage in FHIR format
+The FHIR service offers the following:
+
+- Managed FHIR-compliant server, provisioned in the cloud in minutes
+- Enterprise-grade FHIR API endpoint for FHIR data access and storage
- High performance, low latency
- Secure management of Protected Health Information (PHI) in a compliant cloud environment
-- SMART on FHIR for mobile and web implementations
-- Control your own data at scale with role-based access control (RBAC)
-- Audit log tracking for access, creation, modification, and reads within each data store
+- SMART on FHIR for mobile and web clients
+- Controlled access to FHIR data at scale with Azure Active Directory-backed Role-Based Access Control (RBAC)
+- Audit log tracking for access, creation, modification, and reads within the FHIR service data store
-FHIR service allows you to create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud. The Azure services that power the FHIR service are designed for rapid performance no matter what size datasets you're managing.
+The FHIR service allows you to quickly create and deploy a FHIR server in just minutes to leverage the elastic scale of the cloud for ingesting, persisting, and querying FHIR data. The Azure services that power the FHIR service are designed for high performance no matter how much data you're working with.
-The FHIR API and compliant data store enable you to securely connect and interact with any system that utilizes FHIR APIs. Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
+The FHIR API provisioned in the FHIR service enables any FHIR-compliant system to securely connect and interact with FHIR data. As a PaaS offering, Microsoft takes on the operations, maintenance, update, and compliance requirements for the FHIR service so you can free up your own operational and development resources.
## Leveraging the power of your data with FHIR
-The healthcare industry is rapidly transforming health data to the emerging standard of [FHIR&reg;](https://hl7.org/fhir) (Fast Healthcare Interoperability Resources). FHIR enables a robust, extensible data model with standardized semantics and data exchange that enables all systems using FHIR to work together. Transforming your data to FHIR allows you to quickly connect existing data sources such as the electronic health record systems or research databases. FHIR also enables the rapid exchange of data in modern implementations of mobile and web development. Most importantly, FHIR can simplify data ingestion and accelerate development with analytics and machine learning tools.
+The healthcare industry is rapidly adopting [FHIR®](https://hl7.org/fhir) as the industry-wide standard for health data storage, querying, and exchange. FHIR provides a robust, extensible data model with standardized semantics that all FHIR-compliant systems can use interchangeably. With FHIR, organizations can unify disparate electronic health record systems (EHRs) and other health data repositories – allowing for all data to be persisted and exchanged in a single, universal format. With the addition of SMART on FHIR, user-facing mobile and web-based applications can securely interact with FHIR data – opening a new range of possibilities for health data access. Most of all, FHIR simplifies the process of assembling large health datasets for research – providing a path for researchers and clinicians to unlock health insights through machine learning and analytics.
### Securely manage health data in the cloud
-FHIR service allows for the exchange of data via consistent, RESTful, FHIR APIs based on the HL7 FHIR specification. Backed by a managed PaaS offering in Azure, it also provides a scalable and secure environment for the management and storage of Protected Health Information (PHI) data in the native FHIR format.
+The FHIR service in Azure Health Data Services makes FHIR data available to clients through a FHIR RESTful API – an implementation of the HL7 FHIR API specification. Provisioned as a managed PaaS offering in Azure, the FHIR service gives organizations a scalable and secure environment for the storage and exchange of Protected Health Information (PHI) in the native FHIR format.
### Free up your resources to innovate
-You could invest resources building and running your own FHIR server, but with FHIR service in Azure Health Data Services, Microsoft takes on the workload of operations, maintenance, updates and compliance requirements, allowing you to free up your own operational and development resources.
+You could invest resources building and running your own FHIR server, but with the FHIR service in Azure Health Data Services, Microsoft handles setting up the server's components, ensuring all compliance requirements are met so you can focus on building innovative solutions.
### Enable interoperability with FHIR
-Using the FHIR service enables to you connect with any system that leverages FHIR APIs for read, write, search, and other functions. It can be used as a powerful tool to consolidate, normalize, and apply machine learning with clinical data from electronic health records, clinician and patient dashboards, remote monitoring programs, or with databases outside of your system that have FHIR APIs.
+The FHIR service enables connection with any health data system or application capable of sending FHIR API requests. Coupled with other parts of the Azure ecosystem, the FHIR service forms a link between electronic health records systems (EHRs) and Azure's powerful suite of data analytics and machine learning tools – enabling organizations to build patient and provider-facing applications that harness the full power of the Microsoft cloud.
### Control Data Access at Scale
-You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.
+With the FHIR service, you control your data – at scale. The FHIR service's Role-Based Access Control (RBAC) is rooted in Azure AD identity management, which means you can grant or deny access to health data based on the roles given to individuals in your organization. These RBAC settings for the FHIR service are configurable in Azure Health Data Services at the workspace level. This simplifies system management and guarantees your organization's PHI is safe within a HIPAA and HITRUST-compliant environment.
### Secure your data
-Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. FHIR service implements a layered, in-depth defense and advanced threat protection for your data.
+As part of the Azure family of services, the FHIR service protects your organization's PHI with an unparalleled level of security. In Azure Health Data Services, your FHIR data is isolated to a unique database per FHIR service instance and protected with multi-region failover. On top of this, the FHIR service implements a layered, in-depth defense and advanced threat protection for your data – giving you peace of mind that your organization's PHI is guarded by Azure's industry-leading security.
## Applications for the FHIR service
-FHIR servers are key tools for interoperability of health data. The FHIR service is designed as an API and service that you can create, deploy, and begin using quickly. As the FHIR standard expands in healthcare, use cases will continue to grow, but some initial customer applications where FHIR service is useful are below:
+FHIR servers are essential for interoperability of health data. The FHIR service is designed as a managed FHIR server with a RESTful API for connecting to a broad range of client systems and applications. Some of the key use cases for the FHIR service are listed below:
-- **Startup/IoT and App Development:** Customers developing a patient or provider centric app (mobile or web) can leverage FHIR service as a fully managed backend service. The FHIR service provides a valuable resource in that customers can manage and exchange data in a secure cloud environment designed for health data, leverage SMART on FHIR implementation guidelines, and enable their technology to be utilized by all provider systems (for example, most EHRs have enabled FHIR read APIs).
+- **Startup App Development:** Customers developing a patient- or provider-centric app (mobile or web) can leverage FHIR service as a fully managed backend for their health data transactions. The FHIR service enables secure transfer of PHI, and with SMART on FHIR, app developers can take advantage of the robust identity management in Azure AD for authorization of FHIR RESTful API actions.
-- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another or store data in different formats. Utilizing the FHIR service as a service that sits on top of those systems allows you to standardize data in the FHIR format. This helps to enable data exchange across multiple systems with a consistent data format.
+- **Healthcare Ecosystems:** While EHRs exist as the primary 'source of truth' in many clinical settings, it isn't uncommon for providers to have multiple databases that aren't connected to one another (often because the data is stored in different formats). Utilizing the FHIR service as a conversion layer between these systems allows organizations to standardize data in the FHIR format. Ingesting and persisting in FHIR enables health data querying and exchange across multiple disparate systems.
-- **Research:** Healthcare researchers will find the FHIR standard in general and the FHIR service useful as it normalizes data around a common FHIR data model and reduces the workload for machine learning and data sharing.
-Exchange of data via the FHIR service provides audit logs and access controls that help control the flow of data and who has access to what data types.
+- **Research:** Health researchers have embraced the FHIR standard as it gives the community a shared data model and removes barriers to assembling large datasets for machine learning and analytics. With the FHIR service's data conversion and PHI de-identification capabilities, researchers can prepare HIPAA-compliant secondary-use data before sending it to Azure machine learning and analytics pipelines. The FHIR service's audit logging and alert mechanisms also play an important role in research workflows.
-## FHIR from Microsoft
+## FHIR platforms from Microsoft
FHIR capabilities from Microsoft are available in three configurations:
-* The FHIR service in Azure Health Data Services is a platform as a service (PaaS) offering in Azure that's easily provisioned in the Azure portal and managed by Microsoft. Includes the ability to provision other datasets, such as DICOM in the same workspace.
-* Azure API for FHIR - A PaaS offering in Azure, easily provisioned in the Azure portal and managed by Microsoft. This implementation only includes FHIR data and is a GA product.
-* FHIR Server for Azure – an open-source project that can be deployed into your Azure subscription, available on GitHub at https://github.com/Microsoft/fhir-server.
+* The **FHIR service** is a managed platform as a service (PaaS) that operates as part of Azure Health Data Services. In addition to the FHIR service, Azure Health Data Services includes managed services for other types of health data, such as the DICOM service for medical imaging data and the MedTech service for medical IoT data. All services (FHIR service, DICOM service, and MedTech service) can be connected and administered within an Azure Health Data Services workspace.
+* **Azure API for FHIR** is a managed FHIR server offered as a PaaS in Azure – easily provisioned in the Azure portal. Azure API for FHIR is not part of Azure Health Data Services and lacks some of the features of the FHIR service.
+* **FHIR Server for Azure**, an open-source FHIR server that can be deployed into your Azure subscription, is available on GitHub at https://github.com/Microsoft/fhir-server.
-For use cases that requires extending or customizing FHIR server or require access the underlying services – such as the database – without going through the FHIR APIs, developers should choose the open-source FHIR Server for Azure. For implementation of a turn-key, production-ready FHIR API and backend service where persisted data should only be accessed through the FHIR API, developers should choose FHIR service.
+For use cases that require customizing a FHIR server or that require access to the underlying services – such as access to the database without going through the FHIR API, developers should choose the open-source FHIR Server for Azure. For implementation of a turnkey, production-ready FHIR API with a provisioned database backend (i.e., data can only be accessed through the FHIR API - not the database directly), developers should choose the FHIR service.
## Next Steps
-To start working with the FHIR service, follow the 5-minute quickstart to deploy FHIR service.
+To start working with the FHIR service, follow the 5-minute quickstart instructions for FHIR service deployment.
>[!div class="nextstepaction"] >[Deploy FHIR service](fhir-portal-quickstart.md)
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 06/29/2022-+ # Release notes: Azure Health Data Services
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
industrial-iot Reference Command Line Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/reference-command-line-arguments.md
Title: Microsoft OPC Publisher Command-line Arguments
-description: This article provides an overview of the OPC Publisher Command-line Arguments
+ Title: Microsoft OPC Publisher command-line arguments
+description: This article provides an overview of the OPC Publisher Command-line Arguments.
Last updated 3/22/2021
-# Command-line Arguments
-
-The following command-line arguments can be used to set global settings for OPC Publisher.
-
-## OPC Publisher Command-line Arguments for Version 2.5 and below
-
-* Usage: opcpublisher.exe \<applicationname> [\<iothubconnectionstring>] [\<options>]
-
-* applicationname: the OPC UA application name to use, required
- The application name is also used to register the publisher under this name in the
- IoT Hub device registry.
-
-* iothubconnectionstring: the IoT Hub owner connectionstring, optional. Typically you specify the IoT Hub owner connectionstring only on the first start of the application. The connection string is encrypted and stored in the platform's certificate store.
-On subsequent calls, it's read from there and reused. If you specify the connectionstring on each start, the device that is created for the application in the IoT Hub device registry is removed and recreated each time.
-
-There are a couple of environment variables, which can be used to control the application:
-```
- _HUB_CS: sets the IoTHub owner connectionstring
- _GW_LOGP: sets the filename of the log file to use
- _TPC_SP: sets the path to store certificates of trusted stations
- _GW_PNFP: sets the filename of the publishing configuration file
-```
-
-> [!NOTE]
-> Command-line Arguments overrule environment variable settings.
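As an illustrative sketch only (the variable names come from the list above; the default values shown are assumptions, not documented behavior), an application could read these environment variables with fallbacks like this:

```python
import os

# Hypothetical illustration: read the OPC Publisher environment variables,
# falling back to assumed defaults when a variable is not set. Per the note
# above, command-line arguments would still override these values.
defaults = {
    "_HUB_CS": "",                               # IoT Hub owner connection string
    "_GW_LOGP": "publisher.log",                 # log file name (assumed default)
    "_TPC_SP": "./certs/trusted",                # trusted-certificate path (assumed)
    "_GW_PNFP": "/appdata/publishednodes.json",  # publishing configuration file
}
settings = {name: os.environ.get(name, default) for name, default in defaults.items()}
```

Keeping the fallbacks in one dictionary makes the override precedence (environment over defaults, command line over both) easy to audit.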
-
-```
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- Default: '/appdata/publishednodes.json'
- --tc, --telemetryconfigfile=VALUE
- the filename to configure the ingested telemetry
- Default: ''
- -s, --site=VALUE
- the site OPC Publisher is working in. If specified, this domain is appended (delimited by a ':') to the 'ApplicationURI' property when telemetry is sent to IoTHub.
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --ic, --iotcentral
- OPC Publisher sends OPC UA data in IoTCentral
- compatible format (DisplayName of a node is used
- as key, this key is the Field name in IoTCentral)
- . you need to ensure that all DisplayName's are
- unique. (Auto enables fetch display name)
- Default: False
- --sw, --sessionconnectwait=VALUE
- specify the wait time in seconds publisher is
- trying to connect to disconnected endpoints and
- starts monitoring unmonitored items
- Min: 10
- Default: 10
-
- --mq, --monitoreditemqueuecapacity=VALUE
- specify how many notifications of monitored items
- can be stored in the internal queue, if the data
- can not be sent quick enough to IoTHub
- Min: 1024
- Default: 8192
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified
- interval in seconds (need log level info).
- -1 disables remote diagnostic log and diagnostic
- output
- 0 disables diagnostic output
- Default: 0
- --ns, --noshutdown=VALUE
- same as runforever.
- Default: False
- --rf, --runforever
- OPC Publisher can not be stopped by pressing a key on
- the console, but runs forever.
- Default: False
- --lf, --logfile=VALUE
- the filename of the logfile to use.
- Default: './<hostname>-publisher.log'
- --lt, --logflushtimespan=VALUE
- the timespan in seconds when the logfile should be
- flushed.
- Default: 00:00:30 sec
- --ll, --loglevel=VALUE
- the loglevel to use (allowed: fatal, error, warn,
- info, debug, verbose).
- Default: info
- --ih, --iothubprotocol=VALUE
- the protocol to use for communication with IoTHub (allowed values: Amqp, Http1, Amqp_WebSocket_Only,
- Amqp_Tcp_Only, Mqtt, Mqtt_WebSocket_Only, Mqtt_
- Tcp_Only) or IoT EdgeHub (allowed values: Mqtt_
- Tcp_Only, Amqp_Tcp_Only).
- Default for IoTHub: Mqtt_WebSocket_Only
- Default for IoT EdgeHub: Amqp_Tcp_Only
- --ms, --iothubmessagesize=VALUE
- the max size of a message which can be sent to
- IoTHub. When telemetry of this size is available
- it is sent.
- 0 enforces immediate send when telemetry is
- available
- Min: 0
- Max: 262144
- Default: 262144
- --si, --iothubsendinterval=VALUE
- the interval in seconds when telemetry should be
- sent to IoTHub. If 0, then only the
- iothubmessagesize parameter controls when
- telemetry is sent.
- Default: '10'
- --dc, --deviceconnectionstring=VALUE
- if publisher is not able to register itself with
- IoTHub, you can create a device with name <
- applicationname> manually and pass in the
- connectionstring of this device.
- Default: none
- -c, --connectionstring=VALUE
- the IoTHub owner connectionstring.
- Default: none
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in
- seconds for the heartbeat interval setting of
- nodes without
- a heartbeat interval setting.
- Default: 0
- --sf, --skipfirstevent=VALUE
- the publisher is using this as default value for
- the skip first event setting of nodes without
- a skip first event setting.
- Default: False
- --pn, --portnum=VALUE
- the server port of the publisher OPC server
- endpoint.
- Default: 62222
- --pa, --path=VALUE
- the endpoint URL path part of the publisher OPC
- server endpoint.
- Default: '/UA/Publisher'
- --lr, --ldsreginterval=VALUE
- the LDS(-ME) registration interval in ms. If 0,
- then the registration is disabled.
- Default: 0
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/
- receive.
- Default: 131072
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA
- client in ms.
- Default: 120000
- --oi, --opcsamplinginterval=VALUE
- the publisher is using this as default value in
- milliseconds to request the servers to sample
- the nodes with this interval
- this value might be revised by the OPC UA
- servers to a supported sampling interval.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a negative value sets the sampling interval
- to the publishing interval of the subscription
- this node is on.
- 0 configures the OPC UA server to sample in
- the highest possible resolution and should be
- taken with care.
- Default: 1000
- --op, --opcpublishinginterval=VALUE
- the publisher is using this as default value in
- milliseconds for the publishing interval setting
- of the subscriptions established to the OPC UA
- servers.
- please check the OPC UA specification for
- details how this is handled by the OPC UA stack.
- a value less than or equal zero lets the
- server revise the publishing interval.
- Default: 0
- --ct, --createsessiontimeout=VALUE
- specify the timeout in seconds used when creating
- a session to an endpoint. On unsuccessful
- connection attemps a backoff up to 5 times the
- specified timeout value is used.
- Min: 1
- Default: 10
- --ki, --keepaliveinterval=VALUE
- specify the interval in seconds the publisher is
- sending keep alive messages to the OPC servers
- on the endpoints it is connected to.
- Min: 2
- Default: 2
- --kt, --keepalivethreshold=VALUE
- specify the number of keep alive packets a server
- can miss, before the session is disconneced
- Min: 1
- Default: 5
- --aa, --autoaccept
- the OPC Publisher trusts all servers it is
- establishing a connection to.
- Default: False
- --tm, --trustmyself=VALUE
- same as trustowncert.
- Default: False
- --to, --trustowncert
- the OPC Publisher certificate is put into the trusted
- certificate store automatically.
- Default: False
- --fd, --fetchdisplayname=VALUE
- same as fetchname.
- Default: False
- --fn, --fetchname
- enable to read the display name of a published
- node from the server. this increases the
- runtime.
- Default: False
- --ss, --suppressedopcstatuscodes=VALUE
- specifies the OPC UA status codes for which no
- events should be generated.
- Default: BadNoCommunication,
- BadWaitingForInitialData
- --at, --appcertstoretype=VALUE
- the own application cert store type.
- (allowed values: Directory, X509Store)
- Default: 'Directory'
- --ap, --appcertstorepath=VALUE
- the path where the own application cert should be
- stored
- Default (depends on store type):
- X509Store: 'CurrentUser\UA_MachineDefault'
- Directory: 'pki/own'
- --tp, --trustedcertstorepath=VALUE
- the path of the trusted cert store
- Default: 'pki/trusted'
- --rp, --rejectedcertstorepath=VALUE
- the path of the rejected cert store
- Default 'pki/rejected'
- --ip, --issuercertstorepath=VALUE
- the path of the trusted issuer cert store
- Default 'pki/issuer'
- --csr
- show data to create a certificate signing request
- Default 'False'
- --ab, --applicationcertbase64=VALUE
- update/set this applications certificate with the
- certificate passed in as bas64 string
- --af, --applicationcertfile=VALUE
- update/set this applications certificate with the
- certificate file specified
- --pb, --privatekeybase64=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as base64 string
- --pk, --privatekeyfile=VALUE
- initial provisioning of the application
- certificate (with a PEM or PFX fomat) requires a
- private key passed in as file
- --cp, --certpassword=VALUE
- the optional password for the PEM or PFX or the
- installed application certificate
- --tb, --addtrustedcertbase64=VALUE
- adds the certificate to the applications trusted
- cert store passed in as base64 string (multiple
- comma-separated strings supported)
- --tf, --addtrustedcertfile=VALUE
- adds the certificate file(s) to the applications
- trusted cert store passed in as base64 string (
- multiple comma-separated filenames supported)
- --ib, --addissuercertbase64=VALUE
- adds the specified issuer certificate to the
- applications trusted issuer cert store passed in
- as base64 string (multiple comma-separated strings supported)
- --if, --addissuercertfile=VALUE
- adds the specified issuer certificate file(s) to
- the applications trusted issuer cert store (
- multiple comma-separated filenames supported)
- --rb, --updatecrlbase64=VALUE
- update the CRL passed in as base64 string to the
- corresponding cert store (trusted or trusted
- issuer)
- --uc, --updatecrlfile=VALUE
- update the CRL passed in as file to the
- corresponding cert store (trusted or trusted
- issuer)
- --rc, --removecert=VALUE
- remove cert(s) with the given thumbprint(s) (
- multiple comma-separated thumbprints supported)
- --dt, --devicecertstoretype=VALUE
- the iothub device cert store type.
- (allowed values: Directory, X509Store)
- Default: X509Store
- --dp, --devicecertstorepath=VALUE
- the path of the iot device cert store
- Default Default (depends on store type):
- X509Store: 'My'
- Directory: 'CertificateStores/IoTHub'
- -i, --install
- register OPC Publisher with IoTHub and then exits.
- Default: False
- -h, --help
- show this message and exit
- --st, --opcstacktracemask=VALUE
- ignored.
- --sd, --shopfloordomain=VALUE
- same as site option
- The value must follow the syntactical rules of a
- DNS hostname.
- Default: not set
- --vc, --verboseconsole=VALUE
- ignored.
- --as, --autotrustservercerts=VALUE
- same as autoaccept
- Default: False
- --tt, --trustedcertstoretype=VALUE
- ignored.
- the trusted cert store always resides in a
- directory.
- --rt, --rejectedcertstoretype=VALUE
- ignored.
- the rejected cert store always resides in a
- directory.
- --it, --issuercertstoretype=VALUE
- ignored.
- the trusted issuer cert store always
- resides in a directory.
-```
--
-## OPC Publisher Command-line Arguments for Version 2.6 and above
-```
- --pf, --publishfile=VALUE
- the filename to configure the nodes to publish.
- If this Option is specified it puts OPC Publisher into stadalone mode.
- --lf, --logfile=VALUE
- the filename of the logfile to use.
- --ll. --loglevel=VALUE
- the log level to use (allowed: fatal, error,
- warn, info, debug, verbose).
- --me, --messageencoding=VALUE
- the messaging encoding for outgoing messages
- allowed values: Json, Uadp
- --mm, --messagingmode=VALUE
- the messaging mode for outgoing messages
- allowed values: PubSub, Samples
- --fm, --fullfeaturedmessage=VALUE
- the full featured mode for messages (all fields filled in).
- Default is 'true', for legacy compatibility use 'false'
- --aa, --autoaccept
- the publisher trusted all servers it is establishing a connection to
- --bs, --batchsize=VALUE
- the number of OPC UA data-change messages to be cached for batching.
- --si, --iothubsendinterval=VALUE
- the trigger batching interval in seconds.
- --ms, --iothubmessagesize=VALUE
- the maximum size of the (IoT D2C) message.
- --om, --maxoutgressmessages=VALUE
- the maximum size of the (IoT D2C) message egress buffer.
- --di, --diagnosticsinterval=VALUE
- shows publisher diagnostic info at the specified interval in seconds
- (need log level info). -1 disables remote diagnostic log and diagnostic output
- --lt, --logflugtimespan=VALUE
- the timespan in seconds when the logfile should be flushed.
- --ih, --iothubprotocol=VALUE
- protocol to use for communication with the hub.
- allowed values: AmqpOverTcp, AmqpOverWebsocket, MqttOverTcp,
- MqttOverWebsocket, Amqp, Mqtt, Tcp, Websocket, Any
- --hb, --heartbeatinterval=VALUE
- the publisher is using this as default value in seconds for the
- heartbeat interval setting of nodes without a heartbeat interval setting.
- --ot, --operationtimeout=VALUE
- the operation timeout of the publisher OPC UA client in ms.
- --ol, --opcmaxstringlen=VALUE
- the max length of a string opc can transmit/receive.
- --oi, --opcsamplinginterval=VALUE
- default value in milliseconds to request the servers to sample values
- --op, --opcpublishinginterval=VALUE
- default value in milliseconds for the publishing interval setting
- of the subscriptions against the OPC UA server.
- --ct, --createsessiontimeout=VALUE
- the interval in seconds the publisher is sending keep alive
- messages to the OPC servers on the endpoints it is connected to.
- --kt, --keepalivethresholt=VALUE
- specify the number of keep alive packets a server can miss,
- before the session is disconnected.
- --tm, --trustmyself
- the publisher certificate is put into the trusted store automatically.
- --at, --appcertstoretype=VALUE
- the own application cert store type (allowed: Directory, X509Store).
-```
-
-## OPC Publisher Command-line Arguments for Version 2.8.2 and above
-
-The following OPC Publisher configuration can be applied by Command Line Interface (CLI) options or as environment variable settings.
-The `Alternative` field, where present, refers to the CLI argument applicable in **standalone mode only**. When both environment variable and CLI argument are provided, the latest will overrule the env variable.
-```
- PublishedNodesFile=VALUE
- The file used to store the configuration of the nodes to be published
- along with the information to connect to the OPC UA server sources
- When this file is specified, or the default file is accessible by
- the module, OPC Publisher will start in standalone mode
- Alternative: --pf, --publishfile
- Mode: Standalone only
- Type: string - file name, optionally prefixed with the path
- Default: publishednodes.json
-
- site=VALUE
- The site OPC Publisher is assigned to
- Alternative: --s, --site
- Mode: Standalone and Orchestrated
- Type: string
- Default: <not set>
-
- LogFileName==VALUE
- The filename of the logfile to use
- Alternative: --lf, --logfile
- Mode: Standalone only
- Type: string - file name, optionally prefixed with the path
- Default: <not set>
-
- LogFileFlushTimeSpan=VALUE
- The time span in seconds when the logfile should be flushed in the storage
- Alternative: --lt, --logflushtimespan
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:30}
-
- loglevel=Value
- The level for logs to pe persisted in the logfile
- Alternative: --ll --loglevel
- Mode: Standalone only
- Type: string enum - Fatal, Error, Warning, Information, Debug, Verbose
- Default: info
-
- EdgeHubConnectionString=VALUE
- An IoT Edge Device or IoT Edge module connection string to use,
- when deployed as module in IoT Edge, the environment variable
- is already set as part of the container deployment
- Alternative: --dc, --deviceconnectionstring
- --ec, --edgehubconnectionstring
- Mode: Standalone and Orchestrated
- Type: connection string
- Default: <not set> <set by iotedge runtime>
-
- Transport=VALUE
- Protocol to use for upstream communication to edgeHub or IoTHub
- Alternative: --ih, --iothubprotocol
- Mode: Standalone and Orchestrated
- Type: string enum: Any, Amqp, Mqtt, AmqpOverTcp, AmqpOverWebsocket,
- MqttOverTcp, MqttOverWebsocket, Tcp, Websocket.
- Default: MqttOverTcp
-
- BypassCertVerification=VALUE
- Enables/disables bypass of certificate verification for upstream communication to edgeHub
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- EnableMetrics=VALUE
- Enables/disables upstream metrics propagation
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: true
-
- DefaultPublishingInterval=VALUE
- Default value for the OPC UA publishing interval of OPC UA subscriptions
- created to an OPC UA server. This value is used when no explicit setting
- is configured.
- Alternative: --op, --opcpublishinginterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in milliseconds
- Default: {00:00:01} (1000)
-
- DefaultSamplingInterval=VALUE
- Default value for the OPC UA sampling interval of nodes to publish.
- This value is used when no explicit setting is configured.
- Alternative: --oi, --opcsamplinginterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in milliseconds
- Default: {00:00:01} (1000)
-
- DefaultQueueSize=VALUE
- Default setting value for the monitored item's queue size to be used when
- not explicitly specified in pn.json file
- Alternative: --mq, --monitoreditemqueuecapacity
- Mode: Standalone only
- Type: integer
- Default: 1
-
- DefaultHeartbeatInterval=VALUE
- Default value for the heartbeat interval setting of published nodes
- having no explicit setting for heartbeat interval.
- Alternative: --hb, --heartbeatinterval
- Mode: Standalone
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:00} meaning heartbeat is disabled
-
- MessageEncoding=VALUE
- The messaging encoding for outgoing telemetry.
- Alternative: --me, --messageencoding
- Mode: Standalone only
- Type: string enum - Json, Uadp
- Default: Json
-
- MessagingMode=VALUE
- The messaging mode for outgoing telemetry.
- Alternative: --mm, --messagingmode
- Mode: Standalone only
- Type: string enum - PubSub, Samples
- Default: Samples
-
- FetchOpcNodeDisplayName=VALUE
- Fetches the DisplayName for the nodes to be published from
- the OPC UA Server when not explicitly set in the configuration.
- Note: This has high impact on OPC Publisher startup performance.
- Alternative: --fd, --fetchdisplayname
- Mode: Standalone only
- Type: boolean
- Default: false
-
- FullFeaturedMessage=VALUE
- The full featured mode for messages (all fields filled in the telemetry).
- Default is 'false' for legacy compatibility.
- Alternative: --fm, --fullfeaturedmessage
- Mode: Standalone only
- Type:boolean
- Default: false
-
- BatchSize=VALUE
- The number of incoming OPC UA data change messages to be cached for batching.
- When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
- Alternative: --bs, --batchsize
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 50
-
- BatchTriggerInterval=VALUE
- The batching trigger interval.
- When BatchSize is 1 or TriggerInterval is set to 0 batching is disabled.
- Alternative: --si, --iothubsendinterval
- Mode: Standalone and Orchestrated
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:10}
-
- IoTHubMaxMessageSize=VALUE
- The maximum size of the (IoT D2C) telemetry message.
- Alternative: --ms, --iothubmessagesize
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0
-
- DiagnosticsInterval=VALUE
- Shows publisher diagnostic info at the specified interval in seconds
- (need log level info). -1 disables remote diagnostic log and
- diagnostic output
- Alternative: --di, --diagnosticsinterval
- Mode: Standalone only
- Environment variable type: time span string {[d.]hh:mm:ss[.fffffff]}
- Alternative argument type: integer in seconds
- Default: {00:00:60}
-
- LegacyCompatibility=VALUE
- Forces the Publisher to operate in 2.5 legacy mode, using
- `"application/opcua+uajson"` for `ContentType` on the IoT Hub
- Telemetry message.
- Alternative: --lc, --legacycompatibility
- Mode: Standalone only
- Type: boolean
- Default: false
-
- PublishedNodesSchemaFile=VALUE
- The validation schema filename for published nodes file.
- Alternative: --pfs, --publishfileschema
- Mode: Standalone only
- Type: string
- Default: <not set>
-
- MaxNodesPerDataSet=VALUE
- Maximum number of nodes within a DataSet/Subscription.
- When more nodes than this value are configured for a
- DataSetWriter, they will be added in a separate DataSet/Subscription.
- Alternative: N/A
- Mode: Standalone only
- Type: integer
- Default: 1000
-
- ApplicationName=VALUE
- OPC UA Client Application Config - Application name as per
- OPC UA definition. This is used for authentication during communication
- init handshake and as part of own certificate validation.
- Alternative: --an, --appname
- Mode: Standalone and Orchestrated
- Type: string
- Default: "Microsoft.Azure.IIoT"
-
- ApplicationUri=VALUE
- OPC UA Client Application Config - Application URI as per
- OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"urn:localhost:{ApplicationName}:microsoft:"
-
- ProductUri=VALUE
- OPC UA Client Application Config - Product URI as per
- OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: "https://www.github.com/Azure/Industrial-IoT"
-
- DefaultSessionTimeout=VALUE
- OPC UA Client Application Config - Session timeout in seconds
- as per OPC UA definition.
- Alternative: --ct --createsessiontimeout
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0, meaning <not set>
-
- MinSubscriptionLifetime=VALUE
- OPC UA Client Application Config - Minimum subscription lifetime in seconds
- as per OPC UA definition.
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 0, <not set>
-
- KeepAliveInterval=VALUE
- OPC UA Client Application Config - Keep alive interval in seconds
- as per OPC UA definition.
- Alternative: --ki, --keepaliveinterval
- Mode: Standalone and Orchestrated
- Type: integer milliseconds
- Default: 10,000 (10s)
-
- MaxKeepAliveCount=VALUE
- OPC UA Client Application Config - Maximum count of keep alive events
- as per OPC UA definition.
- Alternative: --kt, --keepalivethreshold
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 50
-
- PkiRootPath=VALUE
- OPC UA Client Security Config - PKI certificate store root path
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: string
- Default: "pki"
-
- ApplicationCertificateStorePath=VALUE
- OPC UA Client Security Config - application's
- own certificate store path
- Alternative: --ap, --appcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/own"
-
- ApplicationCertificateStoreType=VALUE
- OPC UA Client Security Config - application's
- own certificate store type
- Alternative: --at, --appcertstoretype
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- ApplicationCertificateSubjectName=VALUE
- OPC UA Client Security Config - the subject name
- in the application's own certificate
- Alternative: --sn, --appcertsubjectname
- Mode: Standalone and Orchestrated
- Type: string
- Default: "CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"
-
- TrustedIssuerCertificatesPath=VALUE
- OPC UA Client Security Config - trusted certificate issuer
- store path
- Alternative: --ip, --issuercertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/issuers"
-
- TrustedIssuerCertificatesType=VALUE
- OPC UA Client Security Config - trusted issuer certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- TrustedPeerCertificatesPath=VALUE
- OPC UA Client Security Config - trusted peer certificates
- store path
- Alternative: --tp, --trustedcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/trusted"
-
- TrustedPeerCertificatesType=VALUE
- OPC UA Client Security Config - trusted peer certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- RejectedCertificateStorePath=VALUE
- OPC UA Client Security Config - rejected certificates
- store path
- Alternative: --rp, --rejectedcertstorepath
- Mode: Standalone and Orchestrated
- Type: string
- Default: $"{PkiRootPath}/rejected"
-
- RejectedCertificateStoreType=VALUE
- OPC UA Client Security Config - rejected certificates
- store type
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: enum string : Directory, X509Store
- Default: Directory
-
- AutoAcceptUntrustedCertificates=VALUE
- OPC UA Client Security Config - auto accept untrusted
- peer certificates
- Alternative: --aa, --autoaccept
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- RejectSha1SignedCertificates=VALUE
- OPC UA Client Security Config - reject deprecated Sha1
- signed certificates
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: false
-
- MinimumCertificateKeySize=VALUE
- OPC UA Client Security Config - minimum accepted
- certificates key size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 1024
-
- AddAppCertToTrustedStore=VALUE
- OPC UA Client Security Config - automatically copy own
- certificate's public key to the trusted certificate store
- Alternative: --tm, --trustmyself
- Mode: Standalone and Orchestrated
- Type: boolean
- Default: true
-
- SecurityTokenLifetime=VALUE
- OPC UA Stack Transport Secure Channel - Security token lifetime in milliseconds
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 3,600,000 (1h)
-
- ChannelLifetime=VALUE
- OPC UA Stack Transport Secure Channel - Channel lifetime in milliseconds
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 300,000 (5 min)
-
- MaxBufferSize=VALUE
- OPC UA Stack Transport Secure Channel - Max buffer size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 65,535 (64KB -1)
-
- MaxMessageSize=VALUE
- OPC UA Stack Transport Secure Channel - Max message size
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 4,194,304 (4 MB)
-
- MaxArrayLength=VALUE
- OPC UA Stack Transport Secure Channel - Max array length
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 65,535 (64KB - 1)
-
- MaxByteStringLength=VALUE
- OPC UA Stack Transport Secure Channel - Max byte string length
- Alternative: N/A
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 1,048,576 (1MB);
-
- OperationTimeout=VALUE
- OPC UA Stack Transport Secure Channel - OPC UA Service call
- operation timeout
- Alternative: --ot, --operationtimeout
- Mode: Standalone and Orchestrated
- Type: integer (milliseconds)
- Default: 120,000 (2 min)
-
- MaxStringLength=VALUE
- OPC UA Stack Transport Secure Channel - Maximum length of a string
- that can be send/received over the OPC UA Secure channel
- Alternative: --ol, --opcmaxstringlen
- Mode: Standalone and Orchestrated
- Type: integer
- Default: 130,816 (128KB - 256)
-
- RuntimeStateReporting=VALUE
- Enables reporting of OPC Publisher restarts.
- Alternative: --rs, --runtimestatereporting
- Mode: Standalone
- Type: boolean
- Default: false
-
- EnableRoutingInfo=VALUE
- Adds the routing info to telemetry messages. The name of the property is
- `$$RoutingInfo` and the value is the `DataSetWriterGroup` for that particular message.
- When the `DataSetWriterGroup` is not configured, the `$$RoutingInfo` property will
- not be added to the message even if this argument is set.
- Alternative: --ri, --enableroutinginfo
- Mode: Standalone
- Type: boolean
- Default: false
-```
+# OPC Publisher command-line arguments
+
+This article describes the command-line arguments that you can use to set global settings for Open Platform Communications (OPC) Publisher.
+
+## Command-line arguments for version 2.5 and earlier
+
+* **Usage**: opcpublisher.exe \<applicationname> [\<iothubconnectionstring>] [\<options>]
+
+* **applicationname**: (Required) The OPC Unified Architecture (OPC UA) application name to use.
+
+ You also use the application name to register the publisher in the IoT hub device registry.
+
+* **iothubconnectionstring**: (Optional) The IoT hub owner connection string.
+
+ You ordinarily specify the connection string only when you start the application for the first time. The connection string is encrypted and stored in the platform's certificate store.
+
+ On subsequent calls, the connection string is read from the platform's certificate store and reused. If you specify the connection string on each start, the device that's created for the application in the IoT hub device registry is removed and re-created each time.
+
+To control the application, you can use any of several environment variables:
+
+* `_HUB_CS`: Sets the IoT hub owner connection string
+* `_GW_LOGP`: Sets the file name of the log file to use
+* `_TPC_SP`: Sets the path to store certificates of trusted stations
+* `_GW_PNFP`: Sets the file name of the publishing configuration file
+
+> [!NOTE]
+> Command-line arguments overrule environment variable settings.
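+
+As a sketch of that precedence rule (the connection string here is a placeholder, not a real value), you can set `_HUB_CS` once and override it for a single run on the command line:
+
+```shell
+# Hypothetical example: the environment variable supplies the IoT hub owner connection string.
+export _HUB_CS="<iothub-owner-connectionstring>"
+opcpublisher.exe myapplication
+
+# Passing -c on the command line overrules the _HUB_CS environment variable for this run.
+opcpublisher.exe myapplication -c "<iothub-owner-connectionstring>"
+```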
+
+| Argument | Description |
+| | |
+| `--pf, --publishfile=VALUE` | The file name to use to configure the nodes to publish.<br>Default: `/appdata/publishednodes.json` |
+| `--tc, --telemetryconfigfile=VALUE` | The file name to use to configure the ingested telemetry.<br>Default: '' |
+| `-s, --site=VALUE` | The site that OPC Publisher is working in. If it's specified, this domain is appended (delimited by a `:`) to the `ApplicationURI` property when telemetry is sent to the IoT hub. The value must follow the syntactical rules of a DNS hostname.<br>Default: \<not set> |
+| `--ic, --iotcentral` | OPC Publisher sends OPC UA data in an Azure IoT Central-compatible format (the `DisplayName` of a node is used as the key, which is the field name in Azure IoT Central). Ensure that all `DisplayName` values are unique. This option automatically enables fetching of display names.<br>Default: `false` |
+| `--sw, --sessionconnectwait=VALUE` | Specifies the wait time, in seconds, after which the publisher tries to connect to disconnected endpoints and starts monitoring unmonitored items.<br>Minimum: `10`<br>Default: `10` |
+| `--mq, --monitoreditemqueuecapacity=VALUE` | Specifies how many notifications of monitored items can be stored in the internal queue if the data can't be sent quickly enough to the IoT hub.<br>Minimum: `1024`<br>Default: `8192` |
+| `--di, --diagnosticsinterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (requires log level `info`). `-1` disables the remote diagnostics log and diagnostics output. `0` disables the diagnostics output.<br>Default: `0` |
+| `--ns, --noshutdown=VALUE` | Same as `runforever`.<br>Default: `false` |
+| `--rf, --runforever` | You can't stop OPC Publisher by pressing a key on the console. It runs forever.<br>Default: `false` |
+| `--lf, --logfile=VALUE` | The file name of the log file to use.<br>Default: `<hostname>-publisher.log` |
+| `--lt, --logflushtimespan=VALUE` | The time span, in seconds, after which the log file should be flushed.<br>Default: `00:00:30` |
+| `--ll, --loglevel=VALUE` | The log level to use. Allowed: `fatal`, `error`, `warn`, `info`, `debug`, `verbose`.<br>Default: `info` |
+| `--ih, --iothubprotocol=VALUE` | The protocol to use for communication with the IoT hub (allowed values: `Amqp`, `Http1`, `Amqp_WebSocket_Only`, `Amqp_Tcp_Only`, `Mqtt`, `Mqtt_WebSocket_Only`, `Mqtt_Tcp_Only`) or the Azure IoT Edge hub (allowed values: `Mqtt_Tcp_Only`, `Amqp_Tcp_Only`).<br>Default for the IoT hub: `Mqtt_WebSocket_Only`<br>Default for the IoT Edge hub: `Amqp_Tcp_Only` |
+| `--ms, --iothubmessagesize=VALUE` | The maximum size of a message that can be sent to the IoT hub. When telemetry of this size is available, it is sent. `0` enforces immediate send when telemetry is available.<br>Minimum: `0`<br>Maximum: `262144`<br>Default: `262144` |
+| `--si, --iothubsendinterval=VALUE` | The interval, in seconds, when telemetry should be sent to the IoT hub. If the interval is `0`, only the `iothubmessagesize` parameter controls when telemetry is sent.<br>Default: `10` |
+| `--dc, --deviceconnectionstring=VALUE` | If OPC Publisher can't register itself with the IoT hub, you can create a device with the name `<applicationname>` manually and pass in the connection string of this device.<br>Default: none |
+| `-c, --connectionstring=VALUE` | The IoT hub owner connection string.<br>Default: none |
+| `--hb, --heartbeatinterval=VALUE` | OPC Publisher uses this as a default value, in seconds, for the heartbeat interval setting of nodes without a heartbeat interval setting.<br>Default: `0` |
+| `--sf, --skipfirstevent=VALUE` | OPC Publisher uses this as the default value for the `skipfirstevent` setting of nodes without a `skipfirstevent` setting.<br>Default: `false` |
+| `--pn, --portnum=VALUE` | The server port of the publisher OPC server endpoint.<br>Default: `62222` |
+| `--pa, --path=VALUE` | The endpoint URL path part of the publisher OPC server endpoint.<br>Default: `/UA/Publisher` |
+| `--lr, --ldsreginterval=VALUE` | The LDS(-ME) registration interval, in milliseconds (ms). If `0`, the registration is disabled.<br>Default: `0` |
+| `--ol, --opcmaxstringlen=VALUE` | The maximum string length that OPC can transmit or receive.<br>Default: `131072` |
+| `--ot, --operationtimeout=VALUE` | The operation time-out of the publisher OPC UA client, in milliseconds.<br>Default: `120000` |
+| `--oi, --opcsamplinginterval=VALUE` | OPC Publisher uses this as the default value, in milliseconds, to request that the servers sample the nodes at this interval. This value might be revised by the OPC UA servers to a supported sampling interval. Check the OPC UA specification for details about how this is handled by the OPC UA stack.<br>A negative value sets the sampling interval to the publishing interval of the subscription this node is on.<br>`0` configures the OPC UA server to sample in the highest possible resolution and should be used with care.<br>Default: `1000` |
+| `--op, --opcpublishinginterval=VALUE` | OPC Publisher uses this as the default value, in milliseconds, for the publishing interval setting of the subscriptions established to the OPC UA servers. Check the OPC UA specification for details about how this is handled by the OPC UA stack.<br>A value less than or equal to `0` lets the server revise the publishing interval.<br>Default: `0` |
+| `--ct, --createsessiontimeout=VALUE` | Specifies the time-out, in seconds, that's used when you create a session to an endpoint. On unsuccessful connection attempts, a backoff of up to five times the specified time-out value is used.<br>Minimum: `1`<br>Default: `10` |
+| `--ki, --keepaliveinterval=VALUE` | Specifies the interval, in seconds, that the publisher sends keep-alive messages to the OPC servers on the endpoints that it's connected to.<br>Minimum: `2`<br>Default: `2` |
+| `--kt, --keepalivethreshold=VALUE` | Specifies the number of keep-alive packets that a server can miss before the session is disconnected.<br>Minimum: `1`<br>Default: `5` |
+| `--aa, --autoaccept` | OPC Publisher trusts all servers that it establishes a connection to.<br>Default: `false` |
+| `--tm, --trustmyself=VALUE` | Same as `trustowncert`.<br>Default: `false` |
+| `--to, --trustowncert` | The OPC Publisher certificate is put into the trusted certificate store automatically.<br>Default: `false` |
+| `--fd, --fetchdisplayname=VALUE` | Same as `fetchname`.<br>Default: `false` |
+| `--fn, --fetchname` | Enable reading the display name of a published node from the server. This setting increases the run time.<br>Default: `false` |
+| `--ss, --suppressedopcstatuscodes=VALUE` | Specifies the OPC UA status codes for which no events should be generated.<br>Default: `BadNoCommunication`, `BadWaitingForInitialData` |
+| `--at, --appcertstoretype=VALUE` | The application's own certificate store type.<br>Allowed values: `Directory`, `X509Store`<br>Default: `Directory` |
+| `--ap, --appcertstorepath=VALUE` | The path where the application's own certificate should be stored.<br>Default (depends on store type):<br>X509Store: `CurrentUser\UA_MachineDefault`<br>Directory: `pki/own` |
+| `--tp, --trustedcertstorepath=VALUE` | The path of the trusted certificate store.<br>Default: `pki/trusted` |
+| `--rp, --rejectedcertstorepath=VALUE` | The path of the rejected certificate store.<br>Default: `pki/rejected` |
+| `--ip, --issuercertstorepath=VALUE` | The path of the trusted issuer certificate store.<br>Default: `pki/issuer` |
+| `--csr` | Shows data to create a certificate signing request.<br>Default: `false` |
+| `--ab, --applicationcertbase64=VALUE` | Updates or sets this application's certificate with the certificate that's passed in as a Base64 string. |
+| `--af, --applicationcertfile=VALUE` | Updates or sets this application's certificate with the specified certificate file. |
+| `--pb, --privatekeybase64=VALUE` | Initially provisions the application certificate (in PEM or PFX format). Requires a private key, which is passed in as a Base64 string. |
+| `--pk, --privatekeyfile=VALUE` | Initially provisions the application certificate (in PEM or PFX format). Requires a private key, which is passed in as a file. |
+| `--cp, --certpassword=VALUE` | The optional password for the PEM or PFX of the installed application certificate. |
+| `--tb, --addtrustedcertbase64=VALUE` | Adds the certificate to the application's trusted certificate store, passed in as a Base64 string (multiple comma-separated strings supported). |
+| `--tf, --addtrustedcertfile=VALUE` | Adds the specified certificate file to the application's trusted certificate store (multiple comma-separated file names supported). |
+| `--ib, --addissuercertbase64=VALUE` | Adds the specified issuer certificate to the application's trusted issuer certificate store, passed in as a Base64 string (multiple comma-separated strings supported). |
+| `--if, --addissuercertfile=VALUE` | Adds the specified issuer certificate file to the application's trusted issuer certificate store (multiple comma-separated file names supported). |
+| `--rb, --updatecrlbase64=VALUE` | Updates the certificate revocation list (CRL), passed in as a Base64 string to the corresponding certificate store (trusted or trusted issuer). |
+| `--uc, --updatecrlfile=VALUE` | Updates the CRL, passed in as a file to the corresponding certificate store (trusted or trusted issuer). |
+| `--rc, --removecert=VALUE` | Removes certificates with the specified thumbprints (multiple comma-separated thumbprints supported). |
+| `--dt, --devicecertstoretype=VALUE` | The IoT hub device certificate store type.<br>Allowed values: `Directory`, `X509Store`<br>Default: `X509Store` |
+| `--dp, --devicecertstorepath=VALUE` | The path of the IoT device certificate store.<br>Default (depends on store type):<br>X509Store: `My`<br>Directory: `CertificateStores/IoTHub` |
+| `-i, --install` | Registers OPC Publisher with the IoT hub and then exits.<br>Default: `false` |
+| `-h, --help` | Shows this message and exits. |
+| `--st, --opcstacktracemask=VALUE` | Ignored. |
+| `--sd, --shopfloordomain=VALUE` | Same as the site option. The value must follow the syntactical rules of a DNS hostname.<br>Default: \<not set> |
+| `--vc, --verboseconsole=VALUE` | Ignored. |
+| `--as, --autotrustservercerts=VALUE` | Same as `--aa, --autoaccept`.<br>Default: `false` |
+| `--tt, --trustedcertstoretype=VALUE` | Ignored. The trusted certificate store always resides in a directory. |
+| `--rt, --rejectedcertstoretype=VALUE` | Ignored. The rejected certificate store always resides in a directory. |
+| `--it, --issuercertstoretype=VALUE` | Ignored. The trusted issuer certificate store always resides in a directory. |
+
+## Command-line arguments for version 2.6 and later
+
+| Argument | Description |
+| | |
+| `--pf, --publishfile=VALUE` | The file name to configure the nodes to publish. If this option is specified, it puts OPC Publisher into *standalone* mode. |
+| `--lf, --logfile=VALUE` | The file name of the log file to use. |
+| `--ll, --loglevel=VALUE` | The log level to use. Allowed values: `fatal`, `error`, `warn`, `info`, `debug`, `verbose`. |
+| `--me, --messageencoding=VALUE` | The messaging encoding for outgoing messages. Allowed values: `Json`, `Uadp`. |
+| `--mm, --messagingmode=VALUE` | The messaging mode for outgoing messages. Allowed values: `PubSub`, `Samples`. |
+| `--fm, --fullfeaturedmessage=VALUE` | The full-featured mode for messages (all fields filled in).<br>Default is `true`. For legacy compatibility, use `false`. |
+| `--aa, --autoaccept` | OPC Publisher trusts all servers that it establishes a connection to. |
+| `--bs, --batchsize=VALUE` | The number of OPC UA data-change messages to be cached for batching. |
+| `--si, --iothubsendinterval=VALUE` | The trigger batching interval, in seconds. |
+| `--ms, --iothubmessagesize=VALUE` | The maximum size of the IoT D2C message. |
+| `--om, --maxoutgressmessages=VALUE` | The maximum size of the IoT D2C message egress buffer. |
+| `--di, --diagnosticsinterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (requires log level `info`). `-1` disables the remote diagnostics log and diagnostics output. |
+| `--lt, --logflushtimespan=VALUE` | The timespan, in seconds, when the log file should be flushed. |
+| `--ih, --iothubprotocol=VALUE` | The protocol to use for communication with the hub. Allowed values: `AmqpOverTcp`, `AmqpOverWebsocket`, `MqttOverTcp`, `MqttOverWebsocket`, `Amqp`, `Mqtt`, `Tcp`, `Websocket`, `Any`. |
+| `--hb, --heartbeatinterval=VALUE` | OPC Publisher uses this value, in seconds, as the default heartbeat interval for nodes that have no heartbeat interval setting. |
+| `--ot, --operationtimeout=VALUE` | The operation time-out of the publisher OPC UA client, in milliseconds (ms). |
+| `--ol, --opcmaxstringlen=VALUE` | The maximum length of a string that OPC Publisher can transmit or receive. |
+| `--oi, --opcsamplinginterval=VALUE` | The default value, in milliseconds, to request the servers to sample values. |
+| `--op, --opcpublishinginterval=VALUE` | The default value, in milliseconds, for the publishing interval setting of the subscriptions against the OPC UA server. |
+| `--ct, --createsessiontimeout=VALUE` | The timeout, in seconds, that OPC Publisher uses when it creates a session to an OPC UA server endpoint. |
+| `--kt, --keepalivethreshold=VALUE` | Specifies the number of keep-alive packets that a server can miss before a session is disconnected. |
+| `--tm, --trustmyself` | Automatically puts the OPC Publisher certificate into the trusted store. |
+| `--at, --appcertstoretype=VALUE` | The owned application certificate store type. Allowed values: `Directory`, `X509Store`. |
+
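As an illustration of how these arguments combine, the following sketch assembles a typical standalone invocation. The binary name `opcpublisher` and the publish-file path are assumptions for illustration only, not values defined in this article:

```shell
# Hypothetical OPC Publisher 2.6 standalone invocation (binary name and
# file path are placeholders -- adjust to your container or deployment).
#   --pf         standalone mode: file with the nodes to publish
#   --ll         log level
#   --me / --mm  message encoding and messaging mode
#   --bs / --si  batch up to 50 messages or send every 10 seconds
PUBLISHER_ARGS="--pf /appdata/publishednodes.json --ll info --me Json --mm PubSub --bs 50 --si 10"
echo "opcpublisher $PUBLISHER_ARGS"
```

Batching (`--bs`/`--si`) trades message latency for fewer, larger device-to-cloud messages, which can reduce IoT Hub message-quota consumption.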
+## Command-line arguments for version 2.8.2 and later
+
+The following OPC Publisher configuration settings can be applied through command-line interface (CLI) options or as environment variables.
+
+The `Alternative` field, when it's present, refers to the applicable CLI argument in *standalone mode only*. When both an environment variable and its CLI argument are provided, the CLI argument overrules the environment variable.
+
+| Argument | Description |
+| | |
+| `PublishedNodesFile=VALUE` | The file that's used to store the configuration of the nodes to be published along with the information to connect to the OPC UA server sources. When this file is specified, or the default file is accessible by the module, OPC Publisher starts in *standalone* mode.<br>Alternative: `--pf, --publishfile`<br>Mode: Standalone only<br>Type: `string` - file name, optionally prefixed with the path<br>Default: `publishednodes.json` |
+| `site=VALUE` | The site that OPC Publisher is assigned to.<br>Alternative: `--s, --site`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: \<not set> |
+| `LogFileName=VALUE` | The file name of the log file to use.<br>Alternative: `--lf, --logfile`<br>Mode: Standalone only<br>Type: `string` - file name, optionally prefixed with the path<br>Default: \<not set> |
+| `LogFileFlushTimeSpan=VALUE` | The timespan, in seconds, when the log file should be flushed in the storage account.<br>Alternative: `--lt, --logflushtimespan`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:30}` |
+| `LogLevel=VALUE` | The level for logs to be persisted in the log file.<br>Alternative: `--ll, --loglevel`<br>Mode: Standalone only<br>Type: `string enum` - `fatal`, `error`, `warning`, `information`, `debug`, `verbose`<br>Default: `info` |
+| `EdgeHubConnectionString=VALUE` | An IoT Edge Device or IoT Edge module connection string to use. When it's deployed as a module in IoT Edge, the environment variable is already set as part of the container deployment.<br>Alternative: `--dc, --deviceconnectionstring` \| `--ec, --edgehubconnectionstring`<br>Mode: Standalone, orchestrated<br>Type: connection string<br>Default: \<not set> \<set by iotedge run time> |
+| `Transport=VALUE` | The protocol to use for upstream communication to the IoT Edge hub or the IoT hub.<br>Alternative: `--ih, --iothubprotocol`<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Any`, `Amqp`, `Mqtt`, `AmqpOverTcp`, `AmqpOverWebsocket`, `MqttOverTcp`, `MqttOverWebsocket`, `Tcp`, `Websocket`<br>Default: `MqttOverTcp` |
+| `BypassCertVerification=VALUE` | Enables/disables the bypassing of certificate verification for upstream communication to EdgeHub.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `EnableMetrics=VALUE` | Enables/disables upstream metrics propagation.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `true` |
+| `DefaultPublishingInterval=VALUE` | The default value for the OPC UA publishing interval of OPC UA subscriptions created to an OPC UA server. This value is used when no explicit setting is configured.<br>Alternative: `--op, --opcpublishinginterval`<br>Mode: Standalone only<br> Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in milliseconds<br>Default: `{00:00:01}` (1000) |
+| `DefaultSamplingInterval=VALUE` | The default value for the OPC UA sampling interval of nodes to publish. This value is used when no explicit setting is configured.<br>Alternative: `--oi, --opcsamplinginterval`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in milliseconds<br>Default: `{00:00:01}` (1000) |
+| `DefaultQueueSize=VALUE` | The default value for the monitored item's queue size, to be used when it isn't explicitly specified in the *pn.json* file.<br>Alternative: `--mq, --monitoreditemqueuecapacity`<br>Mode: Standalone only<br>Type: `integer`<br>Default: `1` |
+| `DefaultHeartbeatInterval=VALUE` | The default value for the heartbeat interval setting of published nodes that have no explicit setting for heartbeat interval.<br>Alternative: `--hb, --heartbeatinterval`<br>Mode: Standalone<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:00}`, which means that heartbeat is disabled |
+| `MessageEncoding=VALUE` | The messaging encoding for outgoing telemetry.<br>Alternative: `--me, --messageencoding`<br>Mode: Standalone only<br>Type: `string enum` - `Json`, `Uadp`<br>Default: `Json` |
+| `MessagingMode=VALUE` | The messaging mode for outgoing telemetry.<br>Alternative: `--mm, --messagingmode`<br>Mode: Standalone only<br>Type: `string enum` - `PubSub`, `Samples`<br>Default: `Samples` |
+| `FetchOpcNodeDisplayName=VALUE` | Fetches the display name for the nodes to be published from the OPC UA server when it isn't explicitly set in the configuration.<br>**Note**: This argument has a high impact on OPC Publisher startup performance.<br>Alternative: `--fd, --fetchdisplayname`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `FullFeaturedMessage=VALUE` | The full-featured mode for messages (all fields filled in the telemetry).<br>Default is `false` for legacy compatibility.<br>Alternative: `--fm, --fullfeaturedmessage`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `BatchSize=VALUE` | The number of incoming OPC UA data change messages to be cached for batching. When `BatchSize` is `1` or `TriggerInterval` is set to `0`, batching is disabled.<br>Alternative: `--bs, --batchsize`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `50` |
+| `BatchTriggerInterval=VALUE` | The batching trigger interval. When `BatchSize` is `1` or `TriggerInterval` is set to `0`, batching is disabled.<br>Alternative: `--si, --iothubsendinterval`<br>Mode: Standalone, orchestrated<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br> Alternative argument type: `integer`, in seconds<br>Default: `{00:00:10}` |
+| `IoTHubMaxMessageSize=VALUE` | The maximum size of the IoT D2C telemetry message.<br>Alternative: `--ms, --iothubmessagesize`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0` |
+| `DiagnosticsInterval=VALUE` | Shows OPC Publisher diagnostics information at the specified interval, in seconds (requires log level `info`). `-1` disables the remote diagnostics log and diagnostics output.<br>Alternative: `--di, --diagnosticsinterval`<br>Mode: Standalone only<br>Environment variable<br>Type: `timespan string` {[d.]hh:mm:ss[.fffffff]}<br>Alternative argument type: `integer`, in seconds<br>Default: `{00:00:60}` |
+| `LegacyCompatibility=VALUE` | Forces OPC Publisher to operate in 2.5 legacy mode by using `application/opcua+uajson` for `ContentType` on the IoT hub telemetry message.<br>Alternative: `--lc, --legacycompatibility`<br>Mode: Standalone only<br>Type: Boolean<br>Default: `false` |
+| `PublishedNodesSchemaFile=VALUE` | The validation schema file name for the published nodes file.<br>Alternative: `--pfs, --publishfileschema`<br>Mode: Standalone only<br>Type: `string`<br>Default: \<not set> |
+| `MaxNodesPerDataSet=VALUE` | The maximum number of nodes within a dataset or subscription. When more nodes than this value are configured for `DataSetWriter`, they're added in a separate dataset or subscription.<br>Alternative: N/A<br>Mode: Standalone only<br>Type: `integer`<br>Default: `1000` |
+| `ApplicationName=VALUE` | The OPC UA Client Application Configuration application name, as per the OPC UA definition. It's used for authentication during the initial communication handshake and as part of owned certificate validation.<br>Alternative: `--an, --appname`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `Microsoft.Azure.IIoT` |
+| `ApplicationUri=VALUE` | The OPC UA Client Application Configuration application URI, as per the OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"urn:localhost:{ApplicationName}:microsoft:"` |
+| `ProductUri=VALUE` | The OPC UA Client Application Configuration product URI, as per OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `https://www.github.com/Azure/Industrial-IoT` |
+| `DefaultSessionTimeout=VALUE` | The OPC UA Client Application Configuration session time-out, in seconds, as per OPC UA definition.<br>Alternative: `--ct, --createsessiontimeout`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0`, which means \<not set> |
+| `MinSubscriptionLifetime=VALUE` | The OPC UA Client Application Configuration minimum subscription lifetime, in seconds, as per OPC UA definition.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `0`, \<not set> |
+| `KeepAliveInterval=VALUE` | The OPC UA Client Application Configuration keep-alive interval, as per OPC UA definition.<br>Alternative: `--ki, --keepaliveinterval`<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `10,000` (10 sec) |
+| `MaxKeepAliveCount=VALUE` | The OPC UA Client Application Configuration maximum number of keep-alive events, as per OPC UA definition.<br>Alternative: `--kt, --keepalivethreshold`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `50` |
+| `PkiRootPath=VALUE` | The OPC UA Client Security Configuration PKI (public key infrastructure) certificate store root path.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `pki` |
+| `ApplicationCertificateStorePath=VALUE` | The OPC UA Client Security Configuration application's owned certificate store path.<br>Alternative: `--ap, --appcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/own"` |
+| `ApplicationCertificateStoreType=VALUE` | The OPC UA Client Security Configuration application's owned certificate store type.<br>Alternative: `--at, --appcertstoretype`<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `ApplicationCertificateSubjectName=VALUE` | The OPC UA Client Security Configuration subject name in the application's owned certificate.<br>Alternative: `--sn, --appcertsubjectname`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `"CN=Microsoft.Azure.IIoT, C=DE, S=Bav, O=Microsoft, DC=localhost"` |
+| `TrustedIssuerCertificatesPath=VALUE` | The OPC UA Client Security Configuration trusted certificate issuer store path.<br>Alternative: `--ip, --issuercertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/issuers"` |
+| `TrustedIssuerCertificatesType=VALUE` | The OPC UA Client Security Configuration trusted issuer certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `TrustedPeerCertificatesPath=VALUE` | The OPC UA Client Security Configuration trusted peer certificates store path.<br>Alternative: `--tp, --trustedcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/trusted"` |
+| `TrustedPeerCertificatesType=VALUE` | The OPC UA Client Security Configuration trusted peer certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `RejectedCertificateStorePath=VALUE` | The OPC UA Client Security Configuration rejected certificates store path.<br>Alternative: `--rp, --rejectedcertstorepath`<br>Mode: Standalone, orchestrated<br>Type: `string`<br>Default: `$"{PkiRootPath}/rejected"` |
+| `RejectedCertificateStoreType=VALUE` | The OPC UA Client Security Configuration rejected certificates store type.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `string enum` - `Directory`, `X509Store`<br>Default: `Directory` |
+| `AutoAcceptUntrustedCertificates=VALUE` | The OPC UA Client Security Configuration setting to automatically accept untrusted peer certificates.<br>Alternative: `--aa, --autoaccept`<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `RejectSha1SignedCertificates=VALUE` | The OPC UA Client Security Configuration setting to reject deprecated SHA-1 signed certificates.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `false` |
+| `MinimumCertificateKeySize=VALUE` | The OPC UA Client Security Configuration minimum accepted certificate key size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `1024` |
+| `AddAppCertToTrustedStore=VALUE` | The OPC UA Client Security Configuration setting to automatically copy the owned certificate's public key to the trusted certificate store.<br>Alternative: `--tm, --trustmyself`<br>Mode: Standalone, orchestrated<br>Type: Boolean<br>Default: `true` |
+| `SecurityTokenLifetime=VALUE` | The OPC UA Stack Transport Secure Channel security token lifetime. <br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `3,600,000` (1 hour) |
+| `ChannelLifetime=VALUE` | The OPC UA Stack Transport Secure Channel channel lifetime, in milliseconds.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `300,000` (5 minutes) |
+| `MaxBufferSize=VALUE` | The OPC UA Stack Transport Secure Channel maximum buffer size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`, in kilobytes<br>Default: `65,535` (64 KB -1) |
+| `MaxMessageSize=VALUE` | The OPC UA Stack Transport Secure Channel maximum message size.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `4,194,304` (4 MB) |
+| `MaxArrayLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum array length. <br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `65,535` (64 KB - 1) |
+| `MaxByteStringLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum byte string length.<br>Alternative: N/A<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `1,048,576` (1 MB) |
+| `OperationTimeout=VALUE` | The OPC UA Stack Transport Secure Channel service call operation timeout.<br>Alternative: `--ot, --operationtimeout`<br>Mode: Standalone, orchestrated<br>Type: `integer`, in milliseconds<br>Default: `120,000` (2 min) |
+| `MaxStringLength=VALUE` | The OPC UA Stack Transport Secure Channel maximum length of a string that can be sent/received over the OPC UA secure channel.<br>Alternative: `--ol, --opcmaxstringlen`<br>Mode: Standalone, orchestrated<br>Type: `integer`<br>Default: `130,816` (128 KB - 256) |
+| `RuntimeStateReporting=VALUE` | Enables reporting of OPC Publisher restarts.<br>Alternative: `--rs, --runtimestatereporting`<br>Mode: Standalone<br>Type: Boolean<br>Default: `false` |
+| `EnableRoutingInfo=VALUE` | Adds the routing information to telemetry messages. The name of the property is `$$RoutingInfo`, and the value is `DataSetWriterGroup` for that particular message. When `DataSetWriterGroup` isn't configured, the `$$RoutingInfo` property isn't added to the message even if this argument is set.<br>Alternative: `--ri, --enableroutinginfo`<br>Mode: Standalone<br>Type: Boolean<br>Default: `false` |
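For reference, the published-nodes configuration that `PublishedNodesFile` (or `--pf`) points at in standalone mode might look like the following sketch. The endpoint URL and node identifier are illustrative placeholders, not values from this article:

```shell
# Write a minimal published-nodes file for standalone mode.
# The endpoint URL and the node Id below are illustrative placeholders.
cat > publishednodes.json <<'EOF'
[
  {
    "EndpointUrl": "opc.tcp://opcua-server:4840",
    "UseSecurity": false,
    "OpcNodes": [
      {
        "Id": "ns=2;s=Demo.Temperature",
        "OpcSamplingInterval": 1000,
        "OpcPublishingInterval": 1000
      }
    ]
  }
]
EOF
```

When per-node values such as `OpcSamplingInterval` are omitted here, the defaults configured by `DefaultSamplingInterval` and `DefaultPublishingInterval` apply.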
## Next steps
-Further resources can be found in the GitHub repositories:
+
+For additional resources, go to the following GitHub repositories:
> [!div class="nextstepaction"] > [OPC Publisher GitHub repository](https://github.com/Azure/Industrial-IoT) > [!div class="nextstepaction"]
-> [IIoT Platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
+> [Industrial IoT platform GitHub repository](https://github.com/Azure/iot-edge-opc-publisher)
industrial-iot Tutorial Configure Industrial Iot Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-configure-industrial-iot-components.md
Title: Configure the Azure Industrial IoT components
-description: In this tutorial, you learn how to change the default values of the configuration.
+ Title: Configure Azure Industrial IoT components
+description: In this tutorial, you learn how to change the default values of the Azure Industrial IoT configuration.
Last updated 3/22/2021
-# Tutorial: Configure the Industrial IoT components
+# Tutorial: Configure Industrial IoT components
-The deployment script automatically configures all components to work with each other using default values. However, the settings of the default values can be changed to meet your requirements.
+The deployment script automatically configures all Azure Industrial IoT components to work with each other using default values. However, you can change the settings to meet your requirements.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Customize the configuration of the components
--
-Here are some of the more relevant customization settings for the components:
-* IoT Hub
- * Networking→Public access: Configure Internet access, for example, IP filters
- * Networking → Private endpoint connections: Create an endpoint that's not accessible
- through the Internet and can be consumed internally by other Azure services or on-premises devices (for example, through a VPN connection)
- * IoT Edge: Manage the configuration of the edge devices that are connected to the OPC
-UA servers
-* Cosmos DB
- * Replicate data globally: Configure data-redundancy
- * Firewall and virtual networks: Configure Internet and VNET access, and IP filters
- * Private endpoint connections: Create an endpoint that is not accessible through the
-Internet
-* Key Vault
- * Secrets: Manage platform settings
- * Access policies: Manage which applications and users may access the data in the Key
-Vault and which operations (for example, read, write, list, delete) they are allowed to perform on the network, firewall, VNET, and private endpoints
-* Microsoft Azure Active Directory (Azure AD)→App registrations
- * <APP_NAME>-web → Authentication: Manage reply URIs, which is the list of URIs that
-can be used as landing pages after authentication succeeds. The deployment script may be unable to configure this automatically under certain scenarios, such as lack of Azure AD admin rights. You may want to add or modify URIs when changing the hostname of the Web app, for example, the port number used by the localhost for debugging
-* App Service
- * Configuration: Manage the environment variables that control the services or UI
-* Virtual machine
- * Networking: Configure supported networks and firewall rules
- * Serial console: SSH access to get insights or for debugging, get the credentials from the
-output of deployment script or reset the password
-* IoT Hub → IoT Edge
- * Manage the identities of the IoT Edge devices that may access the hub, configure which modules are installed and which configuration they use, for example, encoding parameters for the OPC Publisher
-* IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher (for standalone OPC Publisher operation only)
-
-## Configuration via Command-line Arguments for OPC Publisher 2.8.2 and above
-
-There are [several Command-line Arguments](reference-command-line-arguments.md#opc-publisher-command-line-arguments-for-version-282-and-above) that can be used to set global settings for OPC Publisher.
-Refer to the `mode` part in the command line description to check if a Command-line Argument is applicable to orchestrated or standalone mode.
+> * Customize the configuration of Azure Industrial IoT components
+
+## Customization settings
+
+Here are some of the more relevant customization settings for the components.
+
+### IoT Hub
+
+* Networking (public access): Configure internet access (for example, IP filters).
+* Networking (private endpoint connections): Create an endpoint that's inaccessible through the internet but that can be consumed internally by other Azure services or on-premises devices (for example, through a VPN connection).
+* Azure IoT Edge: Manage the configuration of the edge devices that are connected to the OPC Unified Architecture (OPC UA) servers.
+
+### Azure Cosmos DB
+
+* Replicate data globally: Configure data redundancy.
+* Firewall and virtual networks: Configure internet and virtual network access, and IP filters.
+* Private endpoint connections: Create an endpoint that's inaccessible through the internet.
+
+### Azure Key Vault
+
+* Secrets: Manage platform settings.
+* Access policies: Manage which applications and users may access the data in the key vault and which operations (for example, read, write, list, delete) they are allowed to perform on the network, firewall, virtual network, and private endpoints.
+
+### Azure Active Directory app registrations
+
+* \<APP_NAME>-web (authentication): Manage reply URIs, which are the lists of URIs that can be used as landing pages after authentication succeeds. The deployment script might be unable to configure this automatically under certain scenarios, such as lack of Azure Active Directory (Azure AD) administrator rights. You might want to add or modify URIs when you're changing the hostname of the web app (for example, the port number that's used by the localhost for debugging).
+
+### Azure App Service
+
+* Configuration: Manage the environment variables that control the services or the user interface.
+
+### Azure Virtual Machines
+
+* Networking: Configure supported networks and firewall rules.
+* Serial console: Get Secure Shell (SSH) access for insights or for debugging, get the credentials from the output of deployment script, or reset the password.
+
+### Azure IoT Hub → Azure IoT Edge
+
+* Manage the identities of the IoT Edge devices that can access the hub. Also, configure which modules are installed and identify which configuration they use (for example, encoding parameters for OPC Publisher).
+
+### IoT Hub → IoT Edge → \<DEVICE> → Set Modules → OpcPublisher
+* This setting applies to *standalone* OPC Publisher operation only.
+
+## Command-line arguments for OPC Publisher version 2.8.2 and later
+
+To establish global settings for OPC Publisher, you can use any of [several command-line arguments](reference-command-line-arguments.md#command-line-arguments-for-version-282-and-later). To learn whether a particular argument applies to *standalone* or *orchestrated* mode, refer to the "Mode" designation in the argument **Description** column of the table.
## Next steps
-Now that you have learned how to change the default values of the configuration, you can
+
+Now that you've learned how to change the default values of the configuration, you can:
> [!div class="nextstepaction"]
-> [Pull IIoT data into ADX](tutorial-industrial-iot-azure-data-explorer.md)
+> [Pull Industrial IoT data into ADX](tutorial-industrial-iot-azure-data-explorer.md)
> [!div class="nextstepaction"]
-> [Visualize and analyze the data using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
+> [Visualize and analyze the data by using Time Series Insights](tutorial-visualize-data-time-series-insights.md)
iot-edge How To Access Dtpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-dtpm.md
A Trusted Platform Module (TPM) chip is a secure crypto-processor that is designed to carry out cryptographic operations. This technology provides hardware-based, security-related functions. The Azure IoT Edge for Linux on Windows (EFLOW) virtual machine doesn't have a virtual TPM attached to the VM. However, the user can enable or disable the TPM passthrough feature, which allows the EFLOW virtual machine to use the Windows host OS TPM. The TPM passthrough feature enables two main scenarios: -- Use TPM technology for IoT Edge device provisioning using Device Provision Service (DPS)
+- Use TPM technology for IoT Edge device provisioning using Device Provisioning Service (DPS)
- Read-only access to cryptographic keys stored inside the TPM. This article describes how to develop a sample code in C# to read cryptographic keys stored inside the device TPM.
The following steps show you how to create a sample executable to access a TPM i
1. In **Solution Explorer**, right-click the project name and select **Manage NuGet Packages**.
-1. Select **Browse** and then search for `Microsoft.TSS`.
+1. Select **Browse** and then search for `Microsoft.TSS`. For more information about this package, see [Microsoft.TSS](https://www.nuget.org/packages/Microsoft.TSS).
1. Choose the **Microsoft.TSS** package from the list then select **Install**.
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
EFLOW uses the [route](https://man7.org/linux/man-pages/man8/route.8.html) servi
>[!TIP] >The previous image shows the route command output with the two NICs assigned (*eth0* and *eth1*). The virtual machine creates two different *default* destination rules with different metrics. A lower metric value has a higher priority. This routing table will vary depending on the networking scenario configured in the previous steps.
-### Static routes fix
+### Static routes configuration
-Every time EFLOW VM starts, the networking services recreates all routes, and any previously assigned priority could change. To work around this issue, you can assign the desired priority for each route every time the EFLOW VM starts. You can create a service that executes every time the VM starts and use the `route` command to set the desired route priorities.
+Every time EFLOW VM starts, the networking services recreates all routes, and any previously assigned priority could change. To work around this issue, you can assign the desired priority for each route every time the EFLOW VM starts. You can create a service that executes on every VM boot and uses the `route` command to set the desired route priorities.
First, create a bash script that executes the necessary commands to set the routes. For example, following the networking scenario mentioned earlier, the EFLOW VM has two NICs (offline and online networks). NIC *eth0* is connected using the gateway IP xxx.xxx.xxx.xxx. NIC *eth1* is connected using the gateway IP yyy.yyy.yyy.yyy.
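Following the scenario above, a minimal sketch of such a script might look like the following. The script name, metric values, and interface assignments are illustrative assumptions; the gateway addresses are the placeholders from the scenario, which you'd replace with your real gateway IPs:

```shell
#!/bin/bash
# set-route-priorities.sh - illustrative sketch, not part of the original article.
# Re-adds the default routes with explicit metrics so the offline NIC (eth0)
# always takes priority over the online NIC (eth1). Lower metric = higher priority.

OFFLINE_GW="xxx.xxx.xxx.xxx"   # gateway reachable through eth0 (offline network)
ONLINE_GW="yyy.yyy.yyy.yyy"    # gateway reachable through eth1 (online network)

route add default gw "$OFFLINE_GW" dev eth0 metric 100
route add default gw "$ONLINE_GW" dev eth1 metric 200
```

To run the script on every boot, you could register it as a service, for example a systemd *oneshot* unit whose `ExecStart` points at the script, enabled with `systemctl enable`; the unit name and script path would be your choice and aren't prescribed by the article.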
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Azure IoT Edge for Linux on Windows emphasizes interoperability between the Linu
For samples that demonstrate communication between Windows applications and Azure IoT Edge modules, see [EFLOW GitHub](https://aka.ms/AzEFLOW-Samples).
+Also, you can use your IoT Edge for Linux on Windows device to act as a transparent gateway for other edge devices. For more information on how to configure EFLOW as a transparent gateway, see [Configure an IoT Edge device to act as a transparent gateway](./how-to-create-transparent-gateway.md).
+
## Support

Use the Azure IoT Edge support and feedback channels to get assistance with Azure IoT Edge for Linux on Windows.
iot-hub Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 07/26/2022 Last updated : 08/01/2022
key-vault Common Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md
Title: Common error codes for Azure Key Vault | Microsoft Docs description: Common error codes for Azure Key Vault -+ tags: azure-resource-manager
The error codes listed in the following table may be returned by an operation on
| Error code | User message | |--|--|
-| VaultAlreadyExists | Your attempt to create a new key vault with the specified name has failed since the name is already in use. If you recently deleted a key vault with this name, it may still be in the soft deleted state. You can verify if it existis in soft-deleted state [here](./key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) |
+| VaultAlreadyExists | Your attempt to create a new key vault with the specified name has failed since the name is already in use. If you recently deleted a key vault with this name, it may still be in the soft deleted state. You can verify if it exists in soft-deleted state [here](./key-vault-recovery.md?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault) |
| VaultNameNotValid | The vault name should be a string of 3 to 24 characters and can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-) | | AccessDenied | You may be missing permissions in the access policy to perform that operation. |
-| ForbiddenByFirewall | Client address is not authorized and caller is not a trusted service. |
-| ConflictError | You're requesting multiple operations on same item. |
-| RegionNotSupported | Specified azure region is not supported for this resource. |
-| SkuNotSupported | Specified SKU type is not supported for this resource. |
-| ResourceNotFound | Specified azure resource is not found. |
-| ResourceGroupNotFound | Specified azure resource group is not found. |
+| ForbiddenByFirewall | Client address isn't authorized and caller isn't a trusted service. |
+| ConflictError | You're requesting multiple operations on the same item, e.g., Key Vault, secret, key, certificate, or common components within a Key Vault like VNET. It's recommended to sequence ope